abstract
stringlengths 1
4.43k
| claims
stringlengths 14
189k
| description
stringlengths 5
1.46M
|
---|---|---|
A method and apparatus are disclosed for implementing early release of speculatively read data in a hardware transactional memory system. A processing core comprises a hardware transactional memory system configured to receive an early release indication for a specified word of a group of words in a read set of an active transaction. The early release indication comprises a request to remove the specified word from the read set. In response to the early release request, the processing core removes the group of words from the read set only after determining that no word in the group other than the specified word has been speculatively read during the active transaction. |
WHAT IS CLAIMED IS: 1. An apparatus comprising: a hardware transactional memory system configured to receive an early release indication for a specified word of a group of two or more words in a read set of an active atomic memory transaction attempt, wherein: the hardware transactional memory system is configured to remove the group of words from the read set in response to: receiving the early release indication and determining that no word in the group other than the specified word has been speculatively read during the active atomic memory transaction attempt. 2. The apparatus of claim 1, wherein the hardware transactional memory system is configured to add the group of words to the read set of the active atomic memory transaction attempt in response to the processing core speculatively reading any one of the group of words, wherein the one word is first in the group to have been speculatively read as part of executing the active atomic memory transaction attempt. 3. The apparatus of claim 2, wherein the group of words corresponds to a single cache line in a cache of the processing core; and adding the group of words to the read set comprises setting a flag associated with the cache line to indicate that the group of words in the cache line is in the read set of the active transaction. 4. The apparatus of claim 2, wherein adding the group of words to the read set comprises setting a flag at a given index of a storage array, wherein for each word in the group of words, the index corresponds to a result of applying a given hash function to a respective address in a shared memory at which the word is stored. 5. The apparatus of claim 2, wherein adding the group of words to the read set of the active atomic memory transaction attempt comprises recording an indication of the one word in the group that was first to be speculatively read within the active transaction, and wherein the indication of the one word comprises an index indicating a position of the one word in an ordering of the group of words. 6. The apparatus of claim 5, wherein the hardware transactional memory is further configured to respond to the processing core reading another word in the group subsequent to reading the one word by: determining whether the one word and the another word correspond to the same memory address; and in response to determining that the one word and the another word do not correspond to the same memory address, recording an indication that multiple words in the group have been speculatively read. 7. The apparatus of claim 1, wherein the early release indication specifies multiple words of the group to remove from the read set and wherein the hardware transactional memory system is further configured to remove the specified multiple words from the read set based, at least in part, on a determination that no word in the group other than the specified multiple words has been speculatively read during the active transaction. 8. A method comprising: a hardware transactional memory system receiving an early release indication for a specified word of a group of two or more words in a read set of an active atomic memory transaction attempt; and responsive to no other word in the group having been speculatively read during the active atomic memory transaction attempt, the hardware transactional memory system removing the group of words from the read set of the active atomic memory transaction attempt. 9. 
The method of claim 8, further comprising before receiving the early release indication, adding the group of words to the read set of the active atomic memory transaction attempt in response to a processing core speculatively reading one of the group of words, wherein the one word is the first in the group to have been speculatively read as part of executing the active transaction. 10. The method of claim 9, wherein the group of write set words corresponds to a single cache line of a cache, and wherein adding the group of words to the read set comprises setting a flag associated with the cache line to indicate that the group of words in the cache line is in the read set of the active transaction. 1 1. The method of claim 9, wherein adding the group of words to the read set comprises setting a flag at a given index of a storage array, wherein for each word in the group of words, the index corresponds to a result of applying a given hash function to a respective address in a shared memory at which the word is stored. 12. The method of claim 9, wherein adding the group of words to the read set of the active atomic memory transaction attempt comprises recording an indication of the one word in the group that was first to be speculatively read within the active transaction. 13. The method of claim 12, wherein the indication of the one word comprises an index indicating a position of the one word in an ordering of the group of write set words. 14. The method of claim 12, further comprising: in response to another word in the group being speculatively read subsequent to the one word being read and determining that the one word and the another word do not correspond to the same memory address: recording an indication that multiple words in the group have been speculatively read. 15. A computer readable storage medium comprising a data structure which is operated upon by a program executable on a computer system, the program operating on the data structure to perform a portion of a process to fabricate an integrated circuit including circuitry described by the data structure, the circuitry described in the data structure including: a hardware transactional memory system configured to receive an early release indication for a specified word of a group of two or more words in a read set of an active atomic memory transaction attempt, wherein: the hardware transactional memory system is configured to remove the group of words from the read set in response to: receiving the early release indication and determining that no word in the group other than the specified word has been speculatively read during the active atomic memory transaction attempt. |
PREVENTING UNINTENDED LOSS OF TRANSACTIONAL DATA IN HARDWARE TRANSACTIONAL MEMORY SYSTEMS BACKGROUND Hardware Transactional Memory (HTM) is a mechanism in computer architecture for supporting parallel programming. With HTM, programmers may simply declare a group of instructions as being part of a single speculative region and the HTM hardware may then guarantee that the instructions in the region are executed as a single atomic and isolated transaction. Atomicity means that all the instructions of the transaction are executed as a single atomic block with respect to all other concurrent threads of execution on one or more other processing cores in the system. Isolation means that no intermediate result of the transaction is exposed to the rest of the system until the transaction completes. HTM systems may allow transactions to run in parallel as long as they do not conflict. Two transactions may conflict when they both access the same memory area (e.g., 8-byte word address) and either of the two transactions writes to that memory area (e.g., addressable 8-byte word). To detect data conflicts between threads of execution, an HTM may keep track of which memory areas have been read from and/or written to speculatively during a transaction execution attempt. Memory areas that the processor tracks for the purposes of detecting data conflicts may be referred to as the read set (addresses the processor tracks as having been read) and the write set (addresses the processor tracks as having been modified) of the transaction. The read and write sets are often buffered in a speculative buffer, which is a logical entity that may be implemented by cache, load/store queue, both, and/or other hardware components. Since HTM systems are implemented in hardware and therefore have limited resources with which to track read and write sets, various techniques have been used to make efficient use of HTM resources. One set of techniques is word grouping, whereby an HTM may group multiple addressable words and track them together in the read and/or write sets. For example, in response to detecting a speculative access to a given word in an active transaction, an HTM may mark the entire cache line in which the word resides (which holds multiple different addressable words) as being in the read or write set of the transaction. Marking a cache line may be done by setting a flag associated with the cache line. Thus, the HTM adds all the words in the cache line to the read or write set, even though only a single word in the group was speculatively accessed. Another technique for making efficient use of HTM resources is early release. In this technique, the programmer is permitted to explicitly release an addressable word from the read set of an active transaction (e.g., using an explicit RELEASE instruction). For example, if a given transaction does not rely on a given read value beyond a certain point in the transaction, the programmer may explicitly remove from the read set, the memory address from which the value was read. Using such techniques, HTM designers have been able to increase the capacity of their systems and allow programmers to express and execute larger and more complex atomic memory transactions using HTM. SUMMARY OF EMBODIMENTS A method and apparatus are disclosed for implementing early release of speculatively read data in a hardware transactional memory system. 
A processing core comprises a hardware transactional memory system configured to receive an early release indication, such as an early release instruction, for a specified word of a group of words in a read set of an active transaction. The early release indication comprises a request to remove the specified word from the read set. In response to the early release indication, the processing core removes the group of words from the read set only after determining that no word in the group other than the specified word has been speculatively read during the active transaction. In some embodiments, the group of words may be tracked by the hardware transactional memory system together, such that all words in the group are added to the read set of an active transaction if any one of the group of words is speculatively read during an active transaction. For example, in some embodiments, the group of words may correspond to a single cache line in a cache of the processing core and added to the read set together by setting a flag associated with the cache line. In other embodiments, the group may be tracked using an index in a Bloom filter and added to the read set by setting a flag at that Bloom filter index. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram illustrating a system configured to implement hardware transactional memory with word grouping and early release functionality, according to some embodiments. FIG. 2 is a block diagram illustrating the components of a speculative region of code, according to some embodiments. FIG. 3 is a flow diagram illustrating a high-level method of executing an atomic memory transaction using word grouping and early release techniques in a manner that avoids unintentional data release, according to some embodiments. FIG. 4a is a block diagram of a cache-based speculative buffer equipped to support correct coexistence of word grouping and early release mechanisms, according to some embodiments. FIG. 4b is a block diagram of a Bloom filter based speculative buffer equipped to support correct coexistence of word grouping and early release mechanisms, according to some embodiments. FIG. 5 is a flow diagram illustrating a method for obviating an early release operation in response to determining that multiple words in a given cache line have been speculatively read by an active transaction, according to some embodiments. FIG. 6 is a block diagram of the components of an HTM configured to support word grouping with early release, according to some embodiments. FIG. 7 is a block diagram illustrating a computer system configured to implement hardware transactional memory with word grouping and early release mechanisms as described herein, according to various embodiments. DETAILED DESCRIPTION This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure. Terminology. The following paragraphs provide definitions and/or context for terms found in this disclosure (including the appended claims): "Comprising." This term is open-ended. As used in the appended claims, this term does not foreclose additional structure or steps. 
Consider a claim that recites: "An apparatus comprising one or more processor units Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.). "Configured To." Various units, circuits, or other components may be described or claimed as "configured to" perform a task or tasks. In such contexts, "configured to" is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs those task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the "configured to" language include hardware— for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 1 12, sixth paragraph, for that unit/circuit/component. Additionally, "configured to" can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in manner that is capable of performing the task(s) at issue. "Configure to" may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks. "First," "Second," etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, in a processor having eight processing elements or cores, the terms "first" and "second" processing elements can be used to refer to any two of the eight processing elements. In other words, the "first" and "second" processing elements are not limited to logical processing elements 0 and 1. "Based On." As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase "determine A based on B." While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B. HTM systems are implemented in hardware and therefore have limited resources with which to track read and write sets. Consequently, various techniques have been used to make efficient use of these resources. For example, such techniques include tracking addressable words as groups (i.e., word grouping), allowing a program to release a specified address from a read set (i.e., early release), and other techniques. In word grouping, the HTM may enter an entire group (e.g., cache line) into a read or write set if any word in the group is accessed. Subsequently, the HTM may detect conflicts at the granularity of an entire group rather than at the granularity of each word. For example, each cache line may include or be associated with a flag indicating if it contains speculatively accessed data. 
Early release may allow a programmer to explicitly remove a specified word from a read set of an active transaction, such as by explicitly executing a RELEASE instruction (as used herein, a transaction is active if it has been initiated but not yet committed). A traditional early release system in the example above, may release a given word by unsetting the flag on the cache line, thereby removing all words in the cache line from the read set. However, traditional systems are unable to guarantee correctness when implementing both word grouping and early release optimizations. For example, if an active transaction reads and relies on two distinctly addressable words that happen to be in the same group, then a traditional word grouping system places both words (and all others in their group) in the read set of the transaction. However, if an early release mechanism is used to subsequently remove one of the two words from the read set of the active transaction, then the traditional system would release the entire group of words, including the other word. However, releasing the entire group of words in this instance is not the intent of the programmer. Rather, the programmer intended to release only the specified word. Accordingly, the traditional HTM may not detect subsequent data conflicts concerning the other word and incorrect program behavior may result (e.g., violation of atomicity). This phenomenon may be referred to herein as unintentional data release. In some embodiments, an HTM may be configured to implement correct coexistence between word grouping and early release techniques. Such an HTM may track whether multiple words in a given group have been read by an active transaction and condition any early release of a word in the given group on a determination that no other word in the group has been accessed by the transaction. Thus, the HTM may avoid unintentionally releasing one or more words from an active transaction's read set. FIG. 1 is a block diagram illustrating a system configured to implement hardware transactional memory with word grouping and early release functionality, according to some embodiments. According to the illustrated embodiment, system 100 includes multiple processors, including processor 105 and other processors ) 130. As used herein, the term processor refers to a processing core configured to execute computer program instructions. Therefore, the term processor may refer to a physical or logical (e.g., symmetric multi-threading) processing core on a dedicated chip or on a chip that includes one or more other processing cores (e.g., chip multi-processor). In the latter case, processors 105 and 130 may exist on the same chip and be connected by an on-chip network rather than a system bus such as 150. Although the general term "processor" is used to describe the embodiments herein, the term itself is not meant to limit embodiments to particular arrangements of processing cores or their distribution on one or more chips. As illustrated in FIG. 1, processor 105 comprises HTM mechanisms 1 10, which includes one or more hardware units configured to detect and/or to execute speculative regions of code as isolated, atomic transactions. These HTM mechanisms may include various components used by the processor to maintain correct program behavior while performing word grouping and early release operations. 
As shown in the illustrated embodiment, processor 105 may also include any number of registers 120, which may be implemented as a microarchitectural register file, and one or more local data caches 125 (e.g., LI cache). Data caches 125 may cache data from shared memory 140 for quick access by processor 105. In embodiments where data cache(s) 125 include multiple caches, those caches maybe be configured to function as a cache hierarchy. Processor 105 and/or data caches 125 may include cache coherence mechanisms configured to communicate with other processors (e.g., 130) to maintain a consistent view of memory in the presence of separate private caches used by different processors. In embodiments where processor 105 includes multiple processing cores, one or more of caches 125 may be shared by various ones of these processing cores. In some embodiments, data cache 125 and/or shared caches 135 may be arranged as multiple blocks (i.e., cache lines), each usable to store a block of sequentially addressed words (e.g., 8-byte words) from shared memory 140. In some embodiments, different instructions may address one or more of these words. For example, an early release instruction may address a given word in data cache 125 and indicate that it is to be released from the read set of an active transaction. In different embodiments, data cache 125 and/or shared caches 135 may be managed according to different set-associativity and/or eviction policies for handling overflow of buffered data. According to the illustrated processors 105 and 130 are connected via bus 150 to each other, to shared memory 140, and to any number of shared data caches 135. As used herein, the term memory hierarchy refers to a system's shared memory and the series of caches (i.e., cache hierarchy) used by a given processor to store data. In some embodiments, processors 105 and 130 may utilize bus 150 to communicate messages to one another, such as cache coherence messages as part of a cache coherence protocol (e.g., MESI, MOESI). In such embodiments, multiple processors, such as 105 and 130, may maintain a consistent view of shared memory data cached in their respective caches. In some embodiments, processor 105 may implement a speculative data buffer to support HTM transactions. In different embodiments, such a buffer may be implemented in whole or in part using data caches 125, a Bloom filter unit, and/or other mechanisms. For example, in some embodiments, HTM mechanisms 110 may associate various fields with each line in data cache 125 and use the fields to indicate whether any word in that line is in the read or write set of the active transaction. In other embodiments, HTM mechanisms 1 10 may use a Bloom filter unit to keep track of read and/or write sets. In such embodiments, when a word is accessed speculatively from within a transaction, the HTM system may map the address of the word to an index of a special-purpose array (e.g., a read set Bloom filter), such as by applying a hash function to the address to derive the index. The HTM may then set a flag at the derived index to indicate that the word at the given address has been speculatively read (or written in the case of a write set Bloom filter). Although Bloom filter implementations that utilize a single array and hash function are discussed herein, one of ordinary skilled in the art will appreciate that in different embodiments, Bloom filter implementations may utilize multiple hash functions and/or arrays. 
The reader should note that the use of a Bloom filter to track a read set is a form of word grouping since the respective addresses of different words can map to the same Bloom filter index. FIG. 2 is a block diagram illustrating the components of a speculative region of code, according to some embodiments. According to the illustrated embodiment, speculative region 200 begins with a transaction start indication 205, which is followed by a transaction body 210 of one or more instructions, and ends with a transaction commit indication 215. In some embodiments, transaction start indication may comprise a special- purpose instruction indicating the start of a speculative region. For example, the start indication 205 may include a SPECULATE instruction indicating the start of a speculative region. In other embodiments, the start indication may correspond to a general-purpose instruction, such as lock acquisition, that may be indicative of a speculative region of code. After a transaction is started but before it is committed (as in 215), the transaction is said to be active. It should be noted that due to conflicts and/or other abort conditions, an active transaction may be reattempted multiple times before eventually succeeding. Transaction body 210 may include one or more program instructions, which may include one or more memory operations. In some embodiments, transaction body 210 may include a first subset of memory operations that are designated as part of the transaction and a second subset of memory operations that are designated as not part of the transaction. In such instances, the HTM may be configured to execute transactionally only those instructions designated as part of the transaction and to provide no such atomicity or isolation guarantees for the other instructions in the body. In such execution, the memory addresses accessed as a result of executing speculative instructions may be tracked as part of the active transaction's read or write set as appropriate. As indicated in the illustrated embodiment, speculative region 200 may include a commit indication (e.g., 215) indicating the end of the speculative region started by start indication 205. In some embodiments, the commit indication may comprise a special-purpose COMMIT instruction. In other embodiments, the commit indication of 215 may correspond to a general-purpose instruction, such as a release of a lock acquired earlier, such as in start indication 205. FIG. 3 is a flow diagram illustrating a high-level method of executing an atomic memory transaction using word grouping and early release techniques in a manner that avoids unintentional data release, according to some embodiments. The method of FIG. 3 may be executed by a processor that includes HTM mechanisms, such as processor 105. According to the illustrated embodiment, method 300 begins when the processor enters a speculative execution mode, as in 305. As described above, the processor may enter such a mode in response to executing a SPECULATE instruction indicating the start of a transactional region of code. As part of the transaction, the processor may speculatively load a word from shared memory, as in 310. This word may be loaded directly by a load instruction or indirectly as part of executing some other instruction. Since the data was speculatively read from within the transaction, the processor adds the speculatively loaded word (and its whole group) to the read set of the transaction, as in 315. 
In different embodiments, adding the word group to the read set may include setting a flag in the cache line to which the word was loaded. This effectively adds all the other words in the cache line (i.e., in the group) to the read set. In other embodiments, adding the group to the read set in 315 may comprise setting a bit in a Bloom filter at an index corresponding to the address from which the word was speculatively loaded. For example, in 315, the HTM may derive the index by applying a hash function to the address of the speculatively loaded word. However, by setting the bit at the derived index in 315, the system essentially adds to the read set every word whose address hashes to that index value. Therefore, even if only one such address was actually speculatively read, setting a bit in a Bloom filter essentially adds a group of words to the read set being tracked by the HTM. According to the illustrated embodiment, the processor may subsequently detect a RELEASE instruction, which specifies a word to release from the read set, as indicated by the affirmative exit from 320. In this case, the HTM may determine the group to which the specified word belongs (e.g., cache line, Bloom filter index, etc.) and determine if more than one word in the group was speculatively read, as in 325. The HTM may make this determination as described in more detail below. If the specified word for release is the only speculatively read word in the group, as indicated by the negative exit from 325, then the HTM may remove the RELEASE-specified word from the read set, as in 330. In various embodiments, removing the RELEASE-specified word may include unsetting a flag associated with the speculatively read word and/or with its group. However, if the HTM determines that multiple words in the group of the specified word were speculatively read in the active transaction, as indicated by the affirmative exit from 325, then the HTM does not remove the specified word from the read set. The reader should note that in some embodiments, if multiple words in the group were speculatively read, but each one was successfully released from the transaction read set before the next word in the group was read, then the decision of 325 would be decided negatively. For example, in such embodiments, if a first word in a group is read and then released, and subsequent to that release a second word in the group is read, then an attempt to release the second word (as in 320) would result in a negative determination in 325. Once a word is released from the read set, the word may no longer cause data conflicts for the transaction. That is, if another thread writes to the address of the released word, this would not necessarily trigger a transaction abort. According to the illustrated embodiment, as the transaction continues, if an abort condition is detected (as in 335) then the HTM may abort the transaction attempt (as in 340) and reattempt the transaction (as indicated by the feedback loop from 340 to 310). In some embodiments, reattempting the transaction may comprise issuing an explicit restart instruction. Aborting the transaction may comprise removing all words from the read and write sets, rolling back and/or dropping all memory modifications attempted by the transaction attempt, modifying the instruction pointer to point to the beginning of the transactional region, and/or other implementation-specific steps. 
While the transactional region contains more instructions that have not yet been executed, as indicated by the affirmative exit from 345, the processor continues execution. When program control reaches the end of the transactional region, as indicated by the negative exit from 345, the HTM may commit the results of the transaction, as in 350. Committing the results in 350 may include flushing the results of the transaction (e.g., values in the write set) to memory, which may be done differently in different embodiments (e.g., redo versus undo models). According to some embodiments, various flags may be added to speculative buffering mechanisms so that the processor may track whether multiple words in a given group have been speculatively read. FIG. 4a and FIG. 4b present two embodiments of speculative buffering mechanisms that include respective flags for this purpose. FIG. 4a is a block diagram of a cache-based speculative buffer equipped to support correct coexistence of word grouping and early release mechanisms, according to some embodiments. In some embodiments, data cache 400 of FIG. 4a may correspond to a private LI data cache, such as data cache 125 of FIG. 1. In the illustrated embodiment, data cache 400 comprises multiple cache lines, such as cache line 405. Each line may be used to store a sequence of multiple words of data from a given region of memory. For example, data 425 of cache line 405 may comprise 64 or 128 bytes of storage space. Therefore, it may be used to store 8 or 16 addressable 8-byte words from memory. For example, if data 425 is 64-bytes long, then the shared memory may be divided into 64-byte blocks (i.e., word groups) and in response to reading any word in the block, the system may copy the entire block into a line of data cache 400 (e.g., to data 425 in cache line 405). The line may be tagged (e.g., using address tag 410) to indicate which memory block is currently being stored in the cache line. As illustrated in FIG. 4a, cache line 405 may also include various cache coherence fields 415 usable to maintain coherence with data cached by other processors, and speculative buffer fields, such as 430 and 435. For example, in some embodiments, SW field 430 may be set to indicate that at least some data in 425 has been speculatively written by an active transaction. Likewise, SR field 435 may be usable to indicate that at least some data in 425 has been speculatively read by an active transaction. Thus, the HTM may use these fields when detecting data conflicts. According to the illustrated embodiment, cache line 405 also includes UL field 440 and FA field 445. In some embodiments, these fields may be usable by the HTM (as in 325 of FIG. 3) to determine if more than one word in the cache line (e.g., data 425) has been speculatively read as part of an active transaction. In some embodiments, FA field 445 may be used to store an indication of the word in cache line 405 (i.e., in data 425) that was first speculatively read during the active transaction. In various embodiments, this indication may be the address of that particular word, an index into the cache line corresponding to that word (e.g., "the 5th word in the block"), or another identifier uniquely distinguishing the first speculatively read word from among the others. For example, in some embodiments, if cache line 405 is 64-bytes wide (i.e., holds eight uniquely addressable 8-byte words), then FA field 445 may include three bits, which are usable to indicate eight unique indices. 
The value of the FA field may thus identify which of the eight words stored in the cache line was the first to be speculatively read. In some embodiments, the logic for adding a word to the read set may include determining whether the SR field 435 is already set, and if not, setting SR field 435 to a value indicating that the cache line is in the read set of the transaction and setting FA field 445 to a value indicating the word. In some embodiments, when the SR field is cleared, the FA and/or UL fields may be cleared as well. In some embodiments, UL field 440 may be usable to indicate whether multiple words in cache line 405 have been speculatively read by the active transaction. For example, logic for adding a word to the read set may include determining whether the SR field 435 is already set and if so, determining whether the FA field indicates the word. If not, then the word is not the first word in the group to be added to the read set and the processor may set the UL field 440 to a value indicating so. In some embodiments, UL field 440 may comprise a single bit, which may be set or unset to indicate whether multiple words are in the read set. The HTM may thus use the combination of UL and FA fields to track and determine whether multiple words in a given cache line (i.e., word group) have been speculatively read by an active transaction. FIG. 5 is a flow diagram illustrating a method for obviating an early release operation in response to determining that multiple words in a given cache line have been speculatively read by an active transaction, according to some embodiments. Method 500 may be executed by a processor that supports both early release and word grouping as described herein. According to the illustrated embodiment, method 500 begins when the processor enters a speculative execution mode, as in 505. The processor then speculatively reads a given word, as in 510, as part of the transaction. In response to speculatively reading the word, the memory subsystem may copy the block in which the word resides to a cache line if it is not already there. Furthermore, the HTM adds the cache line to the read set of the transaction, as in 515. In some embodiments, adding the cache line to the read set may comprise setting the SR bit of the cache line and thus effectively adding the entire block stored in the cache line to the read set. After, or as part of, adding the cache line to the transaction read set, as in 515, the HTM may set the FA flag to a value indicating the given word, as in 520. For example, if the word read in 510 is the third word in the block according to address order, then in 520, the FA flag may be set to the binary value 010, indicating the third index according to a 0-indexing scheme. Various other identifier schemes are possible. According to the illustrated embodiment, subsequent to speculatively reading the given word (510) and adding the cache line to the read set (515), the processor speculatively reads another word from the same, as in 525. In response to the processor reading the other word, the HTM determines that the SR field of the cache line containing the word is already set and in response, reads the FA field of the cache line (as in 530) to determine whether the other word is the same as the first word in the cache line to be read speculatively. For example, if the FA field indicates the index of the given word within the block, the HTM may determine whether the other word has the same index. 
If the indices are different (as in 530), then the given word and the other word are not the same. In response to determining that the other speculatively read word is not the same as the one read in 510, the HTM sets the UL flag of the cache line, as in 535. Setting the UL flag indicates that multiple words in the cache line have been speculatively read within the current transaction. A set UL flag may indicate to the HTM that an unintended loss of transactional data may occur if the words in the cache line were all released in response to an early release instruction that is intended to target only a subset of the words. In 540, the processor detects a RELEASE instruction specifying that the given speculatively read word is to be released from the read set. The RELEASE instruction of this example corresponds to an early release mechanism. In response to the RELEASE instruction, the HTM locates the cache line of the given word and checks the UL flag of the cache line, determining that the UL flag is set, as in 550. Since the UL flag is set, the HTM determines that multiple words in the cache line have been read speculatively and that unintended and/or incorrect behavior may result if the entire cache line were released. Accordingly, in 555, the HTM prevents the early release functionality indicated by the RELEASE instruction. Since the early release is prevented, the HTM does not clear the SR flag of the cache line, the words in the cache line remain in the read set, and the HTM continues to check for data conflicts involving those words. Although example of FIG. 5 includes a RELEASE instruction targeting the first word to be speculatively written, the system behavior may be analogous had the RELEASE instruction targeted the other word. Thus, in some embodiments, once at least two words in the same cache line (i.e., word group) are speculatively read, the words in that cache line may no longer be eligible for early release. In various embodiments, unintended loss fields, such as UL and FA fields 440 and 445, may be adapted to enable word grouping with early release for speculative buffers implemented with mechanisms other than caches. For example, FIG. 4b is a block diagram illustrating a Bloom filter with unintended loss fields that enable safe coexistence of word grouping and early release, according to some embodiments. As described above, another word grouping method by which an HTM may track a read set is by using a hardware-implemented Bloom filter. FIG. 4b is a block diagram of a Bloom-filter-based speculative buffer equipped to support correct coexistence of word grouping and early release mechanisms, according to some embodiments. In some such embodiments, when a word is speculatively read, the HTM may determine an index for the word, such as by applying a hash function to the address of the word. The HTM may then set a flag at the index value of a special-purpose array, such as Bloom filter 450, to indicate that the word has been speculatively read. When the processor receives probes indicating that another thread has accessed a given memory address, the processor may apply the hash function to the received memory address to determine an index and check whether a flag in the Bloom filter is set at that index. In some embodiments, the HTM may associate unintended loss fields (such as 440 and 445) with each index of a read set Bloom filter. 
For example, each field of Bloom filter 450 (such as indices 460 and 470) is associated with respective unintended loss fields (such as 462 and 472). In some embodiments, these fields may be part of the Bloom filter itself, contained in a separate hardware structure, or otherwise associated with respective indices of the Bloom filter. According to the illustrated embodiment, each of the unintended loss fields 462 and 472 comprise respective UL fields (464 and 474) and FA fields (466 and 476). These fields may be analogous to fields 440 and 445 in the data cache implementation. For example, if a processor speculatively reads a word at a given address, where the address hashes to index 460, the HTM may determine whether the value at index 460 indicates that this address is already in the read set. If the value at index 460 does not indicate that this address is already in the read set, the HTM may set index 460 to such a value and set FA field 466 to a value identifying the given address. As with the data cache implementation, FA field 466 may hold the address itself, an offset into the group of addresses that map to index 466, or another identifier. For example, if the hash function used is a modulo function, then one method of deriving an identifier for the given address may be to shift the address by log2(M) bits, where M is the number of Bloom filter indices. On the other hand, if the value in index 460 indicates that the address is already in the read set, the HTM may compare the identifier of the address with that stored in FA field 466. If the two values are not the same, then the processor has read at least two different words in the same group and the HTM may respond by setting the UL field 464 to a value indicating a danger of unintended loss from early release. As with the data cache implementation, the HTM may prevent early release operations targeting index 460 if the UL field 464 indicates that multiple words in the group have been speculatively read. FIG. 6 is a block diagram of the components of an HTM configured to support word grouping with early release, according to some embodiments. HTM 110 of FIG. 6 may correspond to HTM mechanisms 110 of FIG. 1. According to the illustrated embodiment, HTM 1 10 may include a speculative buffer, such as 605, that supports both word grouping and early release. In some embodiments, speculative buffer 605 may be implemented using cache (e.g., data cache 400), using a Bloom filter (e.g., Bloom filter 450), and/or using other mechanisms. In some embodiments, speculative buffer 605 may include fields usable to indicate that at least one of a group of words in shared memory is in the read set of a transaction (e.g., SR field 435, index 460, etc.). In some embodiments, speculative buffer 605 may also include fields usable to indicate whether multiple words in a given group have been speculatively read within an active transaction (e.g., UL fields 440, 464, 474, etc.). In the illustrated embodiment, HTM 1 10 also includes word grouping logic 610 configured to determine and set appropriate flags in speculative buffer 605 when a given word is speculatively read. For example, if speculative buffer 605 is implemented as data cache 400 in FIG. 4a, then word grouping logic 610 may be configured to read and set the SR, UL, and FA fields corresponding to a given address in response to the word at that address being speculatively read. According to the illustrated embodiment of FIG. 
6, HTM 1 10 may also include early release logic 615, which may be configured to implement word grouping aware early release functionality. For example, early release logic 615 may be configured to execute a RELEASE instruction specifying a memory address to release from a read set of an active transaction, as described herein. Logic 615 may also be configured to determine the group of the specified address and to prevent such functionality in response to determining that multiple words in the group have been speculatively read by the active transaction. For example, early release logic 615 may determine if more than one word in the group has been read, at least in part, by reading an appropriate UL field in speculative buffer 605. Although speculative buffer 605, word grouping logic 610, and early release logic 615 are shown in FIG. 6 as distinct components, in various embodiments, they may be combined and/or further divided. In some embodiments, HTM 110 may further include various other components, such as cache coherence mechanisms, abort logic, commit logic, out-of-order execution support, etc. FIG. 7 is a block diagram illustrating a computer system configured to implement hardware transactional memory with word grouping and early release mechanisms as described herein, according to various embodiments. The computer system 700 may correspond to any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, a peripheral device such as a switch, modem, router, etc, or in general any type of computing device. Computer system 700 may include one or more processors 760, any of which may include multiple physical and/or logical cores. Processors 760 may include respective mechanisms to implement HTM with word grouping and early release mechanisms as described herein, such as mechanisms 770. For example, in some embodiments, HTM 770 may include an appropriate speculative buffer, such as data cache 400 and/or Bloom filter 450. Computer system 700 may also include one or more persistent storage devices 750 (e.g. optical storage, magnetic storage, hard drive, tape drive, solid state memory, etc), which may persistently store data. According to the illustrated embodiment, computer system 700 may include one or more shared memories 710 (e.g., one or more of cache, SRAM, DRAM, RDRAM, EDO RAM, DDR 10 RAM, SDRAM, Rambus RAM, EEPROM, etc.), which may be shared between multiple ones of processors 760. The one or more processors 760, the storage device(s) 750, and the shared memory 710 may be coupled via interconnect 740. In various embodiments, the system may include fewer or additional components not illustrated in FIG. 7 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, a network interface such as an ATM interface, an Ethernet interface, a Frame Relay interface, monitors, keyboards, speakers, etc.). In some embodiments, shared memory 710 may store program instructions 720, which may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or in any other language such as C/C++, Java™, etc. or in any combination thereof. 
Program instructions 720 may include program instructions to implement one or more multi-threaded applications 722, which include speculative sections of code to be executed respectively as single atomic transactions. In some embodiments, program instructions 720 may also include instructions executable to implement an operating system 724 that provides software support for executing applications 722 (e.g., scheduling, software signal handling, etc.). According to the illustrated embodiment, shared memory 710 may include shared data 730, which may be accessed by multiple ones of processors 760. Ones of processors 760 may cache various components of shared data 730 in local caches, and coordinate the data in these caches by exchanging messages according to a cache coherence protocol, as described herein. Program instructions 720, such as those used to implement multithreaded applications 722 and/or operating system 724, may be stored on a computer-readable storage medium. A computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The computer-readable storage medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; electrical, or other types of medium suitable for storing program instructions. A computer-readable storage medium as described above may be used in some embodiments to store instructions read by a program and used, directly or indirectly, to fabricate the hardware comprising one or more of processors 760. For example, the instructions may describe one or more data structures describing a behavioral-level or register-transfer level (RTL) description of the hardware functionality in a high level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool, which may synthesize the description to produce a netlist. The netlist may comprise a set of gates (e.g., defined in a synthesis library), which represent the functionality of processor 500. The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to processor 500. Alternatively, the database may be the netlist (with or without the synthesis library) or the data set, as desired. Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. 
Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims. |
Techniques for adaptive backchannel equalization. A total equalization value is determined over a preselected training period. A total balance equalization value is determined over the preselected training period. A transmitter equalization coefficient is determined based on the total equalization value and the total balance equalization value. Data is transmitted over a serial link using the transmitter equalization coefficient. |
1.A method for adaptive reverse channel equalization, comprising:Determining the total equilibrium value during training;Determining the total equilibrium equilibrium value during the pre-selected training period;Determining a transmitter equalization coefficient of the transmitter based on the total equalization value and the total balanced equalization value;Determining a continuous time linear equalizer (CTLE) peak setting of the receiver, wherein the receiver is coupled to the transmitter;Transmitting data over a serial link based on the transmitter equalization coefficient and the CTLE peak setting,Wherein determining the CTLE peak setting comprises determining a CTLE peak change by:Increasing the CTLE peak if the first decision feedback equalizer (DFE) tap exceeds the first preselected percentage of maximum capability;If the second DFE tap has an opposite amplitude to the first DFE tap and exceeds a second preselected percentage of the first DFE first tap amplitude, the CTLE peak is reduced.2.The method of claim 1 wherein determining a total equalization value during training comprises:Determining a total equalization detection value for the interval for each interval during the training period;The total equilibrium values of the plurality of intervals are summed to determine a total equilibrium value during the training period.3.The method of claim 2, wherein the total equalization detection value of the interval is a positive value if a middle bit in the multi-bit mode exceeds a reference voltage value.4.The method of claim 2, wherein the total equalization detection value of the interval is a negative value if the intermediate bit in the multi-bit mode is less than the reference voltage value.5.The method of claim 1 wherein said transmitter equalization coefficients are determined by detecting a total equalization and balance equalization, and said method further comprising:Determining the transmitter equalization coefficient to increase the coefficient change (Δcpl);Determining the transmitter equalization coefficient pre-shoot coefficient change (Δcml);Determine the main tap change value (Δc0), where Δcpl, Δcml, and Δc0 are calculated using the following formula:Δ1=sign(teq_total)Δ2=sign(beq_total)Δcpl+Δcml=Δ1Δcpl-Δcml=Δ2→Δcpl=sign(Δ1+Δ2)Δcml=sign(Δ1-Δ2)Δc0=-(Δcpl+Δcml)Where teq_total includes the total equilibrium value, and beq_total includes the total equilibrium equilibrium value.6.The method of claim 1 wherein determining a balance equalization value during the pre-selected training comprises:Determining a balanced equalization detection value of the interval for each interval during the preselected training period;A plurality of spaced equilibrium equalization values are summed to determine a balance equalization value during the preselected training period.7.The method of claim 6, wherein if the first intermediate bit of the two intermediate bits exceeds the reference voltage value in the multi-bit mode and the second intermediate bit of the two intermediate bits is less than the reference voltage, The balanced balance equalization detection value is a positive value.8.The method of claim 6, wherein if the first intermediate bit of the two intermediate bits is less than the reference voltage value in the multi-bit mode and the second intermediate bit of the two intermediate bits exceeds the reference voltage, The balanced balance equalization detection value is a negative value.9.The method of claim 1 wherein said first preselected percentage and said second preselected 
percentage are programmable.10.The method of claim 1 wherein said serial link comprises a Peripheral Component Interconnect (PCI) compatible link.11.The method of claim 10 wherein the PCI compatible link comprises a PCI Express (PCIe), third generation or higher compatible link.12.An apparatus for adaptive reverse channel equalization, comprising:Feedforward equalizer module for:Determine the total equilibrium value during training,Determining the total equilibrium equilibrium value during the pre-selected training period,Determining a transmitter equalization coefficient of the transmitter based on the total equalization value and the total balanced equalization value, andThe CTLE peak setting of the transmitter is determined by determining a continuous time linear equalizer (CTLE) peak change of the receiver, wherein determining the CTLE peak change comprises:Increasing the CTLE peak if the first decision feedback equalizer (DFE) tap exceeds the first preselected percentage of maximum capability;Decreasing the CTLE peak if the second DFE tap has an opposite amplitude to the first DFE tap and exceeds a second preselected percentage of the first DFE first tap amplitude;A link controller for transmitting data over a serial link based on the transmitter equalization coefficient and the CTLE peak setting.13.The apparatus of claim 12 wherein determining the total equalization value during training comprises:Determining a total equalization detection value for the interval for each interval during the training period;The total equilibrium values of the plurality of intervals are summed to determine a total equilibrium value during the training period.14.The apparatus according to claim 13, wherein if the intermediate bit in the multi-bit mode exceeds the reference voltage value, the total equalization detection value of the interval is a positive value.15.The apparatus according to claim 13, wherein if the intermediate bit in the multi-bit mode is smaller than the reference voltage value, the total equalization detection value of the interval is a negative value.16.The apparatus according to claim 12, wherein said transmitter equalization coefficient determines a transmitter equalization coefficient de-emphasis coefficient change (Δcpl) and a transmitter equalization coefficient pre-shoot coefficient change by detecting a total equalization and balance equalization and also using the following formula: (Δcml) and the main tap change value (Δc0) are determined:Δ1=sign(teq_total)Δ2=sign(beq_total)Δcpl+Δcml=Δ1Δcpl-Δcml=Δ2→Δcpl=sign(Δ1+Δ2)Δcml=sign(Δ1-Δ2)Δc0=-(Δcpl+Δcml)Where teq_total includes the total equilibrium value, and beq_total includes the total equilibrium equilibrium value.17.The apparatus of claim 12 wherein determining a balance equalization value during the preselected training comprises:Determining a balanced equalization detection value of the interval for each interval during the preselected training period;A plurality of spaced equilibrium equalization values are summed to determine a balance equalization value during the preselected training period.18.The apparatus of claim 17, wherein if the first intermediate bit of the two intermediate bits exceeds the reference voltage value in the multi-bit mode and the second intermediate bit of the two intermediate bits is less than the reference voltage, The balanced balance equalization detection value is a positive value.19.The apparatus according to claim 17, wherein if the first intermediate bit of the two intermediate bits is 
smaller than the reference voltage value in the multi-bit mode, and the second intermediate bit of the two intermediate bits exceeds the reference voltage, The balanced balance equalization detection value is a negative value.20.The apparatus of claim 12 wherein said first preselected percentage is 50% and said second preselected percentage is 50%.21.The apparatus of claim 12 wherein said serial link comprises a Peripheral Component Interconnect (PCI) compatible link.22.The apparatus of claim 21 wherein the PCI compatible link comprises a PCI Express (PCIe), third generation or higher compatible link.23.The apparatus of claim 12 wherein said training period is a pre-selected training period. |
Adaptive reverse channel equalizationpriorityThe present application claims priority to U.S. Provisional Application No. 61/801,014, entitled "Adaptive Backchannel Equalization", filed on March 15, 2013, by the name of the s.Technical fieldEmbodiments of the invention relate to fast interconnections. More specifically, embodiments of the invention relate to fast serial links and associated transmitter controls.Background techniqueThe Fast Serial Input/Output (I/O) interface has recently been targeted at 8-10 Gbit. Providing reliable data communication at such speeds is often complex and challenging because inter-symbol interference (ISI), random and deterministic jitter, crosstalk, and power supply noise can severely degrade the signal, which results in a recovery signal on the receiving side. It's hard. For example, in the PCIe (third generation) specification, an interactive reverse channel equalization protocol is defined. This protocol allows link partners to exchange information and assign a time window to each receiver to adjust the transmitter settings of its link partner. However, the protocol does not specify a method of receiver adaptation, but the sender side of the link partner must respond to its request.Existing solutions using link balancing require each platform and plug-in card to be characterized and configured for reliable operation. This combines individual platform customization and presents a huge logical difficulty for electrical verification.Summary of the inventionAccording to a first aspect of the present invention, a method for adaptive reverse channel equalization is provided, comprising: determining a total equalization value during training; determining a total balance equalization value during a preselected training period; a total equalization value and the total balanced equalization value determining a transmitter equalization coefficient of the transmitter; determining a continuous time linear equalizer (CTLE) peak setting of the receiver, wherein the receiver is coupled to the transmitter; Data is transmitted over the serial link based on the transmitter equalization coefficient and the CTLE peak setting, wherein determining the CTLE peak setting includes determining a CTLE peak change by: if the first decision feedback equalizer (DFE) tap exceeds The first preselected percentage of maximum capability increases the CTLE peak; and if the second DFE tap has an opposite amplitude to the first DFE tap and exceeds a second preselected percentage of the first DFE first tap amplitude, the CTLE peak is decreased.According to a second aspect of the present invention, an apparatus for adaptive reverse channel equalization is provided, comprising: a feedforward equalizer module for determining a total equalization value during training, determining during a preselected training period a total balance equalization value, determining a transmitter equalization coefficient of the transmitter based on the total equalization value and the total balanced equalization value, and determining the peak change of a continuous time linear equalizer (CTLE) of the receiver Determining a CTLE peak setting of the receiver, wherein determining the CTLE peak change comprises: increasing a CTLE peak if the first decision feedback equalizer (DFE) tap exceeds a first preselected percentage of maximum capability; and if the second DFE tap has Decreasing a CTLE peak by a magnitude opposite the first DFE tap and exceeding a second preselected percentage of the first DFE first tap 
amplitude; and a link controller for basing the transmitter equalization coefficient and the CTLE The peak setting sends data over the serial link.DRAWINGSThe embodiments of the present invention are illustrated by way of example and not limitation, the same reference1 is a block diagram of one embodiment of a computer system having links that can use adaptive reverse channel equalization.2 is a timing sequence corresponding to one embodiment of an adaptive equalization process.Figure 3 depicts a data pattern for analysis in one embodiment of an adaptive equalization process.4 is a flow diagram of an example technique for calculating balanced equalization and overall equalization.Figure 5 is an example equalization map.Figure 6 depicts the convergence trajectory of transmitter equalization and receiver continuous time linear equalizer (CTLE) peak adjustment for an embodiment using adaptive reverse channel equalization.FIG. 7 depicts an embodiment of a computing system including a Peripheral Component Interconnect Express (PCIe) compatible architecture.Figure 8 depicts an embodiment of a PCIe compatible interconnect architecture including a layered stack.Figure 9 depicts an embodiment of a PCIe compatible request or packet to be generated or received within an interconnect fabric.Figure 10 depicts an embodiment of a transmitter and receiver pair for a PCIe compatible interconnect architecture.Figure 11 depicts an embodiment of a block diagram of a computing system.Figure 12 depicts another embodiment of a block diagram of a computing system.Figure 13 depicts yet another embodiment of a block diagram of a computing system.Detailed waysIn the following description, numerous specific details are set forth. However, embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques are not described in detail to avoid obscuring the understanding of the specification.One current solution for providing adaptive link equalization is to perform receiver eye margin testing based on transmitter optimization. It passes through each equalization setting of the transmitter and tests the receiver margin by a test design (DFT) function that utilizes time margin and/or voltage margin. After the eye margin test passes all of the link partner's transmitter equalization settings, the transmitter equalization corresponding to the highest margin obtained during the test is selected. The transmitter-optimized eye margin test is typically run by the BIOS at system startup, but can also be integrated into a PCIe controller or system agent. It can be called "software equalization" and it has many drawbacks. These disadvantages include one or more of the following.The transmitter-optimized eye margin significantly increases the boot time because it requires a full set of margin tests for the transmitter partner's transmitter equalization settings. Since this is performed during power up, it means a negative impact on the user experience. Eye-based optimization based on transmitter optimization does not produce the same result every time. The reliability of the margin test is directly related to the dwell time during which the receiver adds additional pressure in time and/or voltage while checking the bit error rate. 
In order to get reliable results, a full margin test may take a few minutes to complete.Transmitter-optimized eye margin requires the insertion of a card to notify third-generation PCIe compatibility at power up and provides an additional reset signal. These additional requirements can cause system interoperability issues. Based on the transmitter-optimized eye margin, a setting that introduces system stability risk can be selected. The transmitter-optimized eye margin focuses only on margins and does not know the internal receiver state. It can choose to place the receiver at a setting close to the boundary condition of the stability edge of the internal circuit.Eye-optimized eye margins can only be selected from different pre-sets, which limits link performance improvements. Pre-setting is a small part of the possible equalization settings, and the optimization settings are platform dependent, which may not correspond to the pre-settings. Many receiver analog settings are very dependent on processing, operating temperature, and voltage. Adding new receiver parameters to the eye margin test increases the optimization time exponentially and is not possible in real world systems. The lack of ability to self-adjust the receiver configuration is another limitation to link performance improvement.1 is a block diagram of one embodiment of a computer system having a link that can utilize adaptive reverse channel equalization. In one embodiment, the link is a PCIe compatible link. For example, PCIe 3 generation or higher is compatible with links. The computer system in this example includes a processor 100, a link partner 150, and a channel 170. In one embodiment, link partner 150 includes a PCIe 3 generation compatible transmitter 160. In one embodiment, the transmitter 160 implements a three-tap equalization FIR filter in a Transmitter Analog Front End (TxAFE) block 161. The three-tap FIR filter is controlled by the TX equalization coefficients {Cm1, C0 and Cp1}, respectively. Cm1 is a pre-cursor tap, C0 is a main cursor tap and Cp1 is a post cursor tap. The transmitter 160 converts the binary data stream into a differential analog signal, and equalizes the outputs TXP and TXN according to the coefficients {Cm1, C0 and Cp1}. The transmit signal is connected to channel 170.In one embodiment, processor 100 implements a third generation PCIe receiver 110. The weakened and degraded signals RXP and RXN from the channel output are coupled to the receiver 110. The continuous time linear equalizer (CTLE) block 111 amplifies and determines the input signal. The CTLE 111 has a variable gain amplifier that receives an automatic gain control coefficient (AGCCoef), and a frequency peak equalizer that receives a CTLEPeak coefficient (CTLEPeak). The CTLE 111 provides a stable differential signal to the decision feedback equalizer (DFE) block 112 at the outputs outp, outn. The DFE 112 is adjusted by a decision feedback equalizer coefficient (DFECoef).Once the differential signal is compensated by DFE 112, it is provided to sampler block 116 for sampling the digital data and the error. The sampler 116 operates to determine whether the DFE output signals vxp, vxn are above or below a reference, such as a reference voltage, at the rising or falling edge of the clock signal ck.In one embodiment, if the DFE voltage is greater than zero, it corresponds to digital data "1". If the DFE voltage is less than 0, it corresponds to the digital data "0". 
If the magnitude of the DFE voltage is above the reference voltage level (eg, vref), eg, 100 mV, 150 mV, or other selectable value, then it corresponds to a digital error of "1." If the amplitude of the DFE voltage is lower than the same reference voltage level, it corresponds to a digital error of "0". The logic levels for the data and error information can be inverted in different embodiments or encoded in different schemes such that the logic levels are inverted at different times.The data and error signals are provided to a Least Mean Square (LMS) error block 115 and a Continuous Time Offset Cancellation (CTOC) block 114, which provide decision feedback equalizer coefficients (DFECoef), automatic gain control coefficients (AGCCoef), and CTLE, respectively. Offset correction factor (CTOCCoef) to DFE 112 and CTLE 111. The DFECoef, data and error signals are provided to a clock and data recovery (CDR) block 117 which generates receiver data and clock outputs RxData and Rxclk and extracts sample phase information pi_dac1 to control the phase interpolator block 113. The phase interpolator mixes the input PLL clocks CLKPLL1 and CLKPLLQ according to pi_dac1 and produces a piclk output to the DFE block 112.In one embodiment, adaptive back channel equalization block 120 analyzes the data and error signals along with AGCCoef and DFECoef during initial link training. In the techniques described herein, reverse channel equalization may utilize adaptive tuning in hardware to select the transmitter equalization settings {Cm1, C0 and Cp1} that are optimal for the receiver. In one embodiment, the technique includes selecting a continuous time linear equalizer (CTLE) that adaptively peaks by hardware adaptation. One embodiment enables hardware adaptive implementation of third generation PCIe requirements that are capable of meeting reverse channel balance. In one embodiment, the technique uses a gradient search strategy for fast convergence. In general, it can be used by any receiver with a decision feedback balancer (DFE).The techniques described herein may provide one or more of the following advantages. In one embodiment, the transmitter equalization coefficients and the receiver CTLE peak settings are jointly optimized, which are the two most critical parameters in link performance. Improves electrical robustness. The higher the power margin, the better the link stability. In one embodiment, the mechanisms described herein require a relatively small footprint and accommodate the design of third generation PCIe.In one embodiment, the techniques described herein may operate on a per lane basis and may be optimized over the entire equalization space of each lane. In contrast, transmitter optimization based on eye margin is per bundle (two lanes) or per port (all lanes) and can only be selected in a preset for the bundle or port.2 is a timing sequence corresponding to one embodiment of an adaptive equalization process. In one embodiment, adaptive equalization may be provided by a digital finite state machine (FSM) residing in an analog front end (AFE). In one embodiment, the FSM controls the receiver circuitry, calculates optimized equalization settings, and communicates with an I/O (eg, PCIe) System Agent (SA) or controller during training. 
The example of Figure 2 relates to PCIe training; however, the techniques described herein can be applied to different interfaces and are not limited to PCIe.In one embodiment, the receiver circuit passes through a first acquisition sequence (ACQ) 215 after the PCIe controller begins a speed change (eg, to a third generation data rate) 210. The receiver attempts to lock the bits during ACQ 215 while aggregating clock and data recovery loop (CDR), automatic gain control (AGC), decision feedback equalization (DFE), and continuous offset cancellation (CTOC). When ACQ 215 is complete, the receiver reaches the optimal operating conditions for a given default link partner transmitter equalization and receiver CTLE peak setting.In one embodiment, adaptive equalization begins at the end of ACQ 215. In one embodiment, feed forward equalization (FFE) is used to perform an overall link evaluation by looking up the gradient of the overbalanced and balanced equalization. In one embodiment, the equalized gradient is used to drive transmitter equalization adaptation in the first half of iteration 220, and jointly optimizes transmitter equalization (TxEQ) and receiver CTLE (RxCTLE) in the second half of iteration 260.In one embodiment, the FFE is used to calculate new TxEQ and RxCTLE coefficients and pass these new values to the system agent. The system agent then transmits these new settings with the link partner and waits for the new value to be valid in the SA segment. In one embodiment, the adaptive equalization can then perform a pre-selected number of iterations to ensure that TxEQ and RxCTLE ultimately reach an optimal setting. In PCIe, for example, 24ms is the maximum training time and the process described herein can be completed in 1.5ms or less, which significantly improves system performance.Figure 3 illustrates a data pattern for analysis in one embodiment of an adaptive equalization process. In one embodiment, the data patterns "x101x" and "x010x" are analyzed in a DFE error sampler to search for an indication of overall equalization. In one embodiment, the middle of the 3 bits is the sample bits, which are compared and used for the adaptive equalization process. In one embodiment, the sample bits are compared to a reference voltage (+vref, -vref).In one embodiment, if the single single conversion bit in the middle is below the convergence reference voltage and the non-conversion bit is above the reference voltage, this is considered to be under-equalized, as shown by A in FIG. In one embodiment, the formula Δcp1 + Δcm1 = -1 is used to represent under-equalization. In one embodiment, if the intermediate single conversion bit is greater than the convergence reference voltage and the non-conversion bit is lower than the reference voltage, then this is considered to be over-equalized, as shown by B in FIG. . In one embodiment, the formula Δcp1 + Δcm1 = +1 is used to indicate over-equalization. In the adaptive equalization process, the variable "TEQ" is used such that TEQ = +1 if overbalanced and TEQ = -1 if underbalanced, and TEQ = 0 if balanced.In one embodiment, the data patterns "x1100x" and "x0011x" are analyzed by the DFE error sampler to search for an indication of balanced equalization. The voltage amplitudes of the two intermediate bits are compared to the convergence reference voltage. 
In one embodiment, if the first bit in the middle exceeds the convergence reference voltage and the second bit is below the reference voltage, it is considered to be "pre-shoot overweight", as shown by C in FIG. . In one embodiment, if the first bit in the middle is lower than the convergence reference voltage and the second bit is higher than the reference voltage, it is considered to be "de-emphasis overweight", as shown by D in FIG. Show. In the adaptive equalization process, the variable "BEQ" is used such that BEQ = +1 if the weight is overweighted and BEQ = -1 if the pre-shot is overweight, and BEQ = 0 if the equalization is balanced.In one embodiment, during the FFE training segment (eg, 64k UI time window), the input bitstream is sampled therein and filtered by data pattern analysis to collect TEQ and BEQ statistics. An example flow chart for total TEQ and BEQ calculations is shown in FIG. The example of Figure 4 is a training window for 64k UI; however, other training windows that are longer or shorter may also be supported.The FFE TxEQ training segment starts 410 with a UI count of zero. If the UI count is greater than the specified window (eg, 64K) 420, a new TxEQ value calculation 430 is performed and the FFE TxEQ training segment ends 440. If the UI count is not greater than the specified window 420, a total equalization (TEQ) detection 450 is performed. In one embodiment, TEQ detection is performed as described above. The detected TEQ value is added to the total TEQ (teq_total) 460.Balance Balance (BEQ) 470 is then performed. In one embodiment, BEQ detection is performed as described above. The detected BEQ value is added to the total BEQ (beq_total) 480. The UI count is incremented by 490 and the process is repeated.Once the total TEQ and total BEQ values are calculated, a new TxEQ value can be calculated. In the following description, Δcpl represents a change in the TxEQ de-emphasis coefficient, Δcml represents a change in the TxEQ pre-shoot coefficient, and +Δc0 represents a main tap change. In one embodiment, the following equation is utilized:In one embodiment, if boundary conditions including a full deployment (FS) level, a low frequency (LF) level, and a coefficient polarity are satisfied, then Δcm1, Δc0, and Δcp1 are added to the current TxEQ to calculate a new value. If the boundary conditions are not satisfied, Δcm1, Δc0, and Δcp1 are applied according to the boundary conditions.In one embodiment, in the second half of the adaptive iteration (as shown in Figure 2), the FFE can be used to jointly optimize TxEQ and RxCTLE. In one embodiment, the CTLE is an analog circuit having process, voltage, and temperature (PVT) dependent characteristics. It can also be subject to partial to partial changes.In one embodiment, examples of digital peak indices 0-15 are assigned to represent the extent of CTLE equalization. 0 represents a flat band response, or no equalization; 1 indicates a slight increase in equalization; and 15 indicates a maximum amount of equalization. The digital index is just an example, and other parameters can be applied according to the same principle.Typically, the higher peak setting of the CTLE results in spreading the equalization pulse to the subsequent UI, which can result in excessive equalization in the short and medium channels. 
A common symptom is a low (or negative) DFE value that attempts to correct for excessive EQ pulses from the CTLE.Margin and link stability issues can arise when the DFE is overworked to cancel the overbalance caused by CTLE. In some cases, it is sufficient that the CTLE peak is zero when the DFE can handle intersymbol interference (ISI) separately. CTLE can improve link performance in long channels when ISI is significant.In one embodiment, CTLE adaptability can be controlled by the AGC and DFE loops. In one embodiment, CTLE adaptation begins after the first phase of the TxEQ iteration with initial peak=0 (see Figure 2). Thereafter, Δpeak represents the CTLE peak change and is available below for CTLE adaptation.When the first tap of the DFE exceeds 50% of the operating capacity (this represents a significant ISI), Δpeak = +1;When the second tap of the DFE is in the opposite direction of the first tap and exceeds 50% of the amplitude of the first tap (which indicates that the CTLE is overbalanced), Δpeak = -1;Δpeak is applied subject to the boundary conditions of AGC saturation and CTLE peak range.The operational capability is a programmable value that represents a reasonable range for determining the feedback equalizer across a set of channel conditions.Figure 5 is an exemplary equalization map. Pre-set P4, P7, inverted P7 (rP7) and P8 represent exemplary initial TxEQ settings for the adaptive equalization process. The equalization map of Figure 5 is an indicator map of the receiver margin for all link partner TxEQ coefficient combinations. The horizontal axis is the TxEQ post-label value and the vertical axis is the pre-label value. For each pair of post-standard and pre-standard values, the receiver margin is measured based on the mean square error (MSE) of the DEF output relative to the reference voltage (eg, vref) at the sample instance.In the example of FIG. 5, the cells in group 520 are TxEQ settings with too high a final MSE value, and the cells in groups 530 and 560 are TxEQ settings with a moderately high final MSE value, group 540 The cells in are the TxEQ settings with acceptable but not the most final final MSE values, and the cells in group 550 are TxEQ settings with the desired final MSE value. In one embodiment, the adaptive equalization process converges from different starting TxEQ settings (including P4, P7, inverted P7 (rP7) and P8) to the same cell 551 within group 550.The example shown in Fig. 6 is a convergence trajectory from the T1EQ preset P4. For 3rd generation PCIe reverse channel equalization, for example, 60 iterations can be run, with the first 30 iterations adjusting the link partner TxEQ and the receiver CTLEPeak set to 0, and the next 30 iterations performing the TxEQ and RxCTLE joint optimization. Other iterations greater than or less than 60 can also be used. Curve 610 shows the trajectory of TxEQ C0 convergence, curve 630 shows the trajectory of TxEQ Cm1 convergence, curve 620 shows the trajectory of TxEQ Cp1 convergence, and line 640 shows the trajectory of CTLEPeak convergence. MSE Improvement 660 was also drawn for comparison purposes.In one embodiment, adaptive equalization can begin with different initial TxEQ settings (including P7, P4, P8, inverted P7). In one embodiment, the initial receiver CTLEPeak is fixed at zero. In one embodiment, the final converged TxEQ coefficients and receiver CTLEPeak are the same for different initial settings. 
Under typical conditions, 60 iterations require approximately 1.5 mS, which is much lower than the specified 24 mS training window required by the 3rd generation PCIe specification.As described above, the techniques described herein can be used in a PCI or PCIe architecture. One interconnect architecture includes a Peripheral Component Interconnect (PCI) Fast (PCIe) architecture. The primary goal of PCIe is to enable components and devices from different vendors to interoperate in an open architecture across multiple market segments; clients (desktop and mobile), servers (standard and enterprise), and embedded and communications device. PCI Express is a high performance, general purpose I/O interconnect defined for various future computing and communication platforms. Some PCI properties, such as its usage mode, load-store architecture, and software interface, have been maintained through its modifications, while previous parallel bus implementations have been replaced with highly scalable, fully serial interfaces. More recent versions of PCI quickly take advantage of point-to-point interconnects, switch-based technologies, and packetization protocols to achieve new levels of performance and features. Power management, quality of service (QoS), hot plug/plug support, data integrity, and error handling are some of the advanced features supported by PCI Express.Referring to Figure 7, an embodiment of a structure consisting of a point-to-point link interconnecting a set of components is shown. System 700 includes a processor 705 and a system memory 710 coupled to a controller hub 715. Processor 705 includes any processing element such as a microprocessor, host processor, embedded processor, coprocessor, or other processor. Processor 705 is coupled to controller hub 715 via a front side bus (FSB) 706. In one embodiment, FSB 706 is a serial point-to-point interconnect as described below. In another embodiment, link 706 includes a serial, differential interconnect architecture that conforms to different interconnect standards.System memory 710 includes any storage device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 700. System memory 710 is coupled to controller hub 715 via memory interface 716. Examples of memory interfaces include dual data rate (DDR) memory interfaces, dual channel DDR memory interfaces, and dynamic RAM (DRAM) memory interfaces.In one embodiment, controller hub 715 is a root hub, a root complex, or a root controller in a peripheral component interconnect fast (PCIe or PCIE) interconnect hierarchy. Examples of controller hub 715 include a chipset, a memory controller hub (MCH), a north bridge, an interconnect controller center (ICH), a south bridge, and a root controller/center. The term "chipset" generally refers to two physically separate controller centers, that is, a Memory Controller Hub (MCH) coupled to an Interconnect Controller Center (ICH). Note that current systems typically include an MCH integrated with processor 705, and controller 715 communicates with the I/O device in a similar manner as described below. In some embodiments, end-to-end routing is optionally supported by root complex 715.Here, controller hub 715 is coupled to switch/bridge 720 via serial link 719. Input/output modules 717 and 721 (which may also be referred to as interfaces/ports 717 and 721) include/implement a layered protocol stack to provide communication between controller hub 715 and switch 720. 
In one embodiment, multiple devices can be coupled to switch 720.The switch/bridge 720 routes packets/information from the device 725 upstream (i.e., one level up toward the root complex) to the control center 715, and downstream from the processor 705 or system memory 710 (i.e., from the root controller to the next level) ) to device 725. In one embodiment, switch 720 is referred to as a logical component of a plurality of virtual PCI to PCI bridge devices. Device 725 includes any internal or external device or component coupled to the electronic system, such as an I/O device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard drive, a storage device, a CD/DVD ROM, monitors, printers, mice, keyboards, routers, portable storage devices, Firewire devices, Universal Serial Bus (USB) devices, scanners, and other input/output devices. Usually in PCIe terminology, such as a device is referred to as an endpoint. Although not specifically shown, device 725 can include a PCIe to PCI/PCI-X bridge to support legacy or other versions of PCI devices. Endpoint devices in PCIe are often classified as traditional PCIe or root complex integration endpoints.Graphics accelerator 730 is also coupled to controller hub 715 via serial link 732. In one embodiment, graphics accelerator 730 is coupled to the MCH, which is coupled to the ICH. Switch 720 and corresponding I/O device 725 are then coupled to the ICH. I/O modules 731 and 718 also implement a layered protocol stack for communication between graphics accelerator 730 and controller hub 715. Similar to the MCH described above, the graphics controller or graphics accelerator 730 itself may be integrated into the processor 705.Turning to Figure 8, an embodiment of a layered protocol stack is shown. The layered protocol stack 800 includes any form of layered communication stack, such as a fast path interconnect (QPI) stack, a PCIe stack, a next generation high performance computing interconnect stack, or other layered stack. Although the discussion below is related to the PCIe stack, the same concepts can be applied to other interconnect stacks. In one embodiment, protocol stack 800 is a PCIe protocol stack including transaction layer 805, link layer 810, and physical layer 820. A representation as a communication protocol stack may also be referred to as a module or interface that implements/includes a protocol stack.PCI fast use packets to transfer information between components. Packets are formed at the transaction layer 805 and the data link layer 810 for transmitting information from the transmitting component to the receiving component. As the transmitted packets flow through other layers, they are extended with additional information necessary to process the packets at those layers. On the receiving side, the reverse process occurs, and the packets transition from their physical layer 820 representation to the data link layer 810 representation, and finally (to the transaction layer packet) to a form that can be processed by the transaction layer 805 of the receiving device.In one embodiment, transaction layer 805 is used to provide an interface between the processing core of the device and the interconnect fabric, such as data link layer 810 and physical layer 820. In this regard, the primary responsibility of the transaction layer 805 is to assemble and disassemble packets (ie, transaction layer packets, or TLPs). Transaction layer 805 generally manages credit-based flow control of the TLP. 
PCIe implements split transactions, that is, transactions with time-separated requests and responses, allowing the link to carry other traffic while the target device collects data for the response.In addition, PCIe uses credit-based flow control. In this scenario, the device announces the initial amount of credit to each receive buffer in transaction layer 805. The credits consumed by each TLP are counted by an external device at the opposite end of the link (e.g., controller hub 115 in Figure 8). If the transaction does not exceed the credit limit, the transaction can be transferred. When a response is received, the credit is restored. The advantage of a credit scheme is that the delay in credit return does not affect performance if no credit restrictions are encountered.In one embodiment, the four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. A memory space transaction includes one or more read requests and write requests to transfer data to/from a memory mapped location. In one embodiment, a memory space transaction can use two different address formats, such as a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. The configuration space transaction is used to access the configuration space of the PCIe device. Transactions to the configuration space include read requests and write requests. A message space transaction (or simply a message) is defined to support in-band communication between PCIe agents.Thus, in one embodiment, transaction layer 805 assembles packet header/payload 806. The format used for the current packet header/payload can be found in the PCIe specification on the PCIe specification website.Referring briefly to Figure 9, an embodiment of a PCIe transaction descriptor is shown. In one embodiment, transaction descriptor 900 is a mechanism for carrying transaction information. In this regard, transaction descriptor 900 supports the identification of transactions in the system. Other potential uses include tracking changes to the default transaction ordering and associating transactions with channels.Transaction descriptor 900 includes a global identifier field 902, an attribute field 904, and a channel identifier field 906. In the illustrated example, global identifier field 902 is depicted as including a local transaction identifier field 908 and a source identifier field 910. In one embodiment, the global transaction identifier 902 is unique to all outstanding requests.According to one implementation, the local transaction identifier field 908 is a field generated by the requesting agent and is unique to all outstanding requests that need to be completed for the requesting agent. Moreover, in this example, the source identifier 910 uniquely identifies the requesting agent in the PCIe hierarchy. Accordingly, along with the source ID 910, the local transaction identifier field 908 provides a global identification of the transaction within the hierarchical domain.Attribute field 904 specifies the characteristics and relationships of the transaction. In this regard, the attribute field 904 may be used to provide additional information that allows modification of the default processing of the transaction. In one embodiment, the attribute field 904 includes a priority field 912, a reserved field 914, a sort field 916, and a non-listening field 918. 
Here, the priority subfield 912 can be modified by the initiator to assign a priority to the transaction. The reserved attribute field 914 is reserved for future or vendor defined use. A possible usage model that utilizes priority or security attributes can be implemented using the reserved attribute field.In this example, the Sort Attribute field 916 is used to provide optional information that conveys the sort type that may modify the default collation. According to an example implementation, the sort attribute "0" indicates that the default collation is applied, wherein the sort attribute "1" represents a loose sort, where the write can be written in the same direction, and the read completion can be written in the same direction. The Listening Attributes field 918 is used to determine if a transaction is being listened to. As shown, channel ID field 906 identifies the channel associated with the transaction.Figure 10 illustrates an embodiment of a transmitter and receiver pair for a PCIe compatible interconnect architecture. Link layer 1010, also referred to as data link layer 1010, serves as an intermediate stage between transaction layer 1005 and physical layer 1020. In one embodiment, the responsibility of the data link layer 1010 is to provide a reliable mechanism for exchanging transaction layer packets (TLPs) between the two components of the link. One side of the data link layer 1010 accepts the TLP assembled by the transaction layer 1005, applies the packet sequence identifier 1011 (ie, the identification number or the packet number), calculates and applies the error detection code (ie, CRC 1012), and submits the modified TLP. To the physical layer 1020, for transmission to the external device through the physical layer.In one embodiment, physical layer 1020 includes logical sub-block 1021 and electrical sub-block 1022 to physically transfer packets to an external device. Here, logical sub-block 1021 is responsible for the "digital" function of physical layer 1021. In this regard, the logical sub-block includes a transmitting portion for preparing output information transmitted by the physical sub-block 1022, and a receiver portion for identifying and preparing for transmitting the received information to the link layer 1010. The received information.Physical block 1022 includes a transmitter and a receiver. A symbol is supplied to the transmitter by the logical sub-block 1021, and the transmitter serializes the symbol and transmits it to an external device. The receiver is supplied with a serialized symbol from an external device and converts the received signal into a bit stream. The bitstream is deserialized and provided to logical sub-block 1021. In one embodiment, an 8b/10b transmission code is employed in which a 10-bit symbol is transmitted/received. Here, the special symbols are used to construct a packet having a frame 1023. Additionally, in one example, the receiver also provides a symbol clock that is recovered from the incoming serial stream.As discussed above, while transaction layer 1005, link layer 1010, and physical layer 1020 are discussed with respect to particular embodiments of the PCIe protocol stack, the layered protocol stack is not limited in this regard. In fact, any layered protocol can be included/implemented. 
By way of example, a port/interface represented as a layered protocol includes: (1) a first layer for assembling a packet, ie a transaction layer; a second layer for sorting packets, ie a link layer; and for transmitting a packet The third layer, the physical layer. As a specific example, a Common Standard Interface (CSI) layered protocol can be utilized.Referring next to Figure 10, an embodiment of a PCIe serial point-to-point architecture is illustrated. Although an embodiment of a PCIe serial point-to-point link is shown, the serial point-to-point link is not limited to this as it includes any transmission path for transmitting serial data. In the illustrated embodiment, the basic PCIe link includes two low voltage, differential drive signal pairs: a transmit pair 1006/1011 and a receive pair 1012/1007. Accordingly, device 1005 includes transmitting logic 1006 that transmits data to device 1010 and receiving logic 1007 that receives data from device 1010. In other words, two transmit paths (ie, paths 1016 and 1017), and two receive paths (ie, paths 1018 and 1019) are included in the PCIe link.A transmission path refers to any path used to transmit data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. The connection between two devices, such as device 1005 and device 1010, is referred to as a link, such as link 1015. A link can support one channel, and each channel represents a set of differential signal pairs (one pair for transmission and one pair for reception). To extend the bandwidth, the link can aggregate multiple lanes represented by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64 or more.A differential pair refers to two transmit paths, such as lines 1016 and 1017, that are used to transmit differential signals. As an example, when line 1016 switches from a low voltage level to a high voltage level, i.e., a rising edge, line 1017 is driven from a high logic level to a low logic level, i.e., a falling edge. Differential signals may exhibit better electrical characteristics, such as better signal integrity, ie cross-coupling, voltage overshoot/undershoot, ringing, etc. This allows for a better time window, which supports faster transmission frequencies.It should be noted that the apparatus, methods, and systems described above can be implemented in any of the electronic devices or systems described above. As a specific example, the following figures provide an exemplary system that utilizes the present invention as described herein. As the following system will be described in more detail, a number of different interconnections are disclosed, described, and reviewed from the discussion above. Also, it will be apparent that the above described advances can be applied to any interconnection, structure or architecture.Referring now to Figure 11, a block diagram of the components present in a computer system in accordance with an embodiment of the present invention is shown. As shown in Figure 11, system 1100 includes any combination of components. These components can be implemented as an IC, a portion thereof, discrete electronics, or other modules, logic, hardware, software, firmware, or a combination thereof that is suitable for integration in a computer system, or otherwise incorporated into a computer system. Parts. 
It is also noted that the block diagram of Figure 11 is intended to show a high level view of many of the components of a computer system. However, it should be understood that some of the illustrated components may be omitted, additional components may be present, and in other implementations, the components may be shown in different arrangements. As a result, the invention described above can be implemented in any portion of one or more interconnections described or illustrated below.As shown in FIG. 11, in one embodiment, processor 1110 includes a microprocessor, a multi-core processor, a multi-threaded processor, an ultra low voltage processor, an embedded processor, or other known processing elements. In the illustrated implementation, processor 1110 functions as a primary processing unit and a central center for communicating with various components of system 1100. As an example, processor 1100 is implemented as a system on a chip (SoC). As a specific illustrative example, processor 1110 includes anArchitecture CoreTM-based processor, such as i3, i5, i7, or other such processor available from Intel Corporation of Santa Clara, California. However, it should be understood that the MIPS-based design, ARM Holdings, Ltd. licensed ARM-based design, or its customers, from Advanced MicroDevices, Inc. (AMD) in Sunnyvale, California, MIPS Technologies, Inc., Sunnyvale, Calif. Other low power processors available at or with its licensee or adopter may alternatively appear in other embodiments, such as an Apple A5/A6 processor, a Qualcomm Snapdragon processor, or a TI OMAP processor. Note that many client versions of such processors are modified and changed; however, they can support or identify a particular set of instructions that perform the defined algorithms set forth by the microprocessor. Here, the microarchitecture implementation may change, but the architectural functionality of the processor is usually constant. Certain details regarding the architecture and operation of processor 1110 are discussed further below in one implementation to provide an illustrative example.In one embodiment, processor 1110 is in communication with system memory 1115. As an illustrative example, it may be implemented in one embodiment via a plurality of memory devices to provide a given amount of system memory. As an example, the memory may be based on the Low Power Double Data Rate (LPDDR) design of the Joint Electron Devices Engineering Council (JEDEC), such as the current LPDDR2 standard according to JEDEC JESD 209-2E (published April 2009), or For the next generation LPDDR standard for LPDDR3 or LPDDR4, it will provide an extension to LPDDR2 to increase bandwidth. In various implementations, the personal memory device can be of a different package type, such as a single die package (SDP), a dual die package (DDP), or a four die package (Q17P). In some embodiments, these devices are soldered directly to the motherboard to provide a low profile solution, while in other embodiments, the devices are configured as one or more memory modules, which in turn are provided by a given connector. Coupled to the motherboard. Of course, other memory implementations are possible, such as other types of memory modules, such as different variations of dual in-line memory modules (DIMMs), including but not limited to microDIMMs, MiniDIMMs. 
In a specific illustrative embodiment, the memory is between 2GB and 16GB in size and can be configured to be soldered to a DDR3LM package or LPDDR2 or LPDDR3 memory on the motherboard via a ball grid array (BGA).In order to provide persistent storage of information such as data, applications, one or more operating systems, mass storage 1120 may also be coupled to processor 1110. In various embodiments, such bulk storage can be implemented via SSDs in order to achieve a thinner and lighter system design and improved system responsiveness. However, in other embodiments, the mass storage may be implemented primarily using a hard disk drive (HDD) with a smaller amount of SSD storage, used as an SSD fast cache to support contextual state and other such during a power down event. Non-volatile storage of information so that a fast power up can occur on the re-activation of system activity. As shown in FIG. 11, a flash memory device 1122 can be coupled to the processor 1110, for example, via a serial peripheral interface (SPI). The flash device can provide non-volatile storage of system software, including basic input/output software (BIOS) and other firmware of the system.In various embodiments, the mass storage of the system is implemented solely by SSD, or as a disk, optical disk, or other drive with an SSD fast cache. In some embodiments, the mass storage is implemented as an SSD or an HDD with a recovery (RST) fast cache module. In various implementations, HDDs provide storage between 320GB-4 terabytes (TB) and above, while RST Fast Cache implements SSDs with 24GB-256GB capacity. Note that this SSD fast cache can be configured as a single-level fast cache (SLC) or multi-level fast cache (MLC) option to provide an appropriate level of responsiveness. In an SSD-only option, the module can be housed in a different location, such as in an mSATA or NGFF slot. As an example, SSDs range in capacity from 120GB to 1TB.Various input/output (IO) devices may be present in system 1100. Specifically shown in the embodiment of Figure 11 is a display 1124, which may be a high definition LCD or LED panel disposed within the cover of the chassis. The display panel can also provide a touch screen 1125 that, for example, adapts to the exterior through the display panel such that user interaction with the user can be provided to the system via the user interaction with the touch screen, such as information display, information access, etc. . In one embodiment, display 1124 is coupled to processor 1110 via a display interconnect that can be implemented as a high performance graphics interconnect. Touch screen 1125 can be coupled to processor 1110 via another interconnect, which in the embodiment can be an I2C interconnect. As further shown in FIG. 11, in addition to touch screen 1125, user input through touch can also occur via touch pad 1130, which can be configured within a rack and can also be coupled to the same I2C interconnect as touch screen 1125.The display panel can operate in multiple modes. In the first mode, the display panel can be arranged in a transparent state, wherein the display panel is transparent to visible light. In various embodiments, the majority of the display panel is a display screen in addition to the bezel surrounding the perimeter. When the system is operating in notebook mode and the display panel is operating in a transparent state, the user can see the information presented on the display panel while also being able to see the objects behind the display. 
In addition, the information displayed on the display panel can be seen by a user located behind the display. Or the operating state of the display panel may be an opaque state in which visible light cannot be transmitted through the display panel.In the tablet mode, the system is folded closed such that when the bottom surface of the substrate is on the surface or held by the user, the back display surface of the display panel is in an outwardly facing position to the user. In the operating tablet mode, the back display surface functions as a display and user interface because the surface can have touch screen functionality and can perform other known functions of conventional touch screen devices, such as tablet devices. For this purpose, the display panel may include a transparency adjustment layer disposed between the touch screen layer and the front display surface. In some embodiments, the transparency adjustment layer can be an electrochromic layer (EC), an LCD layer, or a combination of EC and LCD layers.In various embodiments, the display can have different sizes, for example, a 11.6" or 13.3" screen, and can have a 16:9 aspect ratio, and at least 300 nit brightness. The display can also have full high definition (HD) resolution (at least 1920 x 1080p), is compatible with the embedded display port (eDP), and is a low power panel with panel self-refresh.Regarding touch screen capabilities, the system can provide a display multi-touch panel that is capacitive multi-touch and can use at least 5 fingers. And in some embodiments, the display can be 10 fingers. In one embodiment, for lower friction to reduce "finger burns" and avoid "finger skipping", the touch screen is placed in a tamper resistant and scratch resistant glass and coating (eg, Gorilla GlassTM or Gorilla Glass 2TM). To provide an enhanced touch experience and responsiveness, in some embodiments, the touch panel has multi-touch functionality, such as less than 2 frames (30 Hz) per static view during two-finger zoom, and 200 ms (finger behind the pointer) per finger Single point touch function with frame (30Hz) less than 1cm. In some implementations, the display supports edge-to-edge glass with a minimal screen bezel that is also flush with the panel surface, which limits IO interference when using multi-touch.For sensory computing and other purposes, various sensors may be present in the system and may be coupled to the processor 1110 in different ways. Certain inertial and environmental sensors may be coupled to the processor 1110 through the sensor hub 1140 (eg, via an I2C interconnect). In the embodiment shown in FIG. 11, these sensors may include an accelerometer 1141, an ambient light sensor (ALS) 1142, a compass 1143, and a gyroscope 1144. Other environmental sensors may include one or more thermal sensors 1146 that are coupled to the processor 1110 via a system management bus (SMBus) in some embodiments.As noted above, in other embodiments, the system can be configured as a convertible tablet system that can be used in at least two different modes: tablet mode and notebook mode. The convertible system can have two panels, a display panel and a substrate, such that in the tablet mode, the two panels are arranged in one stack, one on top of the other. In tablet mode, the display panel faces outward and can provide touch screen functionality as found in conventional tablets. 
In notebook mode, the two panels can be arranged in an open clamshell configuration.In various embodiments, the accelerometer can be a 3-axis accelerometer having a data rate of at least 50 Hz. A gyroscope can also be included, and the gyroscope can be a 3-axis gyroscope. In addition, an electronic compass/magnetometer may be present. Moreover, one or more proximity sensors can be provided (eg, opening the cover to sense when someone is approaching (or not approaching) the system and adjusting power/performance to extend battery life). Sensor fusion capabilities for some OSs, including accelerometers, gyroscopes, and compasses, can provide enhanced features. Furthermore, via a sensor center with a real time clock (RTC), wakeup from the sensor mechanism can be implemented to receive sensor inputs while the rest of the system is in a low power state.As also shown in FIG. 11, various peripheral devices can be coupled to the processor 1110 via a low pin count (LPC) interconnect. In the illustrated embodiment, the various components can be coupled by embedded controller 1135. These components may include a keyboard 1136 (eg, coupled via a PS2 interface), a fan 1137, and a thermal sensor 1139. In some embodiments, touch pad 1130 can also be coupled to EC 1135 via a PS2 interface. Additionally, a secure processor, such as Trusted Platform Module (TPM) 1138, conforms to the Trusted Computing Group (TCG) TPM specification version 1.2 of October 2, 2003, and may also be coupled to processor 1110 via the LPC interconnect. However, it should be understood that the scope of the present invention is not limited thereto, and that secure processing and storage of security information may be in another protected location, such as static random access memory (SRAM) within a secure coprocessor, or as Encrypted data block that is decrypted when protected by the Secure Area (SE) processor mode.In particular implementations, the peripheral ports may include high definition media interface (HDMI) connectors (which may have different form factors, such as full size, small or small); one or more USB ports, such as conforming to universal serial bus revisions The full-size external port of the 3.0 specification (November 2008) has a USB device (such as a smartphone) with at least one power charge when the system is in a connected standby state and plugged into an AC wall power source. In addition, one or more ThunderboltTM ports can be set. Other ports may include external access card readers, such as full size SD-XC card readers and/or SIM card readers (eg, 8-pin card readers) for WWAN. For audio, there may be a 3.5mm jack with stereo and microphone functionality (eg, a combination function) while supporting jack detection (eg, the headset only supports headphones that use a microphone or a microphone in the cable in the cover). In some embodiments, this jack can redistribute tasks between stereo headphones and stereo microphone inputs. Additionally, a power outlet can be provided for coupling to the AC brick.System 1100 can communicate with external devices in a variety of ways, including wirelessly. In the embodiment shown in Figure 11, there are various wireless modules, each of which may correspond to a radio configured for a particular wireless communication protocol. One way of short range (e.g., near field) wireless communication may be via a near field communication (NFC) unit 1145, which in one embodiment may communicate with the processor 1110 via SMBus. 
Note that with such an NFC unit 1145, various devices that are close to each other can communicate. For example, a user can adapt the system 1100 to another, for example, a portable device (eg, a user's smartphone) by adapting the two devices to each other and capable of transmitting information such as identification information payment information, such as image data. Communication. Wireless power transfer can also be done using an NFC system.Using the NFC unit described herein, the user can collide the device edge to edge and by placing the coupling between the coils of one or more of such devices, placing the device side to side for near field coupling functions (eg Near Field Communication and Wireless Power Transfer (WPT)). More specifically, embodiments provide a device that strategically sets and places ferrite materials to provide better coil coupling. Each coil has an inductance associated with it that can be selected in conjunction with resistivity, capacitance, and other characteristics of the system to achieve a common resonant frequency of the system.As further seen in FIG. 11, the additional wireless unit can include other short range wireless engines, including WLAN unit 1150 and Bluetooth unit 1152. With the WLAN unit 1150, Wi-FiTM communication according to the 802.11 standard given by the Institute of Electrical and Electronics Engineers (IEEE) can be realized, and via the Bluetooth unit 1152, short-range communication via the Bluetooth protocol can occur. These units can communicate with the processor 1110, for example, via a USB link or a Universal Asynchronous Receive Transmitter (UART) link. Or these units may be coupled to the processor 1110 via an interconnect in accordance with the Peripheral Component Interconnect ExpressTM (PCIeTM) protocol, for example, according to the PCI ExpressTM Specification Basic Specification Version 3.0 (published on January 17, 2007), or another Such protocols, such as the Serial Data Input/Output (SDIO) standard. Of course, the actual physical connection between these peripherals (which are configured on one or more add-in cards) can be achieved by adapting to the NGFF connector of the motherboard.Moreover, wireless wide area communication, such as in accordance with cellular or other wireless wide area protocols, can occur via WWAN unit 1156, which in turn can be coupled to a Subscriber Identity Module (SIM) 1157. Further, in order to be able to receive and use location information, a GPS module 1155 may also be present. Note that in the embodiment shown in FIG. 11, the WWAN unit 1156 and the integrated capture device (eg, camera module 1154) can communicate via a given USB protocol (eg, a USB 2.0 or 3.0 link) or a UART or I2C protocol. Again, the actual physical connection of these units can be adapted via the NGFF add-in card to the NGFF connector configured on the motherboard.In one embodiment, the wireless functionality can be modularly set, for example, by a WiFiTM 802.11ac solution that supports Windows 8CS (eg, an IEEE 802.11 abgn-enabled plug-in card). Such a card can be configured in an internal slot (eg, via an NGFF adapter). The add-on module provides Bluetooth capability (for example, Bluetooth 4.0 with backward compatibility) andwireless display. Additionally, NFC support can be provided via a separate device or multi-function device and can be located, for example, in the front right portion of the rack for easy access. Another add-on module may be a WWAN device that can provide support for 3G/4G/LTE and GPS. 
Such a module can be implemented in an internal (eg, NGFF) slot. Integrated antenna support for WiFiTM, Bluetooth, WWAN, NFC and GPS to enable seamless transition from WiFiTM to WWAN radio and wireless Gigabit (WiGig) in accordance with the Wireless Gigabit Specification (July 2010) Of course.As mentioned above, an integrated camera can be incorporated into the cover. As an example, this camera can be a high resolution camera, for example, having a resolution of at least 2.0 megapixels (MP) and extending to 6.0 MP and higher.To provide audio input and output, the audio processor can be implemented via a digital signal processor (DSP) 1160, which can be coupled to the processor 1110 via a high definition audio (HDA) link. Similarly, DSP 1160 can be in communication with an integrated encoder/decoder (CODEC) and amplifier 1162, which in turn can be coupled to an output speaker 1163 that can be implemented in a rack. Similarly, the amplifier and CODEC 1162 can be coupled to receive audio input from the microphone 1165, which in embodiments can be implemented via a dual array microphone (eg, a digital microphone array) to provide high quality audio input to enable various Voice activated control of the operation. It should also be noted that the audio output can be provided from the amplifier/CODEC 1162 to the headphone jack 1164. Although shown by these specific components in the embodiment of Fig. 11, it should be understood that the scope of the invention is not limited in this respect.In a particular embodiment, the digital audio codec and amplifier are capable of driving a stereo headphone jack, a stereo microphone jack, an internal microphone array, and a stereo speaker. In various embodiments, the codec can be integrated into the audio DSP or coupled to a peripheral controller hub (PCH) via an HD audio path. In some implementations, in addition to integrated stereo speakers, one or more subwoofers can be set up, and the speaker scheme can support DTS audio.In some embodiments, the processor 1110 can be powered by an external voltage regulator (VR) and a plurality of internal voltage regulators (referred to as fully integrated voltage regulators (FIVRs)) integrated within the processor die. Using multiple FIVRs in the processor allows components to be grouped into separate power planes such that power is only regulated and powered by the FIVR to those components in the group. During power management, when a processor is placed in a particular low power state, a given power plane of one FIVR can be powered down or turned off while another power plane of another FIVR remains active or fully powered.In one embodiment, during some deep sleep states, a power supply layer can be used to power I/O pins for multiple I/O signals, such as an interface between the processor and the PCH, an interface with an external VR, and Interface with EC 1135. This sustain power plane also powers up the on-chip voltage regulator, which supports on-board SRAM or other fast buffer memory that stores the processor context in a sleep state. The power supply layer is also used to power up the wake-up logic of the processor, which monitors and processes various wake-up source signals.During power management, although the other power planes are powered down or turned off while the processor enters certain deep sleep states, the power plane is maintained energized to support the components mentioned above. However, when these components are not needed, this may result in unnecessary power consumption or loss. 
To this end, embodiments may provide a connection standby sleep state to maintain the context of the processor using a dedicated power plane. In one embodiment, utilizing the resources of the PCH, connecting the alternate sleep state facilitates processing wake-up, which may itself be present in a package with a processor. In one embodiment, connecting the standby sleep state facilitates maintaining processor architecture functionality in the PCH until the processor wakes up, which enables all unnecessary processor components that previously reserved power during the deep sleep state to be turned off, including turning off all clocks . In one embodiment, the PCH includes a timestamp counter (TSC) and connection standby logic for controlling the system during the connection standby state. An integrated voltage regulator for maintaining the power plane can also reside in the PCH.In an embodiment, during the connected standby state, the integrated voltage regulator can act as a dedicated power plane that remains powered to support a dedicated flash buffer that is stored in the memory when the processor enters a deep sleep state and a connected standby state. The context of the processor, such as a critical state variable. The critical state can include state variables associated with the architecture, microarchitecture, debug state, and/or similar state variables associated with the processor.During the Connected Standby state, the wake-up source signal from EC 1135 can be sent to the PCH instead of the processor so that the PCH, rather than the processor, can manage the wake-up process. In addition, the TSC remains in the PCH to facilitate maintaining processor architecture functionality. Although shown by these specific elements in the embodiment of Fig. 11, it should be understood that the scope of the invention is not limited in this respect.Power control in the processor can result in increased power savings. For example, power can be dynamically distributed between cores, each core can change frequency/voltage, and multiple deep low power states can be provided to support very low power consumption. In addition, dynamic control of the core or individual core portions can provide reduced power consumption by powering down the components when components are not in use.Some implementations may provide a specific power management IC (PMIC) to control platform power. With such a solution, the system can see very low (e.g., less than 5%) battery degradation for an extended duration (e.g., 16 hours) in a given standby state, such as when Win8 is connected to the standby state. In the Win8 idle state, battery life can be achieved for more than 9 hours (for example, at 150 nit). As for video playback, long battery life can be achieved, for example, full HD video playback can last for at least 6 hours. In one implementation, the platform may have an energy capacity of, for example, 35 watt hours (Whr) using SSD for Win8CS, and 40-44 Whr for Win8CS using, for example, an HDD with a RST fast cache configuration.A specific implementation can provide support for 15W nominal CPU Thermal Design Power (TDP) with configurable CPU TDP up to approximately 25W TDP design points. Due to the above thermal characteristics, the platform can include a minimum of vents. In addition, the platform is pad-friendly (ie no hot air is blowing to the user). Depending on the material of the frame, different maximum temperature points can be achieved. 
In the implementation of a plastic frame (at least the cover or base part is plastic), the maximum operating temperature can be 52 degrees Celsius (°C). For metal rack implementations, the maximum operating temperature can be 46 °C.In different implementations, a security module such as a TPM can be integrated into the processor or can be a discrete device such as a TPM 2.0 device. With an integrated security module, also known as Platform Trust Technology (PTT), the BIOS/Firmware can present certain hardware features for certain security features, including security instructions, secure boot,anti-theft technology,identity protection technology,Letter Execution Technology (TXT) andmanagement engine technology, as well as secure user interfaces such as secure keyboards and displays.Turning to FIG. 12, a block diagram of an exemplary computer system formed with a processor including an execution unit that executes instructions, wherein one or more interconnects implement one or more embodiments in accordance with an embodiment of the present invention, is shown feature. In accordance with the present invention, as in the embodiments described herein, system 1200 includes components, such as processor 1202, to employ an execution unit that includes logic to execute algorithms for processing data. System 1200 represents a processing system based on PENTIUM IIITM, PENTIUM 4TM, XeonTM, Itanium, XScaleTM, and/or StrongARMTM microprocessors available from Intel Corporation of Santa Clara, California, but other systems (including PCs with other microprocessors) , engineering workstations, set-top boxes, etc.) can also be used. In one embodiment, the exemplary system 1200 executes a version of the WINDOWSTM operating system available from Microsoft Corporation of Redmond, Washington, but other operating systems (eg, UNIX and Linux), embedded software, and/or graphical users. The interface can also be used. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.Embodiments are not limited to computer systems. Alternative embodiments of the present invention are applicable to other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular telephones, Internet Protocol devices, digital video cameras, personal digital assistants (PDAs), and handheld PCs. The embedded application can include a microcontroller, a digital signal processor (DSP), a system on a chip, a network computer (NetPC), a set top box, a network center, a wide area network (WAN) switch, or can execute one or more instructions in accordance with at least one embodiment. Any other system.In the illustrated embodiment, processor 1202 includes one or more execution units 1208 to implement an algorithm that executes at least one instruction. An embodiment may be described in the context of a single processor desktop or server system, although alternative embodiments may be included in a multi-processor system. System 1200 is an example of a "central" system architecture. Computer system 1200 includes a processor 1202 to process data signals. As an illustrative example, processor 1202 includes a Complex Instruction Set Computer (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, a processor that implements a combination of instruction sets Or any other processor device, such as a digital signal processor. 
Processor 1202 is coupled to a processor bus 1210 that transmits data between processor 1202 and other components of system 1200. Elements of system 1200 (eg, graphics accelerator 1212, memory controller hub 1216, memory 1220, I/O controller hub 1224, wireless transceiver 1226, flash BIOS 1228, network controller 1234, audio controller 1236, serial port expansion port 1238) The I/O controller 1240, etc.) performs those conventional functions well known to those skilled in the art.In one embodiment, processor 1202 includes a level one (L1) internal fast cache memory 1204. Depending on the architecture, processor 1202 may have a single internal fast cache or multiple levels of internal fast cache. Other embodiments include a combination of internal and external fast caches, depending on the particular implementation and needs. Register file 1206 is used to store different types of data in different registers, including integer registers, floating point registers, vector registers, group registers, shadow registers, checkpoint registers, status registers, and instruction pointer registers.Execution unit 1208 includes logic for performing integer and floating point operations, also resident in processor 1202. In one embodiment, processor 1202 includes a microcode (ucode) ROM to store microcode that, when executed, is used to execute algorithms for certain macroinstructions or to process complex scenes. Here, the microcode is potentially updatable for processing logic errors/determinations of the processor 1202. For one embodiment, execution unit 1208 includes logic for processing packed instruction set 1209. By including packed instructions 1209 in the instruction set of general purpose processor 1202, and associated circuitry for executing the instructions, the operations used by many multimedia applications can be performed using the packed data in general purpose processor 1202. Thus, many multimedia applications are more efficiently accelerated and executed by using the full width of the data bus of the processor for performing operations on the packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus to perform one or more operations, one data element at a time.Alternative embodiments of execution unit 1208 can also be used with microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 1200 includes a memory 1220. Memory 1220 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or other memory device. Memory 1220 stores instructions and/or data represented by data signals to be executed by processor 1202.It should be noted that any of the above features or aspects of the present invention may be utilized in one or more of the interconnections illustrated in FIG. For example, an on-chip interconnect (ODI), not shown, is used to couple the internal units of processor 1202 that implement one or more aspects of the present invention described above. Or the present invention is associated with a processor bus 1210 (eg, Intel's Fast Path Interconnect (QPI) or other known high performance computing interconnect), a high bandwidth memory path 1218 to memory 1220, to a graphics accelerator Point-to-point links of 1212 (eg, Peripheral Component Interconnect Express (PCIe) compatible architecture), controller center interconnect 1222, I/O, or other interconnects for coupling to other components shown (eg, USB, PCI, PCIe). 
Examples of such components include audio controller 1236, firmware center (flash BIOS) 1228, wireless transceiver 1226, data storage device 1224, conventional I/O controller 1210 including user input and keyboard interface 1242, serial expansion port 1238 (eg Universal Serial Bus (USB)), and network controller 1234. Data storage device 1224 may include a hard drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.Referring now to Figure 13, a block diagram of a second system 1300 in accordance with an embodiment of the present invention is shown. As shown in FIG. 13, multiprocessor system 1300 is a point-to-point interconnect system and includes a first processor 1370 and a second processor 1380 that are coupled via a point-to-point interconnect 1350. Each of processors 1370 and 1380 can be some version of the processor. In one embodiment, 1352 and 1354 are part of a serial, point-to-point coherent interconnect structure such as Intel's Fast Path Interconnect (QPI) architecture. As a result, the present invention can be implemented within the QPI architecture.Although only two processors 1370, 1380 are shown, it should be understood that the scope of the invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.Processors 1370 and 1380 are shown as including integrated memory controller units 1372 and 1382, respectively. Processor 1370 also includes point-to-point (P-P) interfaces 1376 and 1378 as part of its bus controller unit; similarly, second processor 1380 includes P-P interfaces 1386 and 1388. Processors 1370, 1380 can exchange information via point-to-point (P-P) interface 1350 using P-P interface circuits 1378, 1388. As shown in FIG. 13, IMCs 1372 and 1382 couple the processors to respective memories, namely memory 1332 and memory 1334, which may be part of the main memory that is locally attached to the respective processor.Each of the processors 1370, 1380 and the chipset 1390 exchange information via respective P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, 1398. Chipset 1390 can also exchange information with high performance graphics circuitry 1338 via interface circuitry 1392 along high performance graphics interconnect 1339.A shared fast cache (not shown) may be included in either or both of the two processors; also connected to the processor via a PP interconnect such that if the processor is placed in a low power mode, either or both The local fast cache information of the processors can be stored in the shared fast cache.Chipset 1390 can be coupled to first bus 1316 via interface 1396. In one embodiment, the first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not limited this.As shown in FIG. 13, a variety of I/O devices 1314 are coupled to a first bus 1316, and a bus bridge 1318 that couples the first bus 1316 to a second bus 1320. In one embodiment, the second bus 1320 includes a low pin count (LPC) bus. A variety of devices are coupled to the second bus 1320, including, for example, a keyboard and/or mouse 1322, a communication device 1327, and a storage unit 1328, such as a disk drive or other mass storage device, which in one embodiment typically includes instructions/code and Data 1330. Further, the illustrated audio I/O 1324 is coupled to the second bus 1320. 
Note that other architectures are also possible in which the components and interconnect architectures included are variable. For example, instead of the point-to-point architecture of Figure 13, the system can implement a multipoint bus or other such architecture.The phrase "one embodiment" or "an embodiment" or "an embodiment" or "an embodiment" or "an embodiment" or "an" or "an" The appearances of the phrase "in one embodiment"While the invention has been described in terms of several embodiments, it will be understood by those skilled in the art Come to practice. The description is therefore to be considered as illustrative and not restrictive. |
A microcontroller architecture that adds a dedicated bit in the op-code decode field to force data access to take place on a page of the random access memory (RAM) for that instruction. This allows the user to have any page selected and still have direct access to the special function registers or the register variables that are located on a pre-defined page of the RAM. The setting of the dedicated bit will not affect the current operation of the microcontroller nor will the setting of the bit modify the currently selected address stored in a page select register currently being used by the microcontroller. |
What is claimed is: 1. A paging scheme for a microcontroller that uses data random access memory to allow tracking of a currently selected address in said random access memory comprising the steps of: (a) linearizing an entire address range of said random access memory; (b) dividing said linearized address range of said random access memory into a plurality of pages, wherein each of said plurality of pages is selected from the group consisting of 256 bytes and 64K bytes in size; (c) dedicating a page of said random access memory to special and general purpose registers; and (d) dedicating a bit in each op-code instruction of said microcontroller which when set forces data access to take place on said dedicated page while not affecting current operations of said microcontroller and not modifying said currently selected address stored in a page select register being used by said microcontroller. 2. The paging scheme for a microcontroller according to claim 1, wherein said step of dedicating a bit in each op-code instruction of said microcontroller further comprises the step of dedicating a bit in only numeric processing op-code instructions of said microcontroller. 3. The paging scheme for a microcontroller according claim 2, wherein the step of dedicating a bit in only numeric processing op-code instructions of said microcontroller further comprises the step of removing non-numeric processing op-code instructions from an instruction decode map of said microcontroller to allow adding said dedicated bit in only said numeric processing op-code instructions of said microcontroller without increasing a size of said instruction decode map for said microcontroller. 4. A microcontroller having a forced page paging architecture comprising: (a) system memory having an entire address range that is linearized, said system memory being arranged into a plurality of pages, each of said plurality of pages having a size selected from the group consisting of 256 bytes and 64K bytes, one page of said plurality of pages being dedicated to special and general purpose registers; and (b) said system memory comprising a plurality of op-code instructions, each op-code instruction having a dedicated bit which when set forces data access to take place on said dedicated page while not affecting current operations of said microcontroller and not modifying a currently selected address stored in a page select register being used by said microcontroller. 5. A microcontroller according to claim 4, wherein said dedicated bit is placed only in numeric processing op-code instructions of said microcontroller. 6. The microcontroller according to claim 4, wherein each op-code instruction is 12 bits wide, with the first six bits defining the instruction, the second six bits defining the address where the instruction is executed and the dedicated bit is added to the first six bits. 7. The microcontroller according to claim 4, wherein each op-code instruction is 14 bits wide with the first seven bits defining the instruction, the second seven bits defining the address where the instruction is executed and the dedicated bit is added to the first seven bits. 8. The microcontroller according to claim 4, wherein each op-code instruction is 16 bits wide with the first eight bits defining the instruction, the second eight bits defining the address where the instruction is executed and the dedicated bit is added to the first eight bits. |
FIELD OF THE INVENTION This invention relates generally to microcontrollers and, more specifically, to a random access memory paging scheme for a microcontroller that will allow a user to have any page selected in the random access memory of the microcontroller and still have direct access to special function registers or the register variables without modifying the page select register of a current instruction. BACKGROUND OF THE INVENTION Current microcontrollers, including PIC microcontrollers, use a random access memory (RAM) paging scheme to address all the data memory. This scheme is extremely cumbersome in that it takes several instructions to ensure that the user is writing or reading the proper address in RAM. It also complicates the job of the C-compiler since the C-compiler must keep track of which page is currently selected in RAM. This presents even more problems when handling interrupts. In classic microcontroller architecture, increasing the op-code field to handle larger addresses would solve the address paging problem. However, increasing the op-code field has the disadvantage of increasing the size of the microcontroller and thus increasing the overall cost of the microcontroller. Another way to alleviate the RAM paging problem is to map all special function and register dedicated memory space that is available in every bank or page. This wastes precious RAM space since every location that is mapped takes up one general purpose RAM location in every bank. If the micro has eight (8) pages, seven (7) locations of RAM are wasted. Therefore, a need existed to provide an improved microcontroller architecture and paging scheme. The improved microcontroller architecture and paging scheme must allow for direct access to special function registers. The improved microcontroller architecture and paging scheme must allow direct access to special function registers without modifying the page select register of the current instruction being used by the microcontroller. The improved microcontroller architecture and paging scheme must further allow for direct access to special function registers without increasing the size of the microcontroller. SUMMARY OF THE INVENTION In accordance with one embodiment of the present invention, it is an object of the present invention to provide an improved microcontroller architecture and paging scheme. It is another object of the present invention to provide an improved microcontroller architecture and paging scheme that allows direct access to special function registers without modifying the page select register of the current instruction being executed by the microcontroller. It is still another object of the present invention to provide an improved microcontroller architecture and paging scheme that allows direct access to special function registers without increasing the size of the microcontroller. In one embodiment, the present invention provides a paging scheme for a microcontroller that uses data random access memory to allow tracking of a currently selected address in the random access memory. The method comprises the step of dedicating a bit in each op-code instruction of the microcontroller. When the bit is set, the bit forces data access to take place on a section of the random access memory storing special and general purpose registers while not affecting current operations of the microcontroller. 
Even when set, the dedicated bit will not modify the currently selected address stored in the page select register currently being used by the microcontroller. The method may further comprise the steps of: linearizing an entire address range of the random access memory; and dedicating a specific address section of the random access memory to the special and general-purpose registers. The specific address section that is so dedicated can be any page within the memory. This is a useful feature of the present invention as it enables the utilization of, for example, programs that must use specific portions of memory (for instance the first page (0) or the last page (f)). In accordance with another embodiment, the present invention provides a microcontroller having forced page architecture. The microcontroller has a random access memory that has an entire linearized address range. The random access memory is divided into plurality of pages wherein one page is dedicated to special and general purpose registers. A dedicated bit in each op-code instruction of the microcontrollers is used to force data access to take place on a page of the random access memory that stores the special and general purpose registers. The setting of the dedicated bit will not affect the current operations of the microcontroller nor will the setting of the bit modify the currently selected address stored in the page select register currently being used by the microcontroller. The foregoing and other objects, features, and advantages of the invention will be apparent from the following, more particular, description of the preferred embodiments of the invention, as illustrated in the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a simplified data memory map of a prior art paging scheme for a microcontroller to address data memory. FIG. 2 is a simplified data memory map of an 8-bit microcontroller having a forced page paging scheme. FIG. 3 is a simplified diagram of a 16-bit op-code instruction. FIG. 4 is a simplified data memory map of a 16-bit microcontroller having a forced page paging scheme. FIG. 5 is a simplified diagram of a 12-bit op-code instruction. FIG. 6 is a simplified diagram of a 14-bit op-code instruction. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Referring to FIG. 1, a simplified block diagram of a prior art paging scheme 10 for a microcontroller to address data memory is shown. As stated above, all special function and register variables 12 are mapped in the first page of the RAM. However, since the special function registers 12 have to be accessible all the time, the special function registers 12 are mapped into every bank (i.e., Bank 1-7). This wastes precious RAM space since every location that is mapped takes up one general purpose RAM location. Referring to FIG. 2, a microcontroller 20 with forced page architecture is shown. The microcontroller 20 uses a random access memory (RAM) 22 for storing data. The size of the RAM 22 is based on the particular use of the microcontroller 20. As can be seen from FIG. 2, the entire address range of the RAM 22 is linearized. By linearizing the address range, the problems associated with banking and page bits of the prior art are removed. However, in general, many of the op-code instructions of the microcontroller 20 are limited in address space. In the preferred embodiment of the present invention, the microcontroller 20 is an 8-bit PIC microcontroller. 
Thus, many of the op-code instructions of the microcontroller 20 are limited to an 8-bit address. For this reason, the linear address range is broken into a plurality of pages. If the microcontroller 20 is an 8-bit microcontroller, the RAM 22 is divided into a plurality of 256 byte pages. However, as those of ordinary skill in the art will appreciate, the microcontroller 20 may be a 16-bit microcontroller or other size microcontroller. In the case where the microcontroller 20 is a 16-bit microcontroller, the RAM 22 can be divided into a plurality of 64K byte pages, as shown in FIG. 4. It should be appreciated by those skilled in the art, however, that other configurations are possible. One page 24, known hereinafter as the forced page, is used for storing the special function registers 12 (shown in FIG. 1) and general purpose registers 14 (shown in FIG. 1). As stated above, these registers 12 and 14 need to be accessible at all times. However, in accordance with the present invention, any of the pages (e.g., (0) through (f)) can be used for storing the special function registers 12 and general purpose registers 14. An example of a possible use of this feature is during the call of an interrupt. For example, inside of an interrupt service routine, the user will not have to worry about the address stored in the page select register. In order to access special function registers and/or general purpose registers 12 and 14 the user simply selects the forced page bit 36. In the preferred embodiment of the present invention, the forced page 24 is broken into two 128 byte sections. The first 128 section stores the special function registers while the second 128 section stores the general purpose registers. In the case wherein the microcontroller 20 is a 16-bit microcontroller, the forced page 24 may be broken into two 32K byte sections. The first 32K section stores the special function registers while the second 32K section stores the general-purpose registers, as shown in FIG. 4. It should be appreciated by those skilled in the art, however, that other configurations are possible. Referring now to FIGS. 2 and 3, in order to have the special and general purpose registers 12 and 14 accessible at all times, a bit 36 is dedicated in each op-code instruction 30 of the microcontroller 20 which when set forces data access to take place on, for example, the first page 24 (ie., page 0) of the RAM 22, or the last page (i.e., page (f)) of the RAM 22. As pointed out above, the present invention can be implemented using any of the pages available within the available memory (e.g. RAM 22). To facilitate this feature, one or more specific page select bits can be stored in a separate register. The setting of the dedicated bit does not affect the current operation of the microcontroller 20 nor does it modify the currently selected address stored in the page select register currently being used by the microcontroller 20. Thus, no matter where the user is in the RAM 22, if the bit 36 is set, the current instruction will always affect the forced page (the page where data access is forced to, (e.g., page (0) or page (f)) which stores the special and general purpose registers 12 and 14. Thus, if a user is in the general purpose RAM area (i.e., any page except the forced page) and receives an interrupt, the interrupt service routine can set the dedicated bit 36 in the op-code instruction 30. 
The user may then deal with the special and general purpose registers 12 and 14 without affecting anything else the microcontroller 20 was doing. When the interrupt has been properly serviced, the microcontroller 20 may go back to the current address location in the RAM 22 since the address location was not modified during the service of the interrupt. In the preferred embodiment of the present invention for an 8-bit microcontroller 20, the op-code instruction 30 is a 16-bit instruction. The first 8-bit section 32 defines the instruction and tells the microcontroller 20 what to do. The second 8-bits section 34 defines the address where the instruction is to be executed. The dedicated bit 36 is added to the first 8-bit section 32 of the op-code instruction 30 in order not to alter the address stored in an op-code instruction 30 when the dedicated bit 36 is set. In a first alternate embodiment of the present invention, the op-code instruction 30' is a 12-bit instruction, as shown in FIG. 5. In this embodiment, the first section 32' is 6-bits wide, defines the instruction, and tells the microcontroller 20 what to do. The second section 34' is 6-bits wide and defines the address where the instruction is to be executed. The dedicated bit 36 is added to the first 6-bit section 32' of the op-code instruction 30' in order not to alter the address stored in an op-code instruction 30' when the dedicated bit 36 is set. In a second alternate embodiment of the present invention, the op-code instruction 30" is a 14-bit instruction, as shown in FIG. 6. In this embodiment, the first section 32" is 7-bits wide, defines the instruction and tells the microcontroller 20 what to do. The second section 34' is 7-bits wide and defines the address where the instruction is to be executed. The dedicated bit 36 is added to the first 7-bit section 32" of the op-code instruction 30' in order not to alter the address stored in an op-code instruction 30" when the dedicated bit 36 is set. As those of ordinary skill in the art will appreciate, instructions 30 of any width (i.e., any multiple of 2) can be used. In the preferred embodiment of the present invention, the dedicated bit 36 is only added to numeric processing op-code instructions of the microcontroller 20. By removing a few non-numeric processing op-code instruction decode map of the microcontroller 20, the dedicated bit 36 may be added in the numeric processing op-code instructions of the microcontroller 20 without increasing the size of the instruction decode map of the microcontroller 20. While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form, and details may be made therein without departing from the spirit and scope of the invention. |
A computer system having a caches interconnected in parallel includes a first and a second cache module directly coupled to an address generating line for parallel lookup of data and directly connected to a data generating line. |
CLAIMS 1. A computer system having caches interconnected in parallel comprising: a first and a second cache module directly coupled to an address generating line for parallel lookup of data and directly connected to a data generating line. 2. The computer system according to claim 1, further comprising hiVmiss logic coupled between said first and second cache modules. 3. The computer system according to claim 1, wherein if a hit occurs in either of said first cache module or said second cache module, a lookup process in the other cache module is canceled. 4. The computer system according to claim 1, wherein if a miss occurs in said first cache module and said second cache module, said first cache module or said cache module is selected to receive data from memory. 5. The computer system according to claim 1, wherein data stored in said first cache module and data stored in said second cache module is the same data type. |
240 1 967 DUAL CACHE WITH MULTIPLE INTERCONNECTION OPERATIONMODESFIELD OF THE INVENTION The present invention yen.eral!, relates to c:ompter systems More particularly, the present invention relates to a method and apparatus of improving, performance in computer systems by arranging cache modules in several interconnected operational modes] O BACKGROUND OF THE, INVENTION A cache or cache module as used interchangeably throughout this specification. Is Intended to enhance the speed at which information and data are retrieved. A main memory typically stores a large amount of data which is time consuming to retrieve The cache module contains a copy of portions of the main memory When a processor attempts to read a word of memory, a check is made to determine if the word is in the cache module If so, the word is delivered to the processor If not, a block of main memory. consisting of some fixed number of words, is read into the cache module and then the word is delivered to the processor The main memory consists of up to 2" addressable words, with each word having a unique n- bt address For mapping purposes, this memory is considered to consist of a number of fixed-len lath b}ock.s of K words each That is, there are M=2n /K blocks The cache module consists of C lines of K words each, and the number of lines is considerably less than the number of main memory blocks. FIG I is a block diagram I]ustrating a simplified picture of a network Involving a processor 19 eighth cache module 40 cormected via address, control and data liTles 43, 44 aTld 45 respectively Address and data liT1es 43 and 45 also attached to address and data buffers 41 and 42, respectively which attached to system bus 20 from which nnanT me?noT-y (not shown) is reached Typically, processor 12 generates an address of a word to be read. If a "hit" occurs, (the word is contained in cache module 40), the afford is dc]Tvercd to processor i When this cache hit occurs, the data and address buffers 42 and 41, respectively, are disabled and communication is only between the processor]2 and the cache module 40, with no system bus traffic. '-en a cache "miss" occurs, (the word is nut containecl m cache module 40), the desired address TS loaded from main memory (not shown) Onto system bus 20 and the data is returned throug]1 data buffer 42 to both the cache module 40 and the main rnernory. Watts a cache miss, a 1iTle in the cache may be overwritten or copies out of cache module 40 when new data is stored in the cache module This overwritten dine TS refeTTed to as a "victim block" or a "victim]inc" The basic structure of a conventional multi-processor computer system 10 employing several cache modules is shown in FIG. 2. 
Computer system 10 includes processors 12, 120 and 220 as shown which arc connected to venous peripheral devices including input/output (LO) devices 14 (SUC]1 as a display monitor, keyboard, graphical pointer (mouse), and a permanent storage device (hard disk)' mcTnory 16 (such as random access memory or RAM) that TS used by processors 12, 120 and 220 to carry out program mstructTons, and FTnJlware] 8 whose primary purpose is to seek out and load an operating system front one of the perip]lera]s (usually the permanent memory device) whenever computes systcin 10 is first lunged on Processors 12, 120 and 22() communicate with the peripheral devices by various meaTJs, ncludiTlg a genera]Tzcd Interconnect or system bus 2(), or dircct-menTory-access channels (not shown) I'roccssor 12, as we]] as each of the other processors 120 and 220, Tnc]udes a processor core 09 having a pltrahty of re,rUSteTS and eyec!tTon units, \'>,ich 'watt'.' C'Ut progrram nstT-uctoTTs in order to operate the computer system 10 As ShONN'n, processor] 2 further mcludcs one or more cache modules, such as an instruction cache 24 and a data cache 26, which are implemented using h?gh-.speed memory devices. As described above, cache modules are commonly used to temporarily store values that might be repeatedly accessed by the processor, ?m order to speed up processing by avoiding the longer step of loading the values fron? memory 16. These cache modules are referred to as "on-board" when they are integrally packaged with the processor core on a smglc integrated chip 28. Each cache module is associated with a cache controller (not shown) that manage.) the transfer of data and instructions between the processor core 22 and the cache. Processor.2 can include additional cache modules, such as cache module 30, whicl? is referred to as a level 2 (L2) cache since it supports the onboard (level 1) caches ?4 and 26. In other words, cache module 30 acts as an Intermediary between memory] 6 and the on-board caches, and can store a much larger amount of Information (ustructons and data) than the on- board caches can, but at a longer access penalty. Cache module 30 is connected to system bus ?0, and all loading of information from memory 16 Alto processor core 22 comes through cache inodu]e 30. One drawback to the conventional cache module arrangement as shown Is that the cache modules do not benefit from being interconnected. Without the cache modules being interconnected, it is Inefficient to retrieve data since each cache must be searched Individually if data Is not found in the first cache that IS searched. Accordmgly, what Is needed is an effective and efficient method for directly connecting cache modules for retrieval of information. S:J)4MARY OF TF-ON' In accordance with the present ulvent?on there is provided a compute? system having a caches mteTCOnneCted m parallel Includes a fist and a second cache module directly coupled to an address generating line for parallel lookup of data and dn-ect]y COllilC-Cted Hi a data rene!-at!??r!???BRIEF DESCRIPTION OF TITE DRAWINGS FIG. 1 is a block diagram m illustrating a simplified picture of a network involving a processor with a cache module FIG 2 is a block diagram illustrating a prior art computer system FIG 3 is a block diagram illustrating the features of a typical cache module FIG 4 is a block diagram illustrating a serial interconnection mode of two cache modules according to an embodiment of the present invention. 
FIG is a block diagram illustrating a parallel interconnection mode of two cache modules according to an embodiment of the present invention. FIG. 6 is a block diagram illustrating a serial and parallel interconnection mode of two cache modules according to an embodiment of the present invention. Fl(] 7 is a flow diagram illustrating a method for transferring data n a serial interconnection mode of two cache modules according to an embodiment of the present invention. I 5 FIG 8 is a flow diagram illustrating a method for transferring data in a parallel interconnection mode of two cache modules according to an embodiment of the present InventionDETAILED DESCRIPTION Embodiments of the present invention relate to an apparatus of arranging cache modules in a serial, parallel and serial/parallel interconnection mode. ACCOFding 0 an embodiment of the present invention, a computer system ha'in;,r cache modules interconnected in series includes a first and a second cache module directly coupled to an address generating line for parallel lookup of data and data conversion logic coupled between the first cache module and said second cache module. According to an alternative embodiment of the present invention, a computer system having caches Interconnected in parallel includes a first and a second cache module directly coupled to an address generating dine for parallel lookup of data and directly connected to a data generating line According to another embodiment of the present invention, a computer system having cache modules interconnected m seres/paralle] includes a first and a second cache module directly coupled tO an address generating line for parallel lookup of data and data conversion logic coupled between the first cache module and said second cache module, wherein the first cache module is coupled to a data generating line and the second cache module is coupled to a]nultplexer providing converted data from memory and from the first cache module. The following de.scnpt]on is presented to enable one of c.'rdnary skill]n the art to make and use the invention. Various modifcatio]ls to the embodiments will be readily apparent to those skilled In the art and the generic principles herein may be applied to other embodiments. Thus, the present invention is not intended to the embodiments shown but is to be accorded to be limited to the widest scope consistent with the principles and features described herein. FIG. 3 Is a block diagram Illustrating the features of a typical cache module 50. Cache module 50 Includes a Tag array 51, 1-1t/Mss logic 52, Replacement logic 53, Data array 54 and Data output selection 55. Tag arTay 51 is coupled to H]t/Mss logic 52, Pcp]acement logic 53 and Data array 54. F:[]t/Mss logic Is additionally coupled to Replacement logic 53 and Data output selection 55. Data output selection is further coupled to Data array 54. Cache module 50 receives an address from processor 12 (not shown) over an address genera]ng One (address) 56. ':Lhe address Is sent to i ag array 5 i and -t/Miss logic 52 Tag array 51 stores tags associates with each cache line of cache 50. Hit/Miss logic 52 compares the address from processor 12 with a corresponding tag array value stored In Tag array 51. Ht/M]ss logic 52 also produces a hit/miss indication as tO whethe] the tag array value Is located ha Lag array 51 If a cache hit occurs, H]t/Mss logic forNva]ds an Indication that the tag array value Is located In Tag array 51 to Data output selection 55. 
Data output selection 55 selects data front Data array 54 based Oil a decision fi-om I-lt/Mss logic 59 A.lfc;na,Nel,v, If I-li./lIiis lc,gic 52 'ur-vad.s cut Decagon gnat the tag value Is not located In Tag array 51 this Indication is sent to Replacement logic 53 W] l]C]1 determines a "victim line" when this cache miss occurs. Data Is supplied by memory l 6;:ia data generating One (data-in line) 57 to Data array 54 t'or output. FIG. 4 llusn-ates a serial intercomection mode of two cache modules according to an embodiment of the present Invention In the serial mterconnecton mode, cache n1odule l may be a level 2 (L2) cache and cache module 2 may be a level l (Al) cache for example. Each cache module Includes the same circuitry of cache nodule as shown in FIG. 3. The serial interconnected mode may be used for data types that are transt'onned in character between a memory image and a usage with the processor. For example, the L2 cacl.1e may cache a memory image, and the Ll cache may contain a cache of processor data. An example of data that could utilize this behavior is a single-precison floating point data (in the L2 cache) transformed to 32 bit integer data (in the Ll cache). In the serial Interconnection mode, cache module l and cache module 2 are each directly coupled to an address generating line (address hoe) 60. Address line 60 may also be coupled to processor 12. Cache module l is further coupled to a data generating lute (Oata-In) 61 and a hit/miss generating line (hTt/mTss) 65. Cache module I outputs data on data output line (Data-Out) 62. Data output line 62 TS coupled to a data converter 70. Data converter 70 converts the format of data from cache module l to a format used by cache module 2. The output of data converter 70 is supplied to cache module 2 via data generating dine (Data-In) 63. Cache module 2 outputs dare via Clara OUtpUt line (Data-Out) 64. Adanonaiiy, cache module 2 sends an indication to cache module l via hTt/mss generating line (hTt/miss) 65 whether data was located in cache module 2 or not. FIG. 7 11ustratcs a method for transferring data ITS the serial Interconnection anode of two cache modules according to an embodiment of the present h1veTltiom Tle method begins by recevn1g an address from processor 12 by cache module l and cache module 2 for parallel lookup (Step 70()). A detemTnaton is made as to sNThcther cache module 2 stores the requested data associated with the address from processor ]2 (Step 710) If cache nodule 2 stores that data, the data IS output and the lookup In cache module l stops (Step 7:,(,) Al,eirati-v-ely-' if cclche module 2 does not have the stored data, a detennnatoT1 Is made as to whether cache module] stores the data i, (Step 700). If cache nodule l stores the data, the data IS first converted from a format used by cache module 1 to a format used by cache module 2 (Step 750) and then the concreted data Is moved from cache module I k' cache module 2 for OUIpUI (Step 760). If, however, the data is not stored in cache module l, the data Is loaded from memory into cache module l (Step 740), converted (Step 750) and moved to cache module 2 for output (Step 760). FIG. 5 illustrates a parallel ntcrconnection mode of two caches according to an embodiment of the present Invention. In the parallel interconnection mode, cache module I and cache module 2 may be the same type of cache module (e.g., Ll or L2 caches) and are interconnected in parallel as a single large cache module. 
in the parallel interconnector mode, cache module l and cache module 2 are each directly coupled to an address generating line (address line) 60. Address line 60 may also be coupled to processor 12. Cache module l and cache module 2 are also each directly coupled to a data generating line (Data-In) 61. Data generating brie 61 may also be coupled to memory 16. Hit/Mss generating brie 65 Is coupled between cache module 1 and cache module 2 for use With simultaneous lookup of requested data In order to know the status of the other cache module. Cache module l and cache module 2 output data via data output lines (Data-Out) 62 and 64 respectively. A multiplexer 200 Is used to output data from either cache module l or cache module 2. Also Includes Is a select vctnn unit 72 which determine which cache module to use if to ic.iteve data ttOm itteiti(J{y ii} the sLuaiion where Partner cache module has the data. Select victim unit 72 can for example, alternate between the cache modules In assigning the cache module to retrieve data fi-om memory or can use any other method of assigning a cache module to retrieve data known In the art. FIG llusn-aes a method for transferring data in the parallel n1terconnecton mode of two caches accc->rdnng to an embodn1lent of the present n1venton The method begins by receiving an address from processor 12 by cache module I and cache module 2 for parallel lookup (Step 810). A detenninaton is made as to whether the data Is in cutler cache module (Step 810). if the data Is in at least one of the cache r..od ales, the d to is output by that c..cl,c,,,odulc arid the lookout fo1- Ah other cache Is canceled (Step 830). Alten1atvely, if neither cache module has the data a selection Is made using select victim unit 79 to load data from memory by one of the cache modules and cancel the lookup for the other cache module (Step 820). FIG 6 illustrates a serial/parallel intercomecton mode of two cache modules according to an embodiment of the present invention. In the serial/parallel interconnection mode, cache module 1 may be a level 2 (L2) cache, and cache module 2 may be a level I (Ll) cache for example. Altematvely, cache module] and cacl1e module 2 may be the same type of cache. In the sepal/parallel interconnection node, cache m<-:lule I and cache module 2 are each directly coupled to an address generating line (address One) 60. Address line may also be coupled to processor 12. Cache module 1 is further coupled to a data generating One (Data-In) 61 and a hit/miss generating line (hit/miss) 65 Is coupled between cache module 1 and cache modu]e 2. In addition, select vchm unit 72 Is also coupled between cache module l and cache modu]e 2. Cache modu]e l outputs data on data output line (Data-Out) 62. Data output dine 62 is also coupled to a data converter 70. Data converter converts the format of data from cache modu] e l to a format used by cache module 2 The output of data converter 70 is supplied to a mu]tp]exer 75 via data generating lme (Data-In) 63. Data generating]me 6] is also supplied to multp]exer 75. Mu]tp]exer 75 determines what type of data (e.g., data from memory or data from cache modu]e l) to Input and send to cache modu]e 2. Data from cache modu]e l is sent via data output line 62 to mu]tp]exer 200 and data From cache module 2 is output via data output line 64 to mnitpioxer 70(). Multiplexer determines the con ect data to output. Several embodiments of the present mventon are specifically illustrated and/or described herem. 
However, it until] be appreciated that modifications and variations of the e:bodimets of the present invention are covered by the above teac].ings and within tle purview of the appended elands Shout departing Mom the spins and untended scope of the inventionX |
Provided are a method and system for allocating read requests in a solid state drive coupled to a host. An arbiter in the solid state drive determines which of a plurality of channels in the solid state drive is a lightly loaded channel of a plurality of channels. Resources for processing one or more read requests intended for the determined lightly loaded channel are allocated, wherein the one or more read requests have been received from the host. The one or more read requests are placed in the determined lightly loaded channel for the processing. In certain embodiments, the lightly loaded channel is the most lightly loaded channel of the plurality of channels. |
WHAT IS CLAIMED IS1. A method, comprising:determining, by an arbiter in a solid state drive, which of a plurality of channels in the solid state drive is a lightly loaded channel in comparison to other channels;allocating resources for processing one or more read requests intended for the determined lightly loaded channel, wherein the one or more read requests have been received from a host; andplacing the one or more read requests in the determined lightly loaded channel for the processing.2. The method of claim 1, wherein the determined lightly loaded channel is a most lightly loaded channel in the plurality of channels, and wherein subsequent to placing the one or more read requests in the determined most lightly loaded channel for the processing, the determined most lightly loaded channel is as close to being fully utilized as possible during the processing.3. The method of claim 1, wherein the one or more read requests are included in a plurality of read requests intended for the plurality of channels, and wherein an order of processing of the plurality of read requests is modified by the placing of the one or more read requests in the determined lightly loaded channel for the processing.4. The method of claim 3, wherein modifying the order of processing of the plurality of requests preferentially processes the one or more read requests intended for the determined lightly loaded channel over other requests.5. The method of claim 1, the method further comprising:receiving, by the solid state drive, the one or more read requests from the host via a peripheral component interconnect express (PCIe) bus, wherein each of the plurality of channels in the solid state drive has an identical bandwidth.6. The method of claim 5, wherein a sum of bandwidths of the plurality of channels equals a bandwidth of the PCIe bus.7. The method of claim 1, wherein at least one of the plurality of channels is coupled to a different number of NAND chips in comparison to other channels of the plurality of channels.8. The method of claim 1, wherein if the one or more read requests are not placed in the determined lightly loaded channel for the processing then read performance on the solid state drive decreases by over 10% in comparison to another solid state drive in which all channels are coupled to a same number of NAND chips.9. The method of claim 1, wherein the allocating of the resources for the processing is performed subsequent to determining by the arbiter in the solid state drive which of the plurality of channels in the solid state drive is the lightly loaded channel.10. The method of claim 1, wherein the arbiter polls relatively lightly loaded channels more often than relatively heavily loaded channels to preferentially dispatch re-ordered read requests to the relatively lightly loaded channels. 11. The method of claim 1 , the method further comprising:associating with each of the plurality of channels a data structure that maintains outstanding reads that are being processed by the channel; andmaintaining the one or more read requests that have been received from the host in an incoming queue of read requests received from the host.12. 
An apparatus, comprising:a plurality of non-volatile memory chips;a plurality of channels coupled to the plurality of non-volatile memory chips; andan arbiter for controlling the plurality of channels, wherein the arbiter is operable to:determine which of the plurality of channels is a lightly loaded channel in comparison to other channels; allocate resources for processing one or more read requests intended for the determined lightly loaded channel, wherein the one or more read requests have been received from a host; andplace the one or more read requests in the determined lightly loaded channel for the processing.13. The apparatus of claim 12, wherein the non- volatile memory chips comprise NAND chips, wherein the lightly loaded channel is a most lightly loaded channel in the plurality of channels, and wherein subsequent to placing the one or more read requests in the determined most lightly loaded channel for the processing, the determined most lightly loaded channel is as close to being fully utilized as possible during the processing.14. The apparatus of claim 12, wherein the one or more read requests are included in a plurality of read requests intended for the plurality of channels, wherein the plurality of read requests are received from the host, and wherein an order of processing of the plurality of read requests is modified by the placing of the one or more read requests in the determined lightly loaded channel for the processing. 15. The apparatus of claim 14, wherein modifying the order of processing of the plurality of requests preferentially processes the one or more read requests intended for the determined lightly loaded channel over other requests.16. The apparatus of claim 12, wherein the apparatus receives the one or more requests from the host via a peripheral component interconnect express (PCIe) bus, wherein each of the plurality of channels has an identical bandwidth.17. The apparatus of claim 16, wherein a sum of bandwidths of the plurality of channels equals a bandwidth of the PCIe bus.18. The apparatus of claim 12, wherein the non- volatile memory chips comprise NAND chips, and wherein at least one of the plurality of channels is coupled to a different number of NAND chips in comparison to other channels of the plurality of channels.19. The apparatus of claim 12, wherein the non-volatile memory chips comprise NAND chips, and wherein if the one or more read requests are not placed in the determined lightly loaded channel for the processing then read performance decreases by over 10% in comparison to another apparatus in which all channels are coupled to a same number of NAND chips.20. The apparatus of claim 12, wherein the allocating of the resources for the processing is performed subsequent to determining by the arbiter which of the plurality of channels is the lightly loaded channel.21. The apparatus of claim 12, wherein the arbiter polls relatively lightly loaded channels more often than relatively heavily loaded channels to preferentially dispatch re-ordered read requests to the relatively lightly loaded channels.22. The apparatus of claim 12, wherein the arbiter is further operable to:associate with each of the plurality of channels a data structure that maintains outstanding reads that are being processed by the channel; andmaintain the one or more read requests that have been received from the host in an incoming queue of read requests received from the host.23. 
An system, comprising:a solid state drive;a display; anda processor coupled to the solid state drive and the display, wherein the processor sends a plurality of read requests to the solid state drive, and wherein in response to the plurality of read requests, the solid state drive performs operations, the operations comprising:determine which of a plurality of channels in the solid state drive is a lightly loaded channel in comparison to other channels in the solid state drive;allocate resources for processing one or more read requests selected from the plurality of read requests, wherein the one or more read requests are intended for the determined lightly loaded channel; andplace the one or more read requests in the determined lightly loaded channel for the processing. 24. The system of claim 23, wherein solid state drive further comprises a plurality of non- volatile memory chips including NAND or NOR chips, wherein the lightly loaded channel is a most lightly loaded channel in the plurality of channels, and wherein subsequent to placing the one or more read requests in the determined most lightly loaded channel for the processing, the determined most lightly loaded channel is as close to being fully utilized as possible during the processing.25. The system of claim 23, wherein an order of processing of the plurality of requests is modified by the placing of the one or more read requests in the determined lightly loaded channel for the processing. |
REDUCTION OF PERFORMANCE IMPACT OF UNEVEN CHANNEL LOADING IN SOLID STATE DRIVESBACKGROUNDA solid state drive (SSD) is a data storage device that uses integrated circuit assemblies as memory to store data persistently. Many type of SSDs use NAND- based or NOR-based flash memory which retains data without power and is a type of non-volatile storage technology.Communication interfaces may be used to couple SSDs to a host system comprising a processor. Such communication interfaces may include a Peripheral Component Interconnect Express (PCIe) bus. Further details of PCIe may be found the publication entitled, "PCI Express Base Specification Revision 3.0," published on November 10, 2010, by PCI-SIG. The most important benefit of SSDs that communicate via the PCI bus is increased performance, and such SSDs are referred to as PCIe SSD.BRIEF DESCRIPTION OF THE DRAWINGSReferring now to the drawings in which like reference numbers represent corresponding parts throughout:FIG. 1 illustrates a block diagram of a computing environment in which a solid state disk is coupled to a host over a PCIe bus;FIG. 2 illustrates another block diagram that shows how an arbiter allocates read requests in an incoming queue to channels of a solid state drive, in accordance with certain embodiments;FIG. 3 illustrates a block diagram that shows allocation of read requests in a solid state drive before starting prioritization of the most lightly populated channel and a reordering of host commands, in accordance with certain embodiments; FIG. 4 illustrates a block diagram that shows allocation of read requests in a solid state drive after prioritization of the most lightly populated channel and a reordering of host commands, in accordance with certain embodiments;FIG. 5 illustrates a first flowchart for preventing uneven channel loading in solid state drives, in accordance with certain embodiments;FIG. 6 illustrates a second flowchart for preventing uneven channel loading in solid state drives, in accordance with certain embodiments; andFIG. 7 illustrates a block diagram of computational device, in accordance with certain embodiments.DETAILED DESCRIPTIONIn the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several embodiments. It is understood that other embodiments may be utilized and structural and operational changes may be made.The increased performance of PCIe SSDs may be primarily because of the number of channels implemented in the PCIe SSDs. For example, in certain embodiments, certain PCIe SSDs may provide improved internal bandwidth via an expanded 18-channel design.In a PCIe based solid state drive, the PCIe bus from the host to the solid state drive may have a high bandwidth (e.g., 40 gigabytes/second). The PCIe based solid state drive may have a plurality of channels where each channel has a relatively lower bandwidth in comparison to the bandwidth of the PCIe bus. For example, in a solid state drive with 18 channels, each channel may have a bandwidth of about 200 megabytes/second.In certain situations, the number of NAND chips that are coupled to each channel are equal in number, and in such situations, in case of random but uniform read requests from the host, the channels may be loaded roughly equally, i.e., each channel over a duration of time is utilized roughly the same amount for processing read requests. 
It may be noted that in many situations, more than 95% of the requests from the host to the solid state drive may be read requests, whereas less than 5% of the requests from the host to the solid state drive may be write requests and proper allocation of read requests to channels may be of importance in solid state drives.However, in certain situations, at least one of the channels may have a different number of NAND chips coupled to the channel in comparison to the other channels. Such a situation may occur when the number of NAND chips is not a multiple of the number of channels. For example, if there are 18 channels and the number of NAND chips is not a multiple of 18, then at least one of the channels must have a different number of NAND chips coupled to the channel, in comparison to the other channels. In such situations, channels that are coupled to a greater number of NAND chips may be loaded more heavily than channels that coupled to a fewer number of NAND chips. It is assumed that each NAND chip in the solid state drive is of identical construction and has the same storage capacity.In case of uneven loading of channels, some channels may be backlogged more than other and the PCIe bus may have to wait for the backlog to clear before completing the response to the hostCertain embodiments provide mechanisms to prevent uneven loading of channels even when at least one of the channels has a different number of NAND chips coupled to the channel in comparison to the other channels. This is achieved by preferentially loading the most lightly loaded channel with read requests intended for the most lightly loaded channel, and by reordering the processing of pending read requests awaiting execution in a queue in the solid state drive. Since resources are allocated when a read request is loaded onto a channel, by loading the most lightly loaded channels with read requests, resources are used only when needed and are used efficiently. As a result, certain embodiments improve the performance of SSDs.FIG. 1 illustrates a block diagram of a computing environment 100 in which a solid state drive 102 is coupled to a host 104 over a PCIe bus 106, in accordance with certain embodiments. The host 104 may be comprised of at least a processor.In certain embodiments, an arbiter 108 is implemented in firmware in the solid state drive 102. In other embodiments, the arbiter 108 may be implemented in hardware or software, in in any combination of hardware, firmware, or software.The arbiter 108 allocates read requests received from the host 104 over the PCIe bus 106 to one or more channels of a plurality of channels 110a, 110b,...,11 On of the solid state drive 102. In certain embodiments, the channels 110a...110η are coupled to a plurality of non-volatile memory chips, such as NAND chips, NOR chips, or other suitable non- volatile memory chips. In alternative embodiments other types of memory chips, such as chips based on phase change memory (PCM), a three dimensional cross point memory, a resistive memory, nanowire memory, ferro-electric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, spin transfer torque (STT)-MRAM or other suitable memory may also be used.For example, in certain embodiments, channel 110a is coupled to NAND chips 112a...112p, channel 110b is coupled to NAND chips 114a...114q, and channel 11 On is coupled to NAND chips 114a...114r. 
Each of the NAND chips 112a...112p, 114a...114q, 114a...114r are identical in construction. At least one of the channels of the plurality of channels 110a ....11 On has a different number of NAND chips coupled to the channel in comparison to other channels, so there is a possibility of uneven loading of the plurality of channels 110a...110η if the read requests from the host 104 are random and uniform.In certain embodiments, the solid state drive 102 may be capable of storing several terabytes of data or more, and the plurality NAND chips 112a...112p, 114a..l 14q, 116a...116r, each storing several gigabytes of data or more, may be found in the solid state drive 102. The PCIe bus 106 may have a maximum bandwidth (i.e., data carrying capacity) of 4 gigabytes per second. In certain embodiments, the plurality of channels 110a...110η may be eighteen in number and each channel may have a maximum bandwidth of 200 megabytes per second.In certain embodiments, the arbiter 108 examines the plurality of channels 110a...110η one by one in a sequence and after examining all of the plurality of channels 110a...110η loads the least loaded channel with read requests intended for the channel to increase the load on the least loaded channel, in an attempt to perform uniform loading of the plurality of channels.FIG. 2 illustrates another block diagram 200 of the solid state drive 102 that shows how the arbiter 108 allocates read requests in an incoming queue 202 to channels 110a...110η of the solid state drive 102, in accordance with certain embodiments. The arbiter 108 maintains the incoming queue 202, where the incoming queue 202 stores read request received from the host 104 over the PCIe bus 106. The read requests arrive in an order in the incoming queue 202 and are initially maintained in the same order as the order of arrival of the read requests in the incoming queue 202. For example, a request that arrives first may be for data stored in NAND chips coupled to channel 110b, and a second request that arrives next may be for data stored in NAND chips coupled to channel 110a. In such a situation the request that arrives first is at the head of the incoming queue 202 and the request that arrives next is the next element in the incoming queue 202.The arbiter 108 also maintains for each channel 110a...110b a data structure in which an identification of outstanding read requests being processed by the channel are kept. For example, the data structures 204a, 204b,...204n store the identification of the outstanding reads being processed by the plurality of channels 110a, 110b, ....11 On. The outstanding read requests for a channel are the read requests that have been loaded to the channel and that are being processed by the channel, i.e., the NAND chips coupled to the channel are being used to retrieve data corresponding the read requests that have been loaded to the channel.The solid state drive 102 also maintains a plurality of hardware, firmware, or software resources, such as buffer, latches, memory, various data structures, etc., (as shown via reference numeral 206) that are used when a read request is loaded to a channel. In certain embodiments, by reserving resources at the time of loading read requests on the least loaded channel, the arbiter 108 prevents unnecessary locking up of resources.Therefore FIG. 
2 illustrates certain embodiments in which the arbiter 108 maintains the incoming queue 202 of read requests, and also maintains data structures 204a...204n corresponding to the outstanding reads being processed by each channel 110a.. l 10η of the solid state drive 102.FIG. 3 illustrates a block diagram that shows allocation of read requests in an exemplary solid state drive 300, before starting prioritization of the most lightly populated channel and a reordering of host commands, in accordance with certain embodiments. The most lightly populated channel has the least number of read requests undergoing processing by the channel, in comparison to other channels. The exemplary solid state drive 300 has three channels: channel A 302, channel B 304, and channel C 306. Channel A 302 has outstanding reads 308 indicated via reference numerals 310, 312, 314, i.e. there are three read requests (referred to as "Read A" 310, 312, 314) for data stored in NAND chips coupled to channel A 302. Channel B 304 has outstanding reads 316 indicated via reference numeral 318, and channel C 306 has outstanding reads 320 referred to by reference numerals 322, 324.The incoming queue of read requests 326 has ten read commands 328, 330, 332, 334, 336, 338, 340, 342, 344, 346, where the command at the head of the incoming queue 326 is the "Read A" command 328, and the command at the tail of the incoming queue 326 is the "Read B" command 346.FIG. 4 illustrates a block diagram that shows allocation of read requests in the solid state drive 300 after prioritization of the most lightly populated channel and a reordering of host commands, in accordance with certain embodiments.In certain embodiments, the arbiter 108 examines the incoming queue of read requests 326 (as shown in FIG. 3) and the outstanding reads being processed by the channels as shown in the data structures 308, 316, 318. The arbiter 108 then loads the most lightly loaded channel B 304 (which has only outstanding one read request 318 in FIG. 3) with the commands 340, 344 (which are "Read B" command) selected out of order from the incoming queue of read requests 326 (as shown in FIG 3).FIG. 4 shows the situation after the most lightly loaded channel B 304 has been loaded with command 340, 344. In FIG. 4, reference numerals 402 and 404 in the outstanding reads 316 being processed for channel B 304, show the commands 340, 344 of FIG. 3 that have now been loaded into channel B 304 for processing.Therefore, the channels 302, 304, and 306 are more evenly loaded by loading the most lightly loaded of the three channels 302, 304, 306 with appropriate read requests selected out of order from the incoming queue of read requests 326. It should be noted that neither of the commands 328, 330, 332, 334, 336, 338 which were ahead of command 340 in the incoming queue 326 can be loaded to channel B 304, as the commands 328, 330, 332, 334, 336, 338 are read requests for data accessed via channel A 302 or channel C 306. It should also be noted that there is only one arbiter 108 and a plurality of channels, so the arbiter 108 examines the outstanding reads 308, 316, 320 on the channels 302, 304, 306 one by one. 
The channels 302, 304, 306 may of course inform the arbiter 108 when the channels 302, 304, 306 complete processing of certain read requests and the arbiter 108 may keep track of the outstanding read requests on the channels 302, 304, 306 from such information provided by the channels 302, 304, 306.Additionally, the arbiter 108, when implemented by using a micro controller, is a serialized processor. A NAND chip (e.g. NAND chip 112a) has an inherent property that allows only one read request to it. The channel (e.g., channel 110a) for the NAND chip has a "busy" status until the read request to the NAND chip is complete. It is the responsibility of the arbiter 108 not to schedule a new read while a channel is busy. As soon as the channel is not busy, the arbiter 108 needs to dispatch the next command to the NAND chip. To improve the channel loading, in certain embodiments the arbiter 108 polls the "lightly loaded" channel (i.e., channels that are being used to process relatively fewer read requests) more often than the "heavily loaded" channels (i.e., channels that are being used to process relatively fewer read requests) so that re-ordered read commands are dispatched to lightly loaded channels as soon as possible. This is important because the time to complete a new read command is of the order of 100 micro seconds, while it takesapproximately the same amount time for the arbiter 108 to scan all 18 channels and reorder the read commands.FIG. 5 illustrates a first flowchart 500 for preventing uneven channel loading in solid state drives, in accordance with certain embodiments. The operations shown in FIG. 5 may be performed by the arbiter 108 that performs operations within the solid state drive 102.Control starts at block 502 in which the arbiter 108 determines the read processing load (i.e., bandwidth being used) on the first channel 110a of a plurality of channels 110a, 110b,...110η. Control proceeds to block 504 in which the arbiter 108 determines whether the read processing load on the last channel 110η has been determined. If not ("No" branch 505), the arbiter 108 determines the read processing load on the next channel and control returns to block 504. The read processing load may be determined by examining the number of pending read requests in the data structure for outstanding reads 204a...204n or via other mechanisms. If at block 504 a determination is made that the read processing load on the last channel 110η has been determined ("Yes" branch 507) control proceeds to block 508 in which it is determined which of the plurality of channels has the least processing load, and the channel with the least processing load is referred to as channel X.From block 508 control proceeds to block 509 in which a determination is made as to whether channel X is busy or not busy, where a channel that is busy is not capable of handling additional read requests and a channel that is not busy is capable for handling additional read requests. The determination of whether channel X is busy or not busy is needed because, a NAND chip coupled to channel X has an inherent property that allows only one read request to it. 
Channel X for the NAND chip has a "busy" status until the read request to the NAND chip is complete.If at block 509, it is determined that channel X is not busy (reference numeral 509a), then control proceeds to block 510 in which the arbiter 108 selects one or more read requests intended for channel X that have accumulated in the"incoming queue of read requests" 202, such that the available bandwidth of channel X is as close to fully utilized as possible, where the selection may result in a reordering of pending requests in the "incoming queue of read requests" 202. The arbiter 108 allocates resources for the selected one or more read requests and sends (at block 512) the one or more read requests to channel X for processing.If at block 509 it is determined that channel X is busy (reference numeral 509b) then the process waits till channel X is not busy.In alternative embodiments, instead of determining the channel which has the least processing load, a relatively lightly loaded channel (i.e., a channel with a relatively low processing load in the plurality of channels) may be determined. In certain embodiments, read requests may be sent preferentially to the relatively lightly loaded channel. It should be noted that the arbiter 108 does not schedule another read request for a lightly loaded channel, until the lightly loaded channel is confirmed as "not busy".It may be noted that while operations 502, 504, 505, 506, 507, 508, 510, 512, are being performed the host read requests keep on accumulating (at block 514) in the "incoming queue of read requests" data structure 202. Therefore, FIG. 5 illustrates certain embodiments for selecting the most lightly loaded channel, and reordering queue items in the incoming queue of read requests to select appropriate read requests to load in the most lightly loaded channel.FIG. 6 illustrates a second flowchart 600 for preventing uneven channel loading in solid state drives, in accordance with certain embodiments. The operations shown in FIG. 6 may be performed by the arbiter 108 that performs operations within the solid state drive 102.Control starts at block 602 in which a solid state drive 102 receives a plurality of read requests from a host 104 via a PCIe bus 106, where each of a plurality of channels 110a...110η in the solid state drive have identical bandwidths.. While the channels 110a...110η may have identical bandwidths, in actual scenarios one or more of the channels 110a...110η may not utilize the bandwidth fully.An arbiter 108 in the solid state drive 102 determines (at block 604) which of a plurality of channels 110a...110η in the solid state drive 102 is a lightly loaded channel (in certain embodiments the lightly loaded channel is the most lightly loaded channel). Resources for processing one or more read requests intended for the determined lightly loaded channel are allocated (at block 608), wherein the one or more read requests have been received from the host 104.Control proceeds to block 608 in which the one or more read requests are placed in the determined lightly loaded channel for the processing. Subsequent to placing the one or more read requests in the determined lightly loaded channel for the processing, the determined lightly channel is as close to being fully utilized as possible during the processing.Therefore, FIGs. 
1-6 illustrate certain embodiments for preventing uneven loading of channels in a solid state drive by out of order selections of read requests from an incoming queue, and loading the out of order selections of read requests into the channel which is relatively lightly loaded or the least loaded.The described operations may be implemented as a method, apparatus or computer program product using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a "computer readable storage medium", where a processor may read and execute the code from the computer storage readable medium. The computer readable storage medium includes at least one of electronic circuitry, storage materials, inorganic materials, organic materials, biological materials, a casing, a housing, a coating, and hardware. A computer readable storage medium may comprise, but is not limited to, a magnetic storage medium (e.g., hard drive drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, FlashMemory, firmware, programmable logic, etc.), Solid State Devices (SSD), etc. The code implementing the described operations may further be implemented in hardware logic implemented in a hardware device (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.). Still further, the code implementing the described operations may be implemented in "transmission signals", where transmission signals may propagate through space or through a transmission media, such as an optical fiber, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The program code embedded on a computer readable storage medium may be transmitted as transmission signals from a transmitting station or computer to a receiving station or computer. A computer readable storage medium is not comprised solely of transmission signals. Those skilled in the art will recognize that many modifications may be made to this configuration, and that the article of manufacture may comprise suitable information bearing medium known in the art.Computer program code for carrying out operations for aspects of the certain embodiments may be written in any combination of one or more programming languages. Blocks of the flowchart and block diagrams may be implemented by computer program instructions.FIG. 7 illustrates a block diagram of a system 700 that includes both the host 104 (the host 104 comprises at least a processor) and the solid state drive 102, in accordance with certain embodiments. For example, in certain embodiments the system 700 may be a computer (e.g., a laptop computer, a desktop computer, a tablet, a cell phone or any other suitable computational device) that has the host 104 and the solid state drive 102 included in the system 700. For example, in certain embodiments the system 700 may be a laptop computer that includes the solid state drive 102.The system 700 may include a circuitry 702 that may in certain embodiments include at least a processor 704. The system 700 may also include a memory 706 (e.g., a volatile memory device), and storage 708. 
The storage 708 may include the solid state drive 102 or other drives or devices including a non- volatile memory device (e.g., EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, firmware, programmable logic, etc.). The storage 708 may also include a magnetic disk drive, an optical disk drive, a tape drive, etc. The storage 708 may comprise an internal storage device, an attached storage device and/or a network accessible storage device. The system 700 may include a program logic 710 including code 712 that may be loaded into the memory 706 and executed by the processor 704 or circuitry 702. In certain embodiments, the program logic 710 including code 712 may be stored in the storage 708. In certain other embodiments, the program logic 710 may be implemented in the circuitry 702. Therefore, while FIG. 7 shows the program logic 710 separately from the other elements, the program logic 710 may be implemented in the memory 706 and/or the circuitry 702. The system 700 may also include a display 714 (e.g., an liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a touchscreen display, or any other suitable display). The system 700 may also include one or more input devices 716, such as, a keyboard, a mouse, a joystick, a trackpad, or any other suitable input devices). Other components or devices beyond those shown in FIG. 7 may also be found in the system 700.Certain embodiments may be directed to a method for deploying computing instruction by a person or automated processing integrating computer-readable code into a computing system, wherein the code in combination with the computing system is enabled to perform the operations of the described embodiments.The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "someembodiments", and "one embodiment" mean "one or more (but not all)embodiments" unless expressly specified otherwise.The terms "including", "comprising", "having" and variations thereof mean "including but not limited to", unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments.Further, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performedsimultaneously.When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. 
Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments need not include the device itself.At least certain operations that may have been illustrated in the figures show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.The foregoing description of various embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to be limited to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.ExamplesThe following examples pertain to further embodiments.Example 1 is a method in which an arbiter in a solid state drive determines which of a plurality of channels in the solid state drive is a lightly loaded channel in comparison to other channels. Resources are allocated for processing one or more read requests intended for the determined lightly loaded channel, wherein the one or more read requests have been received from a host. 
The one or more read requests are placed in the determined lightly loaded channel for the processing.In example 2, the subject matter of claim 1 may include that the determined lightly loaded channel is a most lightly loaded channel in the plurality of channels, wherein subsequent to placing the one or more read requests in the determined most lightly loaded channel for the processing, the determined most lightly loaded channel is as close to being fully utilized as possible during the processing.In example 3, the subject matter of claim 1 may include that the one or more read requests are included in a plurality of read requests intended for the plurality of channels, wherein an order of processing of the plurality of read requests is modified by the placing of the one or more read requests in the determined lightly loaded channel for the processing.In example 4, the subject matter of claim 3 may include that modifying the order of processing of the plurality of requests preferentially processes the one or more read requests intended for the determined lightly loaded channel over other requests.In example 5, the subject matter of claim 1 may include that the solid state drive receives the one or more read requests from the host via a peripheral component interconnect express (PCIe) bus, wherein each of the plurality of channels in the solid state drive has an identical bandwidth.In example 6, the subject matter of claim 5 may include that a sum of bandwidths of the plurality of channels equals a bandwidth of the PCIe bus.In example 7, the subject matter of claim 1 may include that at least one of the plurality of channels is coupled to a different number of NAND chips in comparison to other channels of the plurality of channels.In example 8, the subject matter of claim 1 may include that if the one or more read requests are not placed in the determined lightly loaded channel for the processing then read performance on the solid state drive decreases by over 10% in comparison to another solid state drive in which all channels are coupled to a same number of NAND chips.In example 9, the subject matter of claim 1 may include that the allocating of the resources for the processing is performed subsequent to determining by the arbiter in the solid state drive which of the plurality of channels in the solid state drive is the lightly loaded channel.In example 10, the subject matter of claim 1 may include that the arbiter polls relatively lightly loaded channels more often than relatively heavily loaded channels to preferentially dispatch re-ordered read requests to the relatively lightly loaded channels.In example 1 l,the subject matter of claim 1 may include associating with each of the plurality of channels a data structure that maintains outstanding reads that are being processed by the channel; and maintaining the one or more read requests that have been received from the host in an incoming queue of read requests received from the host.Example 12 is an apparatus comprising a plurality of non- volatile memory chips, a plurality of channels coupled to the plurality of non-volatile memory chips, and an arbiter for controlling the plurality of channels, wherein the arbiter is operable to: determine which of the plurality of channels is a lightly loaded channel in comparison to other channels; allocate resources for processing one or more read requests intended for the determined lightly loaded channel, wherein the one or more read requests have been received from a host; and place the one or 
more read requests in the determined lightly loaded channel for the processing.In example 13, the subject matter of claim 12 may include that the nonvolatile memory chips comprise NAND chips, wherein the determined lightly loaded channel is a most lightly loaded channel in the plurality of channels, wherein subsequent to placing the one or more read requests in the determined most lightly loaded channel for the processing, the determined most lightly loaded channel is as close to being fully utilized as possible during the processing.In example 14, the subject matter of claim 12 may include that the one or more read requests are included in a plurality of read requests intended for the plurality of channels, wherein an order of processing of the plurality of read requests is modified by the placing of the one or more read requests in the determined lightly loaded channel for the processing.In example 15, the subject matter of claim 14 may include that modifying the order of processing of the plurality of requests preferentially processes the one or more read requests intended for the determined lightly loaded channel over other requests.In example 16, the subject matter of claim 12 may include that the apparatus receives the one or more read requests from the host via a peripheral component interconnect express (PCIe) bus, wherein each of the plurality of channels in the apparatus has an identical bandwidth.In example 17, the subject matter of claim 16 may include that a sum of bandwidths of the plurality of channels equals a bandwidth of the PCIe bus.In example 18, the subject matter of claim 12 may include that the non- volatile memory chips comprise NAND chips, wherein at least one of the plurality of channels is coupled to a different number of NAND chips in comparison to other channels of the plurality of channels.In example 19, the subject matter of claim 12 may include that may include that the non-volatile memory chips comprise NAND chips, wherein if the one or more read requests are not placed in the determined lightly loaded channel for the processing then read performance on the apparatus decreases by over 10% in comparison to another apparatus in which all channels are coupled to a same number of NAND chips.In example 20, the subject matter of claim 12 may include that the allocating of the resources for the processing is performed subsequent to determining by the arbiter in the apparatus which of the plurality of channels in the apparatus is the lightly loaded channel.In example 21, the subject matter of claim 12 may include that the arbiter polls relatively lightly loaded channels more often than relatively heavily loaded channels to preferentially dispatch re-ordered read requests to the relatively lightly loaded channels.In example 22,the subject matter of claim 12 may include associating with each of the plurality of channels a data structure that maintains outstanding reads that are being processed by the channel; and maintaining the one or more read requests that have been received from the host in an incoming queue of read requests received from the host.Example 23 is a system, comprising a solid state drive, a display, and a processor coupled to the solid state drive and the display, wherein the processor sends a plurality of read requests to the solid state drive, and wherein in response to the plurality of read requests, the solid state drive performs operations, the operations comprising: determine which of a plurality of channels in the solid state drive is a 
lightly loaded channel in comparison to other channels in the solid state drive; allocate resources for processing one or more read requests selected from the plurality of read requests, wherein the one or more read requests are intended for the determined lightly loaded channel; place the one or more read requests in the determined lightly loaded channel for the processing.In example 24, the subject matter of claim 23 further comprises that the solid state drive further comprises a plurality of non-volatile memory chips including NAND or NOR chips, wherein the lightly loaded channel is a most lightly loaded channel in the plurality of channels, and wherein subsequent to placing the one or more read requests in the determined most lightly loaded channel for the processing, the determined most lightly loaded channel is as close to being fully utilized as possible during the processing.In example 25, the subject matter of claim 23 further comprises that an order of processing of the plurality of requests is modified by the placing of the one or more read requests in the determined lightly loaded channel for the processing. |
A contact to a semiconductor substrate including a contact opening extending through an insulating layer to a doped active region of the semiconductor substrate. The contact opening can have a relatively high aspect ratio of 2:1 or greater. The contact further includes a refractory metal germanosilicide region at the bottom of the contact opening, a refractory metal germanide layer at the sidewalls of the contact opening, and an overlying refractory metal nitride layer. The refractory metals of the invention include at least tantalum, titanium, cobalt and mixtures thereof. The contact is metallized, preferably using tungsten or aluminum. The method of manufacturing the contact comprises etching the contact opening. A germane gas is used to clean native silicon dioxide from the bottom of the contact opening and to deposit a germanium layer thereon. A refractory metal layer is deposited over the germanium layer. After annealing in a nitrogen atmosphere at a temperature of about 600° C. or less, the contact opening is metallized with tungsten or aluminum. |
What is claimed and desired to be secured by United States Letters Patent is: 1. A contact to a substrate, said contact comprising:an active region in said substrate; an insulating layer on said substrate adjacent to and over said active region; a contact opening within said insulating layer, said contact opening extending to said active region and having an aspect ratio greater than about 2:1, said contact opening having a bottom and at least one sidewall; a refractory metal germanosilicide region at said bottom of said contact opening and over said active region; and a metallization material within said contact opening such that said contact opening is substantially filled. 2. A contact as recited in claim 1, wherein a refractory metal included in said refractory metal germanosilicide region is selected from the group consisting of chromium, cobalt, molybdenum, platinum, tantalum, titanium, tungsten, zirconium, and combinations thereof.3. A contact as recited in claim 1, wherein said refractory metal germanosilicide region comprises a refractory metal germanosilicide material represented by RSixGey, wherein R represents a refractory metal, the variable x of RSixGey has a value of about 1 and the variable y of RSixGey has a value of about 1.4. A contact as recited in claim 1, wherein said refractory metal germanosilicide region comprises TaSixGey.5. A contact as recited in claim 1, wherein said refractory metal germanosilicide region comprises CoSixGey.6. A contact as recited in claim 1, wherein said refractory metal germanosilicide region comprises TiSixGey.7. A contact as recited in claim 1, wherein said refractory metal germanosilicide region comprises TiwCo1-wSixGey, wherein the variable w has a value greater than 0 and less than 1.8. A contact as recited in claim 1, further comprising a refractory metal germanide layer on said at least one sidewall.9. A contact as recited in claim 8, further comprising a refractory metal nitride layer on said refractory metal germanosilicide region and on said refractory metal germanide layer.10. A contact to a substrate, said contact comprising:an active region in said substrate; an insulating layer on said substrate adjacent to and over said active region; a contact opening within said insulating layer, said contact opening extending to said active region and having a bottom and at least one sidewall; a refractory metal germanosilicide region at said bottom of said contact opening and over said active region; and tungsten within said contact opening such that said contact opening is substantially filled. 11. A contact as recited in claim 10, wherein said refractory metal germanosilicide includes titanium.12. A contact as recited in claim 10, wherein said refractory metal germanosilicide includes tantalum.13. A contact as recited in claim 10, wherein said refractory metal germanosilicide includes cobalt.14. A contact as recited in claim 10, wherein said refractory metal germanosilicide includes a mixture of titanium and cobalt.15. 
A contact to a substrate, said contact comprising an active region in said substrate;an insulating layer on said substrate adjacent to and over said active region; a contact opening within said insulating layer, said contact opening extending to said active region and having a bottom and at least one sidewall; a refractory metal germanosilicide region at said bottom of said contact opening and over said active region; a refractory metal germanide layer on said at least one sidewall; and a metallization material within said contact opening such that said contact opening is substantially filled. 16. A contact as recited in claim 15, wherein said contact opening has an aspect ratio of greater than about 2:1.17. A contact as recited in claim 15, wherein said metallization material comprises tungsten.18. A contact as recited in claim 15, wherein said metallization material comprises aluminum.19. A contact as recited in claim 15, wherein said insulating layer comprises silicon dioxide.20. A contact as recited in claim 15, wherein said insulating layer comprises BPSG.21. A contact to a substrate, said contact having a contact opening with a bottom and at least one sidewall, said contact comprising:an active region in said substrate; an insulating layer on said substrate adjacent to and over said active region; a refractory metal germanosilicide region at said bottom of said contact opening and over said active region; a refractory metal germanide layer on said at least one sidewall; a refractory metal nitride layer on said refractory metal germanosilicide region and on said refractory metal germanide layer; and a metallization material within said contact opening and in physical contact with said refractory metal nitride layer. 22. A contact as recited in claim 21, wherein said contact opening has an aspect ratio greater than about 2:1.23. A contact as recited in claim 21, wherein a refractory metal included in each of said refractory metal germanosilicide region, said refractory metal germanide layer, and said refractory metal nitride layer is selected from the group consisting of chromium, cobalt, molybdenum, platinum, tantalum, titanium, tungsten, zirconium, and combinations thereof.24. A contact as recited in claim 23, wherein said refractory metal germanosilicide region comprises a refractory metal germanosilicide material represented by RSixGey, wherein R represents said refractory metal, the variable x of RSixGey having a value of about 1 and the variable y of RSixGey having a value of about 1.25. A contact as recited in claim 23, wherein said refractory metal germanide layer comprises a refractory metal germanide material represented by RGey, wherein R represents said refractory metal and the variable y of RGey has a value of about 2.26. A contact as recited in claim 21, wherein said refractory metal nitride layer is positioned and has a thickness sufficient to substantially prevent diffusion of said metallization material into said active region.27. A contact as recited in claim 21, wherein said refractory metal nitride layer has a thickness sufficient to substantially prevent pitting, spiking and wormhole formation within said active region.28. 
A contact to a substrate, said contact comprising:a doped active region in said substrate; a BPSG layer on said substrate adjacent to and over said doped active region; a contact opening within said BPSG layer, said contact opening extending to said doped active region and having an aspect ratio greater than about 2:1, said contact opening having a bottom and at least one sidewall; a refractory metal germanosilicide region at said bottom of said contact opening and over said doped active region; a refractory metal germanide layer on said at least one sidewall; a refractory metal nitride layer on said refractory metal germanosilicide region and on said refractory metal germanide layer, wherein a refractory metal included in each of said refractory metal germnanosilicide region, said refractory metal germanide layer, and said refractory metal nitride layer is selected from the group consisting of chromium, cobalt, molybdenum, platinum,tantalum, titanium, tungsten, zirconium, and combinations thereof; and a metallization material within said contact opening and in physical contact with said refractory metal nitride layer such that said contact opening is substantially filled, said metallization material being selected from the group consisting of aluminum and tungsten. |
This is a divisional of U.S. patent application Ser. No. 09/146,850, filed on Sep. 3, 1998, now U.S. Pat. No. 6,239,029 B1, which is a continuation-in-part of U.S. patent application Ser. No. 08/816,165, filed on Mar. 12, 1997, which is a divisional of U.S. patent application Ser. No. 08/503,385, filed on Jul. 17, 1995, now U.S. Pat. No. 5,644,166, each of said applications being incorporated herein by reference.BACKGROUND OF THE INVENTION1. The Field of the InventionThe present invention relates to the formation of high aspect ratio submicron VLSI contacts. More specifically, the present invention is directed to depositing a germanium layer into a contact opening using germane gas in order to remove native silicon dioxide from the contact opening. The germanium layer at the bottom of the contact opening is consumed during annealing to form a low resistance contact.2. The Relevant TechnologyModern integrated circuits are manufactured by an elaborate process in which a large number of electronic semiconductor devices are integrally formed on a semiconductor substrate. In the context of this document, the term "semiconductor substrate" is defined to mean any construction comprising semiconductive material, including but not limited to bulk semiconductive material such as a semiconductive wafer, either alone or in assemblies comprising other materials thereon, and semiconductive material layers, either alone or in assemblies comprising other materials. The term "substrate" refers to any supporting structure including but not limited to the semiconductive substrates described above.The movement toward progressive miniaturization of semiconductor devices has resulted in increasingly compact and efficient semiconductor structures. This movement has been accompanied by an increase in the complexity and number of such structures aggregated on a single semiconductor integrated chip. As feature sizes are reduced, new problems arise which must be solved in order to economically and reliably produce the semiconductor devices. The submicron features which must be reduced include, for instance, the width and spacing of metal conducting lines as well as the size of various geometric features of active semiconductor devices.As an example, the requirement of submicron features in semiconductor manufacturing has necessitated the development of improved means of making contact with the various structures. The smaller and more complex devices are achieved, in part, by reducing device sizes and spacing and by reducing the junction depth of regions formed in the semiconductor substrate. Among the feature sizes which are reduced in size are the contact openings through which electrical contact is made to active regions in the semiconductor devices. As both the contact size and junction depth are reduced, new device metallization processes are required to overcome the problems which have been encountered.Historically, device interconnections have been made with aluminum or aluminum alloy metallization. Aluminum, however, presents problems with junction spiking. Junction spiking results in the dissolution of silicon into the aluminum metallization and aluminum into the silicon. Typically, when aluminum contacts with a silicon substrate directly, the aluminum eutectically alloys with the silicon substrate at temperatures lower than 450[deg.] C. 
When such a reaction occurs, silicon is dissolved into the aluminum electrode, and there is a tendency for silicon thus dissolved into the electrode to be precipitated at a boundary between the electrode and the substrate as an epitaxial phase. This increases the resistivity across the contact. Furthermore, aluminum in the electrode is diffused into the silicon substrate from the electrode and forms an alloy spike structure in the substrate.The resulting alloy spike structure is a sharp, pointed region enriched in aluminum. The alloy spikes can extend into the interior of the substrate from the boundary between the electrode and the substrate to cause unwanted short circuit conduction at the junction of the semiconductor in the substrate, particularly when the junction is formed in an extremely shallow region of the substrate. When such an unwanted conduction occurs, the semiconductor device no longer operates properly. This problem is exacerbated with smaller device sizes, because the more shallow junctions are easily shorted, and because the silicon available to alloy with the aluminum metallization is only accessed through the small contact area, increasing the resultant depth of the spike.Contact openings have also been metallized with chemical vapor deposited tungsten. This process has also proven problematic. The tungsten is typically deposited in an atmosphere of fluorine, which attacks the silicon, creating "wormholes" into the active region. Wormholes can extend completely through the active region, thereby shorting it out and causing the device to fail. Tungsten also presents a problem in that it does not adhere well directly to silicon.3. Prior State of the ArtIn order to eliminate the problems associated with the reaction between the silicon substrate and the metallization material, prior art solutions have typically used a diffusion barrier structure in which the reaction between the silicon substrate and the electrode is blocked by a barrier layer provided between the electrode and the substrate. Such a barrier layer prevents the diffusion of silicon and aluminum. It also provides a surface to which the tungsten will adhere and which will prevent tungsten and fluorine from diffusing into the active region.Prior art FIGS. 1 through 4 of the accompanying illustrations depict one conventional method known in the art of forming contacts having a diffusion barrier. In FIG. 1, a contact opening 18 is etched through an insulating layer 16 overlying an active region 14 on a substrate 12. Insulating layer 16 typically comprises a passivity layer of intentionally-formed silicon dioxide in the form of borophosphosilicate glass (BPSG). Contact opening 18 provides access to active region 14 by which an electrical contact is made. Native silicon dioxide layer 20 is a thin layer which forms on the active region from exposure to ambient. As shown in FIG. 2, a titanium metal layer 22 is then sputtered over contact opening 18 so that the exposed surface of active region 14 is coated.A high temperature anneal step is then conducted in an atmosphere of predominantly nitrogen gas (N2). Native silicon dioxide layer 20 is dissolved and titanium metal layer 22 is allowed to react with active region 14 and change titanium metal layer 22 into a dual layer. As shown in FIG. 3, a titanium silicide (TiSix) layer 26 is formed by the anneal step, and provides a conductive interface at the surface of active region 14. 
A titanium nitride (TiNx) layer 24 is also formed, and acts as a diffusion barrier to the interdifflusion of tungsten and silicon or aluminum and silicon, as mentioned above. Under such conditions, the lower portion of titanium metal layer 22 overlying active region 14, after dissolving native silicon dioxide layer 20, reacts with a portion of the silicon in active region 14 to form titanium silicide layer 26. Concurrently, the upper portion of titanium metal layer 22 reacts with the nitrogen gas of the atmosphere to form titanium nitride layer 24.The next step, shown in FIG. 4, is metallization. This is typically achieved by chemical vapor deposition (CVD) of tungsten, or by the deposition of aluminum using any of the various known methods. These include aluminum reflow sputtering, and chemical vapor deposition. In the case of tungsten, the titanium nitride helps improve the adhesion between the walls of the opening and the tungsten metal. In the case of both tungsten and aluminum, the titanium nitride acts as a barrier against the diffusion of the metallization layer into the diffusion region and vice-versa.Spiking and wormholes can still occur, even with the use of a deposition barrier, particularly when the diffusion barrier is too thin. This frequently occurs at the corners of the contact opening, where it is difficult to form a thick layer, particularly if the aspect ratio of the contact is high. Contact opening 18 of FIG. 3 is filled by an aluminum layer 32 in FIG. 4 which depicts the effects of spiking, with a spike 34 extending through active region 14, the effect of which is to short active region 14 out.The compound titanium nitride (TiN) is well suited to forming a diffusion barrier, as it is extremely hard, chemically inert, an excellent conductor, and has a high melting point. It also makes excellent contact with other conductive layers. Titanium nitride is typically formed by the reaction of sputtered titanium during annealing in nitrogen, or can be deposited directly on the substrate by reactive sputtering, evaporation, chemical vapor deposition and the like before the deposition of the metallization.As device dimensions continue to shrink and the contact openings become deeper and narrower, contact walls become vertical and most of the metal deposition techniques fail to provide the necessary step coverage to create adequate contact with the active area. Such narrow, high aspect ratio contact openings can result in a partial or total failure to make significant contact with the active region. Accordingly, it becomes increasingly difficult to produce the desired thickness of titanium at the bottom of the contact opening.FIG. 5 shows the dimensions used to calculate the aspect ratio, which is the ratio of the height H to the width W. In order to introduce a sufficiently thick titanium metal layer 22 using conventional sputtering techniques and thereby create titanium nitride layer 24 such that is acts as an effective diffusion barrier, the aspect ratio of contact opening 18 is required to be kept relatively low, generally under 2:1.The aspect ratios of contacts have been increased in the past by depositing the titanium layer using a collimator to directly sputter deposit plasma emanating from a target into the bottom of the contact openings on a semiconductor substrate. The use of a collimator to direct titanium metal layer 22 in FIG. 
2 to the bottom of contact opening 18 prevents unwanted structures from forming on the walls of contact opening 18 and thereby plugging contact opening 18.A collimator having a honeycomb structure has an aspect ratio corresponding to the thickness of honeycomb structure divided by the diameter of the openings in the honeycomb structure. In order to deposit the thick layers of titanium needed for this conventional method, the honeycomb structure used in collimator sputtering has been required to have a high aspect ratio, typically around 2.5:1. This slows down the manufacturing process and reduces throughput. Higher aspect ratios also require a high surface area of the collimator. A consequence of a high surface area is a concomitant increase in particle contamination, and a reduced deposition ratio on the wafer.Other undesirable effects result from the conventional contact forming method. For instance, a high temperature of 800[deg.] C. or greater is required during the anneal step to properly form titanium silicide layer 26 as shown in FIG. 3. In practice, high temperatures tend to cause loss to the titanium silicide layer and can cause the BPSG to crack and to reflow.Another function of depositing a titanium layer in a contact opening is to remove native silicon dioxide (SiO2) which forms whenever the silicon substrate is exposed to air. Typical native silicon dioxide layers have a thickness of about 20 Angstroms. Such a layer is shown at 20 in FIG. 1. Native silicon dioxide layer 20 is highly insulative and can cause a high contact resistance so as to result in failure of the device. Titanium metal layer 22 of FIG. 2 serves to carry away oxygen, breaking down native silicon dioxide layer 20. In the process, a portion of titanium metal layer 22 is consumed. As a result, even more titanium must be deposited in order to form an effective diffusion barrier.Prior art methods employed plasma cleaning to remove the native silicon dioxide from the bottom of the contact openings prior to depositing titanium. These processes have proven unsatisfactory, as they are quite expensive, decrease throughput, and may require substantially higher rapid thermal processing (RTP) annealing temperatures. Furthermore, since native silicon dioxide grows in air, these methods do not prevent the reformation of native silicon dioxide in the contact openings once the methods are concluded.For these reasons, there is a need in the art for an improved method of creating diffusion barriers in contacts that minimize the amount of material needed for effective diffusion barriers. This will in turn allow greater miniaturization of devices. Such a method would be more desirable if it also had increased throughput, lowered costs, and increased yields.SUMMARY OF THE INVENTIONIn accordance with the invention as embodied and described herein, the present invention comprises a submicron VLSI contact and a corresponding method for manufacturing the contact. The submicron VLSI contact comprises a substrate having formed thereon an active region. An insulating layer such as silicon dioxide or BPSG overlies the active region. A contact opening is etched through the insulating layer to access the underlying active region. At the bottom of the contact opening is formed a refractory metal germanosilicide region. At the sides of the contact opening is a refractory metal germanide layer. Over the refractory metal germanide layer and the refractory metal germanosilicide region is a refractory metal nitride layer. 
The remainder of the contact opening is filled with a metal such as tungsten or aluminum. The germanium used in forming the contact may be doped in order to avoid depleting the active region.The corresponding method of manufacturing the high aspect ratio submicron contact comprises the following steps. First, a doped active region is formed within the semiconductor substrate. The maximum depth of the doped active region defines a junction depth. An insulating layer is formed, typically by covering the active region with BPSG, reflowing the BPSG, and planarizing it. Contact holes are then etched into the insulating layer down to the active region, typically using photolithography and dry etch procedures. The contact opening is then exposed to germane gas (GeH4) at a temperature of between about 200[deg.] to 600[deg.] C., at a pressure of 1 to 150 Torr, and for a period of time of about 60 seconds. This time may vary, but should be sufficient to remove the native silicon dioxide layer that has grown at the bottom of the contact opening, and to deposit a germanium layer having a thickness that is preferably approximately the same as the thickness of a refractory metal layer that is to be subsequently formed. Next, the refractory metal layer is deposited over the germanium layer so as to have a thickness less than about one-half the junction depth. The refractory metal layer may be deposited with, for example, a sputtering process. Since the refractory metal layer may be much thinner than with conventional methods, the sputtering process may be completed with the use of a collimator having a lower aspect ratio.The next step is to anneal the contact opening in an atmosphere of nitrogen gas (N2). This is done at a lower temperature than the conventional method, with the preferred temperature being about 600[deg.] C. The anneal step causes a refractory metal germanosilicide region to form at the bottom of the contact opening and a refractory metal germanide layer to form at the sidewalls. An overlying refractory metal nitride layer, which has been found to be an effective diffusion barrier, is formed over both the refractory metal germanosilicide region and the refractory metal germanide layer.Since a much thinner refractory metal layer can be deposited, the contact can have a higher aspect ratio. Aspect ratios greater than about 2:1 are attainable. The improved diffusion barrier of refractory metal nitride effectively prohibits spiking and wormholes from forming in the active region. Other advantages of the present invention include a higher yield and a more stable BPSG layer due to the use of a lower temperature anneal.BRIEF DESCRIPTION OF THE DRAWINGSIn order that the manner in which the above-recited and other advantages and objects of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:FIG. 1 is a cross-sectional elevation view showing the manner in which a typical contact opening is formed through an insulating layer to the surface of a semiconductor substrate.FIG. 
2 is a cross-sectional elevation view illustrating the next step in the conventional known method for producing a contact, comprising depositing a titanium layer into the contact opening.FIG. 3 is a cross-sectional elevation view illustrating the next step in the conventional known process for producing a contact, comprising annealing the titanium layer in a nitrogen gas atmosphere to deposit an underlying titanium silicide region and an overlying titanium nitride layer.FIG. 4 is a cross-sectional elevation view illustrating the next step in the conventional known process for producing a contact, and comprises metallizing the contact opening. FIG. 4 also illustrates the consequences of an insufficient contact barrier, which are shown as spikes penetrating through the active region.FIG. 5 is a cross sectional elevational view showing the results of a step for producing a high aspect ratio submicron VLSI contact under the present invention, and comprises exposing the contact opening to germane gas to deposit a germanium layer over the bottom of the contact opening. FIG. 5 also shows the dimensions of the contact opening used in calculating the aspect ratio.FIG. 6 is a cross-sectional elevation view illustrating the next step of the process of the present invention, comprising depositing a refractory metal layer over the germanium layer.FIG. 7 is a cross sectional elevation view illustrating the next step of the process of the present invention, which is annealing the contact opening in a nitrogen gas atmosphere to form a refractory metal germanosilicide region at the bottom of the contact opening, a refractory metal germanide layer at the sidewalls of the contact opening, and an overlying refractory metal nitride layer.FIG. 8 is a cross sectional elevation view showing the last step of the process, which comprises metallizing the contact opening with a metal such as tungsten or aluminum.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTSThe present invention comprises a high aspect ratio submicron VLSI contact and a method for forming the high aspect ratio submicron VLSI contact. The present invention utilizes a sacrificial CVD germanium layer in order to form a more intimate electrical contact, and a more efficient diffusion barrier at the bottom of the contact. The method of the present invention is highly beneficial in the formation of electrical contacts to devices such as diodes, resistors, capacitors, transistors, and other semiconductor devices formed in high density on microchips. The method of the present invention, is shown by steps in FIGS. 1 and 5-8.Shown in FIG. 1 is a substrate 12 as the surface of a semiconductor substrate. An active region is created on substrate 12 by doping a portion thereof. The resulting doped active region is seen at reference numeral 14. The maximum depth of doped active region 14 defines a junction depth as used herein. Specifically, the maximum depth of doped active region 14 is in the direction perpendicular to the plane defined by substrate 12. The junction depth is not a limitation on the present invention, which instead contemplates essentially any junction depth that may be used in the art. The junction depth is important to the extent that it correlates to a preferred thickness of a refractory metal layer that is to be subsequently formed during the method as disclosed herein.Next, a protective insulating layer 16 is formed over doped active region 14. 
Insulating layer 16 preferably comprises BPSG in order to allow it to reflow at temperatures of about 900[deg.] C. or below. Insulating layer 16 is preferably reflowed and planarized to form a flat surface on substrate 12. In order to access the underlying doped active region 14, a contact opening 18 is etched through insulating layer 16 by a process of masking and etching, preferably dry etching, as is commonly known in the art. Contact opening 18 extends through insulating layer 16 to doped active region 14 and has a bottom 15 and at least one sidewall 17.In order to clean a native silicon dioxide layer 20 from bottom 15 of contact opening 18, and in order to form an effective diffusion barrier in preparation of metallizing contact opening 18, substrate 12 is exposed in a vacuum environment to germane gas (GeH4). This is preferably done using a low pressure chemical vapor deposition (LPCVD) technique. The process is preferably conducted with a pressure of about 80 Torr, a temperature of about 500[deg.] C., a germane concentration of about 100%, and for a duration of about 60 seconds. The germane gas effectively cleans native silicon dioxide layer 20 from bottom 15 of contact opening 18 by turning the silicon dioxide into a silicon sub-oxide (SiOx) (X<2), which can be removed from the contact opening by sublimation in vacuum at a temperature of around 600[deg.] C. The cleaning of native silicon dioxide layer 20 from bottom 15 of contact opening 18 allows for optimal electrical contact between the metallization layer and underlying doped active region 14. It also allows an overlying refractory metal layer to be as thin as possible.The LPCVD process should be of sufficient duration to remove native silicon dioxide layer 20 and to also deposit a germanium layer 40 at bottom 15 of contact opening 18. In practice, for example, and not by way of limitation, the thickness of germanium layer 40 will typically be in a range from about 30 Angstroms to about 100 Angstroms. More specifically, however, the thickness of germanium layer 40 will be selected to be in a range less than to slightly greater than the thickness of a refractory metal layer48 that is to be subsequently formed. Preferably, the thickness of germanium layer 40 and the thickness of refractory metal layer 48 are approximately the same. The factors that determine the actual dimensions of these thickness will be more fully disclosed below.As shown in FIG. 6, a refractory metal layer 48 is then formed over germanium layer 40. Refractory metal layer 48 may be deposited by sputtering, CVD, or by other processes by which refractory metal is deposited. As used herein "refractory metal" may be chromium, cobalt, molybdenum, platinum, tantalum, titanium, tungsten, zirconium, or combinations thereof It has been found that tantalum, titanium, cobalt, and a mixture of cobalt and titanium are particularly useful under the present invention.Refractory metal layer 48 should be formed so as to have a thickness less than about one-half the junction depth. Providing such a thickness will significantly reduce the likelihood that junction leakage will occur in the completed contact. Limiting the thickness of refractory metal layer 48 reduces the amount of silicon removed from the doped active area to form refractory metal germanosilicide during subsequent steps of the method of the present invention. 
Since the refractory metal need not react with the silicon dioxide as in the conventional method, refractory metal layer 48 may be much thinner than typically used, typically a reduction from about 150 Angstroms, as used in conventional processes, to perhaps 50 Angstroms or less, depending on the junction depth.Since less refractory metal need to be laid in bottom 15 of contact opening 18 than with the conventional process, the aspect ratio of contact opening 18 may be substantially increased. As a result, aspect ratios above 2:1 are now attainable with the present invention. This increase in aspect ratio in turn increases the number of devices that may be placed on a microchip, thereby aiding in the miniaturization process.Refractory metal layer 48 is preferably deposited using a honeycomb structured collimator sputtering technique. By allowing a thinner refractory metal layer 48, the aspect ratio of the holes in the honeycomb structure of the collimator may be reduced. In conventional processes, the aspect ratio of the collimator is about 2.5:1. Using the current invention, this can be reduced to 2:1 or even as low as about 1.5 to 1. This speeds up the process, and due to the reduced surface area of the collimator, results in lower particle contamination. This will in turn result in a higher device yield.During the LPCVD process, the germanium can be doped in situ, with either N+ or P+ dopants, depending on whether the underlying junction is doped with N+ or P+ dopants. This can be done by adding sources of boron, phosphorus, arsenic or other dopants to the LPCVD procedure. Examples of dopants are phosphine (PH3), used with a P+ active region, and diborane (B2H6), used with a N+ active region. This will prevent germanium layer 40 from reacting with and depleting the dopant of doped active region 14. Instead, the dopant concentration in doped active region 14 will be substantially consistent throughout the method of the present invention.Next, contact opening 18 is annealed, the result of which is shown in FIG. 7. This is preferably done using rapid thermal processing (RTP) in an atmosphere of nitrogen gas (N2) and for a time period of about 20 to 60 seconds. The anneal step may be conducted at substantially lower temperatures than with conventional techniques. For example, conventional techniques use a temperature of about 800[deg.] C. for the anneal, while the method of the present invention may use a temperature of about 600[deg.] C. or less, with about 600[deg.] C. being preferred.As a result of the anneal step, a refractory metal germanosilicide (RSixGey) region 50, where R represents a refractory metal, is formed at bottom 15 of contact opening 18 and over doped active region 14. A refractory metal germanide (RGey) layer 52 is also formed at sidewalls 17 of contact opening 18. The nitrogen gas also combines with refractory metal layer 48 to form a refractory metal nitride (RN) layer 54 above both refractory metal germanosilicide region 50 and refractory metal germanide layer 52. Germanium layer 40 is sacrificially consumed in the process. The alloy will vary, but it is preferred that variable X in (RSixGey) have a value of about 1, that variable Y in (RSixGey) have a value of about 1, and that variable Y in (RGey) have a value of about 2. As previously mentioned, refractory metal, R, may be a combination of individual metals, for example, titanium and cobalt. 
In this case, RSixGey could be expressed as TiwCO1-wSixGey., where 0<w<1, and with variables X and Y preferably each having a value of about 1.Refractory metal germanosilicide can be formed at lower temperatures than refractory metal silicide (RSix), allowing a lower temperature anneal. This has the additional benefits of stabilizing the contact, avoiding cracking or detrimental reflow effects of the BPSG insulating layer, and helping to maintain the size of the doped active region 14.The final step, shown in FIG. 8, is metallization. In this step, a metallization material 56 is deposited in contact opening 18 and in physical contact with refractory metal nitride layer 54 such that contact opening 18 is substantially filled. Metallization material 56 is preferably tungsten formed in a CVD process or aluminum formed in a reflow, sputter, or CVD process.The resulting contact has high step coverage with strong adhesion, high electrical conduction, and can be more easily miniaturized as a result of the higher aspect ratio permitted. The process can also be conducted at lower temperatures and with higher throughput. Refractory metal nitride layer 54 acts as an effective diffusion barrier to resist pitting, spiking, and wormholes. The resulting microchip has better reliability and a higher yield.The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrated and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. |
Apparatus, systems, and methods for Recovery algorithm in memory are described. In one embodiment, a controller comprises logic to receive reliability information from at least one component of a storage device coupled to the controller, store the reliability information in a memory communicatively coupled to the controller, generate at least one reliability indicator for the storage device, and forward the reliability indicator to an election module. Other embodiments are also disclosed and claimed. |
A controller comprising logic, at least partially including hardware logic, configured to: receive reliability information from at least one component of a storage device coupled to the controller;store the reliability information in a memory communicatively coupled to the controller;generate at least one reliability indicator for the storage device; andforward the reliability indicator to an election module.The controller of claim 1 , wherein the reliability information includes at least one of: a failure count for the storage device;a failure rate for the storage device;an error rate for the storage device;an amount of time the storage device spent in a turbo mode;an amount of time the storage device spent in an idle modevoltage information for the storage device; ortemperature information for the storage device.The controller of claim 2, wherein the logic to generate a reliability indicator for the storage device further comprises logic to:apply a weighting factor to the reliability information.The controller of claim 2, wherein the logic to generate a reliability indicator for the storage device further comprises logic to:predict a likelihood of failure based upon the reliability information.The controller of claim 1, wherein the election module comprises logic to:receive the reliability indicator; anduse the reliability indicator in an election process to select a primary storage node candidate from a plurality of secondary storage nodes. An electronic device, comprising:a processor; anda memory, comprising:a memory device; anda controller coupled to the memory device and comprising logic to:receive reliability information from at least one component of a storage device coupled to the controller;store the reliability information in a memory communicatively coupled to the controller;generate at least one reliability indicator for the storage device; and forward the reliability indicator to an election module.7. The electronic device of claim 8, wherein the reliability information includes at least one of:a failure count for the storage device;a failure rate for the storage device;an error rate for the storage device;an amount of time the storage device spent in a turbo mode;an amount of time the storage device spent in an idle modevoltage information for the storage device; ortemperature information for the storage device.The electronic device of claim 7, wherein the logic to generate a reliability indicator for the storage device further comprises logic to:apply a weighting factor to the reliability information.The electronic device of claim 7, wherein the logic to generate a reliability indicator for the storage device further comprises logic to:predict a likelihood of failure based upon the reliability information.The electronic device of claim 6, wherein the election module comprises logic to:receive the reliability indicator; anduse the reliability indicator in an election process to select a primary storage node candidate from a plurality of secondary storage nodes. 
A computer program product comprising logic instructions stored on a nontransitory computer readable medium which, when executed by a controller coupled to a memory device, configure the controller to:receive reliability information from at least one component of a storage device coupled to the controller;store the reliability information in a memory communicatively coupled to the controller;generate at least one reliability indicator for the storage device; andforward the reliability indicator to an election module.The computer program product of claim 1 1, wherein the reliability information includes at least one of:a failure count for the storage device;a failure rate for the storage device;an error rate for the storage device;an amount of time the storage device spent in a turbo mode;an amount of time the storage device spent in an idle modevoltage information for the storage device; ortemperature information for the storage device.The computer program product of claim 12, wherein the logic to generate a reliability indicator for the storage device further comprises logic to:apply a weighting factor to the reliability information.The computer program product of claim 12, wherein the logic to generate a reliability indicator for the storage device further comprises logic to:predict a likelihood of failure based upon the reliability information.The computer program product of claim 11, wherein the election module comprises logic to:receive the reliability indicator; anduse the reliability indicator in an election process to select a primary storage node candidate from a plurality of secondary storage nodes. A controller-implemented method, comprising:receiving reliability information from at least one component of a storage device coupled to the controller;storing the reliability information in a memory communicatively coupled to the controller;generating at least one reliability indicator for the storage device; and forwarding the reliability indicator to an election module.The method of claim 16, wherein the reliability information includes at least one of: a failure count for the storage device;a failure rate for the storage device;an error rate for the storage device;an amount of time the storage device spent in a turbo mode;an amount of time the storage device spent in an idle modevoltage information for the storage device; ortemperature information for the storage device.The method of claim 17, further comprising:applying a weighting factor to the reliability information.The method of claim 17, further comprising:predicting a likelihood of failure based upon the reliability information.The method of claim 15, further comprising:receiving the reliability indicator; andusing the reliability indicator in an election process to select a primary storage node candidate from a plurality of secondary storage nodes. |
EVIDENCE-BASED REPLACEMENT OF STORAGE NODESTECHNICAL FIELDThe present disclosure generally relates to the field of electronics. More particularly, some embodiments of the invention generally relate to evidence-based failover of storage nodes for electronic devices, e.g. in network-based storage systems.BACKGROUNDStorage servers, in both data centers and in cloud-based deployments, are commonly configured with multiple storage nodes, one of which functions as a primary storage node and two or more of which function as secondary storage nodes. In the event of a failure in the primary storage node one of the secondary storage nodes assumes the role of the primary storage node, a process commonly referred to as "failover" in the industry.Some existing failover procedures utilize an election process to choose which node will assume the role of the primary node. This election process is performed without regard to the reliability of a potential successor which may result in spurious subsequent failovers and system instability.Accordingly, techniques to improve failover processes in storage servers may find utility.BRIEF DESCRIPTION OF THE DRAWINGSThe detailed description is provided with reference to the accompanying figures. The use of the same reference numbers in different figures indicates similar or identical items.Fig. 1 is a schematic, block diagram illustration of a networked environment in which evidence-based replacement of storage nodes may be implemented in accordance with various examples discussed herein.Fig. 2 is a schematic, block diagram illustration of a memory architecture in which evidence- based replacement of storage nodes may be implemented in accordance with various examples discussed herein.Fig. 3 is a schematic, block diagram illustrating an architecture in which evidence-based replacement of storage nodes may be implemented in accordance with various examples discussed herein. Fig. 4 is a schematic, block diagram illustrating an architecture for an electronic device in which evidence-based replacement of storage nodes may be implemented in accordance with various examples discussed herein.Fig. 5 is a flowchart illustrating operations in a method to implement evidence-based replacement of storage nodes in accordance with various embodiments discussed herein.Figs. 6-10 are schematic, block diagram illustrations of electronic devices which may be adapted to implement evidence-based replacement of storage nodes in accordance with various embodiments discussed herein. DESCRIPTION OF EMBODIMENTSIn the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Further, various aspects of embodiments of the invention may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure reference to "logic" shall mean either hardware, software, or some combination thereof.Fig. 
1 is a schematic, block diagram illustration of a networked environment in which evidence-based replacement of storage nodes may be implemented in accordance with various examples discussed herein. Referring to Fig. 1, an electronic device(s) 1 10 may be coupled to one or more storage nodes 130, 132, 134 via a network 140. In some embodiments electronic device (s) 1 10 may be embodied as a mobile telephone, tablet, PDA or other mobile computing device as described with reference to electronic device(s) 1 10, below. Network 140 may be embodied as a public communication network such as, e.g., the internet, or as a private communication network, or combinations thereof.Storage nodes 130, 132, 134 may be embodied as computer-based storage systems. Fig. 2 is a schematic illustration of a computer-based storage system 200 that may be used to implement storage nodes 130, 132, or 134. In some embodiments, system 200 includes a computing device 208 and one or more accompanying input/output devices including a display 202 having a screen 204, one or more speakers 206, a keyboard 210, one or more other I/O device(s) 212, and a mouse 214. The other I O device(s) 212 may include a touch screen, a voice-activated input device, a track ball, and any other device that allows the system 200 to receive input from a user. The computing device 208 includes system hardware 220 and memory 230, which may be implemented as random access memory and/or read-only memory. A file store 280 may be communicatively coupled to computing device 208. File store 280 may be internal to computing device 208 such as, e.g., one or more hard drives, CD-ROM drives, DVD-ROM drives, or other types of storage devices. File store 280 may also be external to computer 208 such as, e.g., one or more external hard drives, network attached storage, or a separate storage network.System hardware 220 may include one or more processors 222, video controllers 224, network interfaces 226, and bus structures 228. In one embodiment, processor 222 may be embodied as an Intel ® Pentium IV® processor, or an Intel Itanium® processor available from Intel Corporation, Santa Clara, California, USA. As used herein, the term "processor" means any type of computational element, such as but not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processor or processing circuit.Graphics controller 224 may function as an adjunction processor that manages graphics and/or video operations. Graphics controller 224 may be integrated onto the motherboard of computing system 200 or may be coupled via an expansion slot on the motherboard.In one embodiment, network interface 226 could be a wired interface such as an Ethernet interface (see, e.g., Institute of Electrical and Electronics Engineers/IEEE 802.3-2002) or a wireless interface such as an IEEE 802.1 la, b or g-compliant interface (see, e.g., IEEE Standard for IT-Telecommunications and information exchange between systems LAN/MAN— Part II: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications Amendment 4: Further Higher Data Rate Extension in the 2.4 GHz Band, 802.11G-2003).Bus structures 228 connect various components of system hardware 228. 
In one embodiment, bus structures 228 may be one or more of several types of bus structure(s) including a memory bus, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 1 1-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).Memory 230 may include an operating system 240 for managing operations of computing device 208. Memory 230 may include a reliability register 232 to which may be used to store reliability information collected during operation of electronic device 200. In one embodiment, operating system 240 includes a hardware interface module 254 that provides an interface to system hardware 220. In addition, operating system 240 may include a file system 250 that manages files used in the operation of computing device 208 and a process control subsystem 252 that manages processes executing on computing device 208.Operating system 240 may include (or manage) one or more communication interfaces that may operate in conjunction with system hardware 220 to transceive data packets and/or data streams from a remote source. Operating system 240 may further include a system call interface module 242 that provides an interface between the operating system 240 and one or more application modules resident in memory 230. Operating system 240 may be embodied as a UNIX operating system or any derivative thereof (e.g., Linux, Solaris, etc.) or as a Windows® brand operating system, or other operating systems.Fig. 3 is a schematic, block diagram illustrating an architecture in which evidence-based replacement of storage nodes may be implemented in accordance with various examples discussed herein. In some examples, the storage nodes may be divided into a primary storage node and two or more secondary storage nodes. In the example depicted in Fig. 3, the storage nodes are divided into a primary storage node 310 and two secondary storage nodes 312, 314. In operation, write operations from a host device are received in the primary node 310. The write operations are then replicated from the primary node 310 to the secondary nodes 312, 314. One skilled in the art will recognize that additional secondary nodes could be added. The example depicted in Fig. 3 depicts two additional secondary nodes 316, 318.In some examples one or more of the storage nodes 130, 132, 134 may incorporate one or more reliability monitors which receive reliability information from at least one component of a storage device (e.g., a disk drive, solid state drive, RAID array, dual in-line memory module (DIMM), or the like) in the storage node and a reliability monitoring engine which receives reliability information collected by the reliability monitor(s) and generates one or more reliability indicators for the storage node(s) 130, 132, 134 from the reliability information. The reliability indicator(s) may then be incorporated into an election process for a failover routine.Fig. 4 is a schematic, block diagram illustrating an architecture for an electronic device in which evidence-based replacement of storage nodes may be implemented in accordance with various examples discussed herein. Referring to Fig. 
4, in some embodiments a central processing unit (CPU) package 400 which may comprise one or more processors 410 coupled to a control hub 420 and a local memory 430. Control hub 420 comprises a memory controller 422 and a memory interface 424. Local memory 430 may include a reliability register 432 analogous to register 232 may be used to store reliability information collected during operation of electronic device 400. In some examples the reliability register may be implemented in non-volatile hardware registers. Memory interface 424 is coupled to a remote memory 440 by a communication bus 460. In some examples, the communication bus 460 may be implemented as traces on a printed circuit board, a cable with copper wires, a fiber optic cable, a connecting socket, or a combination of the above. Memory 440 may comprise a controller 442 and one or more memory device(s) 450. In various embodiments, at least some of the memory banks 450 may be implemented using volatile memory, e.g., static random access memory (SRAM), a dynamic random access memory (DRAM), nonvolatile memory, or non-volatile memory, e.g., phase change memory, NAND (flash) memory, ferroelectric random-access memory (FeRAM), nanowire-based non-volatile memory, memory that incorporates memristor technology, three dimensional (3D) cross point memory such as phase change memory (PCM), spin-transfer torque memory (STT-RAM) or NAND flash memory. The specific configuration of the memory device(s) 450 in the memory 440 is not critical.In the example depicted in Fig. 4 a reliability monitor (RM) logic 446 is incorporated into controller 446. Similarly, reliability monitoring engine (RME) logic 412 is incorporated into processor(s) 410. In operation, the reliability monitor(s) 446 and the reliability monitoring engine 412 cooperate to collect reliability information from various components of the electronic device and to generate at least one reliability indicator for the electronic device.One example of a method for evidence-based elective replacement of storage nodes for electronic devices will be described with reference to Figs. 4 and 5. Referring to Fig. 5, at operation 510 one or more of the reliability monitors 446 may collect reliability information including, but not limited to a failure count (or failure rate) for the storage device, or a failure count (or failure rate) for the storage device. As used herein, the term "fault" refers to any type of fault event for the storage device including read or write errors in the memory of the storage device or hardware errors in components of the storage device. The term "failure" refers to a fault which affects the proper functioning of the storage device.The reliability monitor 446 may also collect information pertaining to an amount of time the storage device spent in a turbo mode or an amount of time the storage device spent in an idle mode. As used herein the phrase "turbo mode" refers to an operating mode in which the device increases the voltage and/or operating frequency when there is power available and sufficient thermal headroom available to support an increase in operating speed. By contrast the phrase "idle mode" refers to an operating mode in which voltage and/or operating speed are reduced during time periods in which the storage device is not being utilized.The reliability monitor 446 may also collect information pertaining to voltage information for the storage device. 
For example, the reliability monitor 446 may collect an amount of time spent at high voltage (i.e., Vmax), an amount of time spent at low voltages (Vmin), and voltage excursions such as a change in current flow over a change in time (dl/dT) events, voltage histograms, average voltage over predetermined periods of time, etc.The reliability monitor 446 may also collect temperature information for the storage device. Examples of temperature information may include the maximum temperature, minimum temperature, and average temperature over specified periods of time, temperature cycling information (e.g., min/max and average temperature over very short periods of time). Temperature differentials beyond a certain threshold - can be indicators of thermal stressIn other examples information from machine check registers that log corrected and uncorrected error information from all over the chip may be used to determine whether a system has experienced high frequencies of corrected or uncorrected errors as another potential indication of reliability issues. Corrected and uncorrected error information for storage device can include error correction code (ECC) corrected/detected errors, errors detected on solid state drives (SSDs), cyclical redundancy code (CRC) checks or the like.In further examples voltage/thermal sensors may be used to monitor for voltage droop, i.e., the drop in output voltage as it drives a load. Voltage droop phenomenon can result in timing delays and speed paths which can result in functional failure/incorrect output (i.e., errors). Circuits are designed to factor in a certain amount of droop, and robust circuits and power delivery systems mitigate or tolerate a certain amount of droop. However, certain data patterns or patterns of simultaneous or concurrent activity can create droop events beyond the tolerance levels designed and result in problems. Monitoring droop event characteristics such as amplitude and duration may impart information relevant to the reliability of a component.At operation 515, the reliability data collected by the reliability monitor(s) 446 is forwarded to the reliability monitoring engine 412, e.g., via the communication bus 460.At operation 520 the reliability monitoring engine 412 receives the reliability data from the reliability monitor(s) 446 and at operation 525 the data is stored in a memory, e.g., in local memory 430.At operation 530 the reliability monitoring engine 412 generates one or more reliability indicators for the storage device(s) using the reliability information received from the reliability monitor(s) 446. In some examples the reliability monitoring engine 412 may apply a weighting factor to one or more elements of the reliability information. For example, fault events may be assigned a higher weight than failure events. Optionally, at operation 535 the reliability monitoring engine(s) 412 may predict a likelihood of failure for the storage device 130, 132, 134 using the reliability storage.At operation 540 one or more of the reliability indicators are used in an election process for a failover routine. For example, referring to Fig. 3, in some examples reliability indicators may be exchanged between nodes or may be shared with a remote device, e.g., a server. 
During a failover process in which the primary node 310 is taken offline or otherwise becomes the secondary node, the reliability indicators may be used in an election process to determine which of the secondary nodes 312, 314, 316, 318 will assume the role of the primary node.Since much of the reliability data is accumulated over time, a single failure, or even periodic reliability issues in the actual detection hardware will not materially affect the final cumulative assessment of the component. Rather, such issues may show up as anomalies in the various reliability detection mechanisms. The selection algorithm may use a combination of evaluations from each of these sources to determine the most reliable system. This combination can be done in a complex fashion taking into account magnitudes of anomalies as well as frequencies of issues observed, hysteresis of degradation trends and the like, or can simply be a weighted average of the most recent accumulated behavior weighted based on system defaults or user preference as to which reliability issues should be deemed worse than others.In some examples, each secondary node 312, 314, 316, 318 may query the reliability information from for all other secondary nodes 312, 314, 316, 318 and independently determine the most reliable secondary node 312, 314, 316, 318 available. As long as this algorithm is the same on each secondary node 312, 314, 316, 318, each secondary node 312, 314, 316, 318 should independently select the same secondary node 312, 314, 316, 318 as being the best, most reliable candidate for election to assume the role of the new primary node. In the case of an error or fault in the selection algorithm on any one secondary node 312, 314, 316, 318, a majority voting scheme may be employed such that the secondary node 312, 314, 316, 318 chosen by the majority of the pool as being the most reliable would be the one selected as the new primary node.As described above, in some embodiments the electronic device may be embodied as a computer system. Fig. 6 illustrates a block diagram of a computing system 600 in accordance with an embodiment of the invention. The computing system 600 may include one or more central processing unit(s) (CPUs) 602 or processors that communicate via an interconnection network (or bus) 604. The processors 602 may include a general purpose processor, a network processor (that processes storage communicated over a computer network 603), or other types of a processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC)). Moreover, the processors 602 may have a single or multiple core design. The processors 602 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 602 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors. In an embodiment, one or more of the processors 602 may be the same or similar to the processorOs 102 of Fig. 1. For example, one or more of the processors 602 may include the control unit 120 discussed with reference to Figs. 1-3. Also, the operations discussed with reference to Figs. 3-5 may be performed by one or more components of the system 600.A chipset 606 may also communicate with the interconnection network 604. The chipset 606 may include a memory control hub (MCH) 608. 
The MCH 608 may include a memory controller 610 that communicates with a memory 612 (which may be the same or similar to the memory 130 of Fig. 1). The memory 412 may store data, including sequences of instructions, that may be executed by the CPU 602, or any other device included in the computing system 600. In one embodiment of the invention, the memory 612 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory may also be utilized such as a hard disk or a solid state drive (SSD). Additional devices may communicate via the interconnection network 604, such as multiple CPUs and/or multiple system memories.The MCH 608 may also include a graphics interface 614 that communicates with a display device 616. In one embodiment of the invention, the graphics interface 614 may communicate with the display device 616 via an accelerated graphics port (AGP). In an embodiment of the invention, the display 616 (such as a flat panel display) may communicate with the graphics interface 614 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 616. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display 616.A hub interface 618 may allow the MCH 608 and an input/output control hub (ICH) 620 to communicate. The ICH 620 may provide an interface to I/O device(s) that communicate with the computing system 600. The ICH 620 may communicate with a bus 622 through a peripheral bridge (or controller) 624, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 624 may provide a data path between the CPU 602 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 620, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 620 may include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices. The bus 622 may communicate with an audio device 626, one or more disk drive(s) 628, and a network interface device 630 (which is in communication with the computer network 603). Other devices may communicate via the bus 622. Also, various components (such as the network interface device 630) may communicate with the MCH 608 in some embodiments of the invention. In addition, the processor 602 and one or more other components discussed herein may be combined to form a single chip (e.g., to provide a System on Chip (SOC)). Furthermore, the graphics accelerator 616 may be included within the MCH 608 in other embodiments of the invention.Furthermore, the computing system 600 may include volatile and/or nonvolatile memory (or storage). 
For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 628), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic storage (e.g., including instructions).Fig. 7 illustrates a block diagram of a computing system 700, according to an embodiment of the invention. The system 700 may include one or more processors 702-1 through 702-N (generally referred to herein as "processors 702" or "processor 702"). The processors 702 may communicate via an interconnection network or bus 704. Each processor may include various components some of which are only discussed with reference to processor 702-1 for clarity. Accordingly, each of the remaining processors 702-2 through 702-N may include the same or similar components discussed with reference to the processor 702-1.In an embodiment, the processor 702-1 may include one or more processor cores 706-1 through 706-M (referred to herein as "cores 706" or more generally as "core 706"), a shared cache 708, a router 710, and/or a processor control logic or unit 720. The processor cores 706 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as cache 708), buses or interconnections (such as a bus or interconnection network 712), memory controllers, or other components.In one embodiment, the router 710 may be used to communicate between various components of the processor 702-1 and/or system 700. Moreover, the processor 702-1 may include more than one router 710. Furthermore, the multitude of routers 710 may be in communication to enable data routing between various components inside or outside of the processor 702-1.The shared cache 708 may store data (e.g., including instructions) that are utilized by one or more components of the processor 702-1, such as the cores 706. For example, the shared cache 708 may locally cache data stored in a memory 714 for faster access by components of the processor 702. In an embodiment, the cache 708 may include a mid-level cache (such as a level 2 (L2), a level 3 (L3), a level 4 (L4), or other levels of cache), a last level cache (LLC), and/or combinations thereof. Moreover, various components of the processor 702-1 may communicate with the shared cache 708 directly, through a bus (e.g., the bus 712), and/or a memory controller or hub. As shown in Fig. 7, in some embodiments, one or more of the cores 706 may include a level 1 (LI) cache 716-1 (generally referred to herein as "LI cache 716"). In one embodiment, the control unit 720 may include logic to implement the operations described above with reference to the memory controller 122 in Fig. 2.Fig. 8 illustrates a block diagram of portions of a processor core 706 and other components of a computing system, according to an embodiment of the invention. In one embodiment, the arrows shown in Fig. 8 illustrate the flow direction of instructions through the core 706. One or more processor cores (such as the processor core 706) may be implemented on a single integrated circuit chip (or die) such as discussed with reference to Fig. 7. Moreover, the chip may include one or more shared and/or private caches (e.g., cache 708 of Fig. 7), interconnections (e.g., interconnections 704 and/or 112 of Fig. 
7), control units, memory controllers, or other components.As illustrated in Fig. 8, the processor core 706 may include a fetch unit 802 to fetch instructions (including instructions with conditional branches) for execution by the core 706. The instructions may be fetched from any storage devices such as the memory 714. The core 706 may also include a decode unit 804 to decode the fetched instruction. For instance, the decode unit 804 may decode the fetched instruction into a plurality of uops (micro-operations).Additionally, the core 706 may include a schedule unit 806. The schedule unit 806 may perform various operations associated with storing decoded instructions (e.g., received from the decode unit 804) until the instructions are ready for dispatch, e.g., until all source values of a decoded instruction become available. In one embodiment, the schedule unit 806 may schedule and/or issue (or dispatch) decoded instructions to an execution unit 808 for execution. The execution unit 808 may execute the dispatched instructions after they are decoded (e.g., by the decode unit 804) and dispatched (e.g., by the schedule unit 806). In an embodiment, the execution unit 808 may include more than one execution unit. The execution unit 808 may also perform various arithmetic operations such as addition, subtraction, multiplication, and/or division, and may include one or more an arithmetic logic units (ALUs). In an embodiment, a co-processor (not shown) may perform various arithmetic operations in conjunction with the execution unit 808.Further, the execution unit 808 may execute instructions out-of-order. Hence, the processor core 706 may be an out-of-order processor core in one embodiment. The core 706 may also include a retirement unit 810. The retirement unit 810 may retire executed instructions after they are committed. In an embodiment, retirement of the executed instructions may result in processor state being committed from the execution of the instructions, physical registers used by the instructions being de-allocated, etc.The core 706 may also include a bus unit 714 to enable communication between components of the processor core 706 and other components (such as the components discussed with reference to Fig. 8) via one or more buses (e.g., buses 804 and/or 812). The core 706 may also include one or more registers 816 to store data accessed by various components of the core 706 (such as values related to power consumption state settings).Furthermore, even though Fig. 7 illustrates the control unit 720 to be coupled to the core 706 via interconnect 812, in various embodiments the control unit 720 may be located elsewhere such as inside the core 706, coupled to the core via bus 704, etc.In some embodiments, one or more of the components discussed herein can be embodied as a System On Chip (SOC) device. Fig. 9 illustrates a block diagram of an SOC package in accordance with an embodiment. As illustrated in Fig. 9, SOC 902 includes one or more Central Processing Unit (CPU) cores 920, one or more Graphics Processor Unit (GPU) cores 930, an Input/Output (I/O) interface 940, and a memory controller 942. Various components of the SOC package 902 may be coupled to an interconnect or bus such as discussed herein with reference to the other figures. Also, the SOC package 902 may include more or less components, such as those discussed herein with reference to the other figures. 
Further, each component of the SOC package 902 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, SOC package 902 (and its components) is provided on one or more Integrated Circuit (IC) die, e.g., which are packaged into a single semiconductor device.As illustrated in Fig. 9, SOC package 902 is coupled to a memory 960 (which may be similar to or the same as memory discussed herein with reference to the other figures) via the memory controller 942. In an embodiment, the memory 960 (or a portion of it) can be integrated on the SOC package 902.The I/O interface 940 may be coupled to one or more I/O devices 970, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 970 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like.Fig. 10 illustrates a computing system 1000 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 10 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to- point interfaces. The operations discussed with reference to Fig. 2 may be performed by one or more components of the system 1000. As illustrated in Fig. 10, the system 1000 may include several processors, of which only two, processors 1002 and 1004 are shown for clarity. The processors 1002 and 1004 may each include a local memory controller hub (MCH) 1006 and 1008 to enable communication with memories 1010 and 1012. MCH 1006 and 1008 may include the memory controller 120 and/or logic 125 of Fig. 1 in some embodiments.In an embodiment, the processors 1002 and 1004 may be one of the processors 702 discussed with reference to Fig. 7. The processors 1002 and 1004 may exchange data via a point-to-point (PtP) interface 1014 using PtP interface circuits 1016 and 1018, respectively. Also, the processors 1002 and 1004 may each exchange data with a chipset 1020 via individual PtP interfaces 1022 and 1024 using point-to-point interface circuits 1026, 1028, 1030, and 1032. The chipset 1020 may further exchange data with a high-performance graphics circuit 1034 via a high-performance graphics interface 1036, e.g., using a PtP interface circuit 1037.As shown in Fig. 10, one or more of the cores 106 and/or cache 108 of Fig. 1 may be located within the processors 902 and 904. Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system 900 of Fig. 9. Furthermore, other embodiments of the invention may be distributed throughout several circuits, logic units, or devices illustrated in Fig. 9.The chipset 920 may communicate with a bus 940 using a PtP interface circuit 941. The bus 940 may have one or more devices that communicate with it, such as a bus bridge 942 and I/O devices 943. Via a bus 944, the bus bridge 943 may communicate with other devices such as a keyboard/mouse 945, communication devices 946 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 803), audio I/O device, and/or a storage storage device 948. 
The storage storage device 948 (which may be a hard disk drive or a NAND flash based solid state drive) may store code 949 that may be executed by the processors 902 and/or 904.The following examples pertain to further embodiments.Example 1 is a controller comprising logic, at least partially including hardware logic, configured to receive reliability information from at least one component of a storage device coupled to the controller, store the reliability information in a memory communicatively coupled to the controller, generate at least one reliability indicator for the storage device, and forward the reliability indicator to an election module.In Example 2, the subject matter of Example 1 can optionally include an arrangement in which the reliability information includes at least one of a failure count for the storage device, a failure rate for the storage device, an error rate for the storage device, an amount of time the storage device spent in a turbo mode, an amount of time the storage device spent in an idle mode, voltage information for the storage device, or temperature information for the storage deviceIn Example 3, the subject matter of any one of Examples 1-2 can optionally include an arrangement in which the logic to generate a reliability indicator for the storage device further comprises logic to apply a weighting factor to the reliability information.In Example 4, the subject matter of any one of Examples 1-3 can optionally include logic to predict a likelihood of failure based upon the reliability information.In Example 5, the subject matter of any one of Examples 1-4 can optionally include an arrangement in which the election module comprises logic to receive the reliability indicator and use the reliability indicator in an election process to select a primary storage node candidate from a plurality of secondary storage nodes.Example 6 is an electronic device comprising a processor and a memory, comprising a memory device and a controller coupled to the memory device and comprising logic to receive reliability information from at least one component of a storage device coupled to the controller, store the reliability information in a memory communicatively coupled to the controller, generate at least one reliability indicator for the storage device, and forward the reliability indicator to an election module.In Example 7, the subject matter of Example 6 can optionally include an arrangement in which the reliability information includes at least one of a failure count for the storage device, a failure rate for the storage device, an error rate for the storage device, an amount of time the storage device spent in a turbo mode, an amount of time the storage device spent in an idle mode, voltage information for the storage device, or temperature information for the storage deviceIn Example 8, the subject matter of any one of Examples 6-7 can optionally include an arrangement in which the logic to generate a reliability indicator for the storage device further comprises logic to apply a weighting factor to the reliability information.In Example 9, the subject matter of any one of Examples 6-8 can optionally include logic to predict a likelihood of failure based upon the reliability information.In Example 10, the subject matter of any one of Examples 6-9 can optionally include an arrangement in which the election module comprises logic to receive the reliability indicator and use the reliability indicator in an election process to select a primary storage node candidate 
from a plurality of secondary storage nodes.Example 1 1 is a computer program product comprising logic instructions stored on a nontransitory computer readable medium which, when executed by a controller coupled to a memory device, configure the controller to receive reliability information from at least one component of a storage device coupled to the controller, store the reliability information in a memory communicatively coupled to the controller, generate at least one reliability indicator for the storage device, and forward the reliability indicator to an election module.In Example 12, the subject matter of Example 11 can optionally include an arrangement in which the reliability information includes at least one of a failure count for the storage device, a failure rate for the storage device, an error rate for the storage device, an amount of time the storage device spent in a turbo mode, an amount of time the storage device spent in an idle mode, voltage information for the storage device, or temperature information for the storage deviceIn Example 13, the subject matter of any one of Examples 1 1-12 can optionally include an arrangement in which the logic to generate a reliability indicator for the storage device further comprises logic to apply a weighting factor to the reliability information.In Example 14, the subject matter of any one of Examples 11-13 can optionally include logic to predict a likelihood of failure based upon the reliability information.In Example 15, the subject matter of any one of Examples 1 1-14 can optionally include an arrangement in which the election module comprises logic to receive the reliability indicator and use the reliability indicator in an election process to select a primary storage node candidate from a plurality of secondary storage nodes.Example 16 is a controller- implemented method comprising receiving reliability information from at least one component of a storage device coupled to the controller, storing the reliability information in a memory communicatively coupled to the controller, generate at least one reliability indicator for the storage device, and forwarding the reliability indicator to an election module.In Example 17, the subject matter of Example 16 can optionally include an arrangement in which the reliability information includes at least one of a failure count for the storage device, a failure rate for the storage device, an error rate for the storage device, an amount of time the storage device spent in a turbo mode, an amount of time the storage device spent in an idle mode, voltage information for the storage device, or temperature information for the storage deviceIn Example 18, the subject matter of any one of Examples 16-17 can optionally include applying a weighting factor to the reliability information.In Example 19, the subject matter of any one of Examples 16-18 can optionally include predicting a likelihood of failure based upon the reliability information.In Example 20, the subject matter of any one of Examples 16-19 can optionally include selecting a primary storage node candidate from a plurality of secondary storage nodes.In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs. 
1-10, may be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a tangible (e.g., non-transitory) machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed herein.Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment.Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter. |
An unlocked, digital sequencer circuit having flexibly ordered leading and trailing edges on the output signals. The sequencer circuit of the invention includes a dual-input latch that detects only leading edges on a first input termfinal and only trailing edges on a second input terminal. A delay line provides successively delayed input signals. Two delayed input signals are coupled to the first and second input terminals of each of two or more dual- input latches that provide a set of sequencer output signals. The sequence o f the output signal edges depends on which delayed input signals are selected to drive each dual-input latch. In one embodiment, the selection of delayed inp ut signals to drive the first and second input terminals of the dual-input latches is programmable. Thus, the sequence of the leading edges on the outp ut signal is programmable, and the sequence of the trailing edges is independently programmable. |
A sequencer circuit, comprising: a triggering input terminal providing a triggering input signal; a first sequencer output terminal providing a first sequencer output signal derived from the triggering input signal; a second sequencer output terminal providing a second sequencer output signal derived from the triggering input signal; a first dual-input latch having a first input terminal on which only leading edges are detected, a second input terminal on which only trailing edges are detected, a third input terminal coupled to the triggering input terminal, and an output terminal coupled to the first sequencer output terminal; a second dual-input latch having a first input terminal on which only leading edges are detected, a second input terminal on which only trailing edges are detected, a third input terminal coupled to the triggering input terminal, and an output terminal coupled to the second sequencer output terminal; a delay line having an input terminal coupled to the triggering input terminal and a plurality of output terminals providing signals delayed from the triggering input signal; and a plurality of interconnections coupling each of the first and second input terminals of the first and second dual-input latches to one of the output terminals of the delay line.The sequencer circuit of Claim 1, wherein the plurality of interconnect lines is programmable.The sequencer circuit of Claim 2, wherein: the sequencer circuit forms a portion of a programmable logic device; and the plurality of interconnect lines is controlled by values stored in configuration memory cells of the programmable logic device.The sequencer circuit of Claim 3, wherein the programmable logic device is a CPLD.The sequencer circuit of Claim 1, wherein: the sequencer circuit forms a portion of an integrated circuit; and the first and second sequencer output signals are used to control a power up sequence for the integrated circuit.The sequencer circuit of Claim 5, wherein the integrated circuit is a programmable logic device.The sequencer circuit of Claim 1, wherein the delay line comprises a plurality of inverters coupled in series, and the output terminals of the delay line are coupled to output terminals of different ones of the inverters.The sequencer circuit of Claim 1, wherein the leading edges are rising edges and the trailing edges are falling edges.The sequences circuit of Claim 1, wherein the leading edges are falling edges and the trailing edges are rising edges.The sequences circuit of Claim 1, further comprising: a third sequences output terminal providing a third sequences output signal derived from the triggering input signal; and a third dual-input latch having a first input terminal on which only leading edges are detected, a second input terminal on which only trailing edges are detected, a third input terminal coupled to the triggering input terminal, 16 and an output terminal coupled to the third sequencer output terminal, and wherein the plurality of interconnections further couples each of the first and second input terminals of the third dual-input latch to one of the output terminals of the delay line.The sequencer circuit of Claim 1, wherein the plurality of interconnections couple each of the first and second input terminals of the first and second dual-input latches to different ones of the output terminals of the delay line. 17 |
CA 02464890 2008-07-08 74842-45 AS'YNCHRONOUS SEQUENCER CIRCUIT WITH FLEXIBLY ORDERED OUTPUT SIGNAL EDGES FIELD OF THE INVENTION The invent.ion relates to digital signal sequencing circuits. More particularly, the invention relates to an unclocked digital signal sequencer having flexibly ordered output signal edges. BACICGRCN= OF THE INKENTYON As integrated circuits (ICs) evolve, operating speeds are continually increasing. Therefore, the amount of time available for exchanging data between different ICs is growing ever shorter. in order to achieve a robust zC, is, circuit designers must take into account the fo].lowing issues. Firstly, race Conditions sometimes occur, where two or more signals are "racing', to arrive at a common destination, e.g., the input terminals of a given circuit. The destination circuit may be designed under the assumption that the signals will arrive at the input terminals of the circuit in a certain order. (While this design technique is preferably avoided, sometimes allowing a race condition can improve the overa].l performance of the circuit.) However, under some manufacturing or operating conditions, the supposedly "slower" signal can actually win the race, i..e., arrive prior to the supposedly faster" signal, Some of these conditions include extreme processing corners, temperatures, and power high voltage values. when such a signal reversal occurs, a temporary glitch can appear in an internal signal or an output signal of the circuit. When the circuit is a state machine, for example, a signal glitch can send the entire state machine into a wrong state. Secondly, sometimes pulses or edges on control signals must occur in a particular order foz' a circuit to functa.on properly. For example, consider a circuit that exchara,ges data stored in blocks A and B. First, the data.from block 1 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 A is latched in a temporary latch. Second, the data from block B is stored in block A. Third, the data from the temporary latch is stored in block B. These three steps must occur in this precise order, or data is lost. This order may be ensured, for example, by providing three enable signals that can only occur in the proper order. A clock signal is often used to ensure that signals become active in a particular sequence. For example, Fig. 1A shows a simple sequencer circuit that uses a clock to produce three sequential signals that can be used as sequential enable signals. Sequencer circuit 100 includes three flip-flops 101-103 connected in series and having outputs A1-A3, respectively. The flip-flops are reset by a reset signal RST and clocked by a clock signal CK. The input DIN to the first flip-flop in the series (101) is created by ANDing (in AND-gate 111) an enable signal EN with the inverted output of flip-flop 101, inverted by inverter 112. Fig. 1B is a timing diagram for sequencer circuit 100 of Fig. lA. While reset signal RST is high, the three flip-flops are reset and the three flip-flop output signals are all held low. When reset signal RST is low and enable signal EN goes high, input signal DIN goes high (time Ti). On the next rising edge of clock signal CK (time T2), the output signal Al of the first flip-flop 101 goes high. Signal Al feeds back through inverter 112 and AND-gate 111 and flip-flop input signal DIN goes low. At the next rising edge of clock signal CK (time T3), flip-flop output signal Al goes low in response to the low value on signal DIN, while flip-flop output signal A2 goes high. 
At the next rising edge of clock signal CK (time T4), flip-flop output signal A2 goes low and flip-flop output signal A3 goes high. At the next rising edge of clock signal CK (time T5), flip-flop output signal A3 goes low. While quite reliable, clock sequencer circuit 100 of Fig. lA cannot be used for all circuits and applications. The delay between sequencer output signals A1-A3 is necessarily limited by the speed of the available clock 2 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 signal CK, which can materially slow the operation of the circuit controlled by the sequencer output signals. Also, at times there is no reliable clock signal available, for example, during an IC power up sequence. An IC power up sequence includes many steps that must be performed in a predetermined sequence. However, during the earlier steps the power high level can be below that required for generating a reliable clock. This situation can be exacerbated in a programmable logic device, where clock signals are generally routed using programmable routing resources. These programmable routing resources cannot route a clock signal until the power ramps up sufficiently to reliably configure the device. Therefore, a programmable logic device might have to provide a separate and non-programmable clock signal to control the power-up sequence. Even in non-programmable devices, if a clock is used to control the power-up sequence additional loading is added to the clock circuitry. Because clock speed is frequently a gating item in IC design, additional loading of the clock signals is to be avoided. Additionally, the various circuits in a device are preferably powered up at the same time. If a clocked sequencing circuit is used to control the power up sequence, the skew on the clock signal between the various circuits must be taken into account and preferably neutralized. Therefore, unclocked sequencing circuits are sometimes used, e.g., for controlling power up sequences. Fig. 2A shows a known unclocked sequencing circuit. Sequencing circuit 200 is a simple delay chain that includes five inverters 201-205 coupled in series. The output of the first inverter 201 provides output signal B1. The output of the third inverter 203 provides output signal B2. The output of the fifth inverter 205 provides output signal B3. Fig. 2B is a timing diagram for sequencer circuit 200 of Fig. 2A. There are two inverters between each pair of 3 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 output signals, so when input signal IN goes low, each of output signals B1-B3 goes high in turn. The sequence of the rising edges on signals B1-B3 is guaranteed. However, there are some drawbacks to this circuit as well. As is clearly shown in Fig. 2B, the output signals occur in a set order, and with set delays between the output signals. Fig. 3A shows a third known sequencer circuit 300 that uses inverters with different trip points to generate output signals at various points of a changing edge of an input signal. By using three inverters with different triggering voltage levels, a slow input signal SIN is detected at three different points in the leading edge of the input signal. These three different points determine the sequence in which the output signals change state. Sequencer circuit 300 includes inverters 301, 311-313, and TP1-TP3. Input signal IN is inverted by slow inverter 301 to provide slow input signal SIN. 
Slow input signal SIN is monitored by inverters TP1-TP3, each of which trips at a different point on the leading edge of a pulse in slow input signal SIN. The outputs of inverters TP1-TP3 are optionally inverted by inverters 311-313, respectively, to provide sequential output signals C1-C3. Fig. 3B is a timing diagram for sequencer circuit 300 of Fig. 3A. When input signal IN goes low, slow inverter 301 starts to change state. Gradually, slow input signal SIN rises. At time t1, inverter TP1 is tripped, causing output signal Cl to go high. At time t2, slow input signal SIN has risen to the point where inverter TP2 is tripped, and output signal C2 goes high. Similarly, at time t3, inverter TP3 is tripped and output signal C3 goes high. When input signal IN goes high again, slow input signal SIN gradually falls. As signal SIN falls back past the trip points of the three inverters TP1-TP3, their respective output signals return to the low state in reverse sequence. A limitation to prior art unclocked sequencer circuits, including those shown in Figs. 2A and 3A, is that 4 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 gates in the circuit must be carefully sized, while processing, operating temperature, and the power high level must all be carefully controlled for the circuits to function predictably. If changes are made in any of these factors, or in the circuits controlled by a sequencing circuit (e.g., altering the loading of the sequencer output signals), then the sequencer circuit must be resimulated. Often, changes must be made to adapt the circuit to the new conditions. A limitation common to all of the sequencing circuits previously described is that the order of the trailing edges on the output signals is fixed. For example, in the circuits of Figs. 1A and 2A, the order of the trailing edges is always the same as the order of the leading edges. In the circuit of Fig. 3A, the order of the trailing edges is the reverse of the order of the leading edges. A sequencing circuit would be much more flexible if the leading and trailing edges of the output signals could occur independently and in any order. For example, given that capability, events controlled by the sequencer output signals could be made either completely sequential or concurrent (overlapping). It is desirable to provide a sequencer circuit that addresses one or more of the limitations described above. SUNMARY OF THE INVENTION The invention provides an unclocked, digital sequencer circuit having flexibly ordered leading and trailing edges on the output signals. The sequencer circuit of the invention includes a dual-input latch that detects only leading edges on a first input terminal and only trailing edges on a second input terminal. A third input terminal provides a triggering input signal. When the triggering input signal is in one state (e.g., low), all trailing edges are ignored. When the triggering input signal changes state (e.g., goes high), the next leading edge (e.g., the next high edge) on the first input terminal is detected and changes the state of the dual-input latch. 5 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 The next trailing edge (e.g., the next falling edge) on the second input terminal is then detected and returns the dual-input latch to its previous state. One embodiment of the invention also includes a delay line, e.g., a series of inverters coupled in series. 
The triggering input signal drives the first inverter, while alternating inverters in the series (e.g., the second, fourth, and sixth inverters) provide successively delayed input signals. Two of these delayed input signals are coupled to the first and second input terminals of each of two or more dual-input latches. The output terminals of the dual-input latches provide a set of sequencer output signals. The order of the output signal'edges depends on which delayed input signals are selected to drive each dual-input latch. The order of the leading edges can be made different from the order of the trailing edges simply by using appropriately delayed input signals to drive the first and second terminals of the dual-input latches. Some embodiments of the invention use high pulses on the input and output signals. In other words, a leading edge is detected when the input signal transitions from low to high, and a trailing edge is detected when the input signal transitions from high to low. In one such embodiment, the dual-input latch is implemented using three NAND gates. Two of the NAND gates are cross-coupled. Of these two NAND gates, a first NAND gate provides the sequencer output signal and is also driven by a third NAND gate NANDing the triggering input signal with a signal from the first input terminal. The second cross-coupled NAND gate is also driven by a signal from the second input terminal. In other embodiments, other implementations of the dual-input latch are used to detect and generate high pulses. other embodiments of the invention use low pulses on the input and output signals. In other words, a leading edge is detected when the input signal transitions from high to low, and a trailing edge is detected when the input signal transitions from low to high. In one such 6 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 embodiment, a dual-input latch is implemented using NOR gates. The latch is otherwise similar to the NAND gate latch described above. In other embodiments, other implementations of the dual-input latch are used to detect and generate low pulses. In one embodiment, the selection of delayed input signals applied to the first and second input terminals of the dual-input latches is programmable. Thus, the sequence of the output signals is programmable. Further, the sequence of the leading edges is programmable, and the sequence of the trailing edges is independently programmable. This embodiment is particularly applicable to programmable logic devices, but is not limited thereto. BRIEF DESCRIPTION OF THE DRAWINGS The present invention is illustrated by way of example, and not by way of limitation, in the following figures, in which like reference numerals refer to similar elements. Fig. lA is a block diagram of a first known sequencing circuit that uses a clock to order output signals. Fig. 1B is a timing diagram for the sequencing circuit of Fig. 1A. Fig. 2A is a block diagram of a second known sequencing circuit that does not require a clock. Fig. 2B is a timing diagram for the sequencing circuit of Fig. 2A. Fig. 3A is a block diagram of a third known sequencing circuit that uses inverters having different trip points to order output signals. Fig. 3B is a timing diagram for the sequencing circuit of Fig. 3A. Fig. 4A is a circuit diagram of a first dual-input latch according to one embodiment of the invention. Fig. 4B is a circuit diagram of a second dual-input latch according to another embodiment of the invention. Fig. 
4C is a flow chart demonstrating the functions performed by the dual-input latch of the invention. 7 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 Fig. 5A is a circuit diagram of a digital sequencer circuit according to one embodiment of the invention. Fig. 5B is a timing diagram for the sequencer circuit of Fig. 5A. Fig. 6A is a block diagram of a generalized digital sequencer circuit according to one embodiment of the invention. Fig. 6B is a first flow chart demonstrating the functions performed by the sequencer circuit of the invention. Fig. 6C is a second flow chart demonstrating the functions performed by the sequencer circuit of the invention. Fig. 7 is a block diagram of a programmable sequencer circuit according to one embodiment of the invention. DETAILED DESCRIPTION OF THE DRAWINGS In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Fig. 4A shows a dual-input latch 400 used with some embodiments of the invention. Dual-input latch 400 comprises three NAND gates. Two NAND gates (402 and 403) are cross-coupled. The second input of NAND gate 403 comes from NAND gate 401, which combines two input signals IN and LE. (In the present specification, the same reference characters are used to refer to terminals, signal lines, and their corresponding signals.) The second input of NAND gate 402 is a second input to the latch, called TE. The signal name LE stands for "leading edge", because the latch only detects leading edges on this input signal. In the embodiment of Fig. 4A, input pulses are high pulses, so leading edges are rising edges. The signal name TE stands for "trailing edge",.because the latch only detects trailing edges on this signal. In this embodiment, trailing edges are falling edges. 8 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 Fig. 4B shows a second dual-input latch 410 that can be used with other embodiments of the invention. Dual- input latch 410 is used when input pulses are low pulses, i.e., leading edges are falling edges and trailing edges are rising edges. Dual-input latch 410 is similar to dual- input latch 400, except that NAND gates 401-403 are replaced with NOR gates 411-413, respectively. The dual-input latches of Figs. 4A and 4B function as shown in Fig. 4C. As shown in step 421, as long as trigger input signal IN is inactive (e.g., low for latch 400, high for latch 410), any leading edge on input signal LE is ignored. In step 422, a leading edge is detected on input signal LE when signal IN is active. In response, a first value is latched into the dual-input latch (step 423). In step 424, a trailing edge is detected on input signal TE. In response, a second value is latched into the dual-input latch (step 425). In summary, when enabled by signal IN, a leading edge on a first signal LE causes the output of the latch to change state. A trailing edge on a second signal TE then returns the latch to its previous value. As can be seen from the embodiments of Figs. 4A and 4B, the state of trigger input signal IN is not relevant to the detection of a trailing edge. Therefore, if the pulse on trigger input signal IN goes away prior to the detection of a trailing edge, the circuit still functions as desired. Fig. 5A shows a digital sequencer circuit according to one embodiment of the invention. 
The sequencer circuit includes a delay line 500, three dual-input latches 400a- 400c, and interconnections connecting various outputs of the delay line to various inputs of the dual-input latches. Delay line 500 includes a series of inverters 501-510. The input to delay line 500 is input signal IN. In the delay line, every two inverters an output signal is extracted, generating delayed input signals delayl-delay5. Delayed input signals delayl-delay5 are sequentially delayed versions of each other, as shown in Fig. 5B. 9 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 The delay Tdl between input signal IN and the first delayed input signal delayl is controlled by the design (e.g., sizing) of inverters 501 and 502. The delay Td2 between delayed input signals delayl and delay2 is controlled by the design of inverters 503 and 504. In the pictured example, minimally sized inverters are used to implement inverters 503-510. Thus, the delay between each pair of delayed input signals is about the same (i.e., Td2), depending on signal loading. Inverters 501 and 502 can be independently sized to ensure that the IN signal arrives before the first LE signal goes high. However, any of these delays can be controlled by the designer to move the edges of the delayed input signals, as desired. In the embodiment of Fig. 5A, the pulses are high pulses, as shown in Fig. 5B. Therefore, the NAND gate implementation of Fig. 4A is used for the dual-input latches. However, other dual-input latches can be used in the various sequencer circuits shown herein, including the NOR gate implementation of Fig. 4B (for low pulses), and other dual-input latches designed for use with high and low pulses. The use of latch 400 is purely exemplary, and is not intended to imply that the circuits and methods of the invention are limited to using this particular latch. Fig. 5B shows the order of the edges on the output signals Dl-D3 for the sequencer circuit of Fig. 5A. Clearly, the selection of the delayed input signals delayl- delay5 to provide the LE and TE inputs for each latch determines the order of the output edges. For example, note that output signal D1 has a rising edge at time L1, because signal delayl supplies the LE input to latch 400a. Similarly, the falling edge of signal D1 occurs at time T5, because signal delay5 supplies the TE input to latch 400a. This simplicity of cause and effect provides a significant advantage compared to known sequencer circuits. The order of the edges of the various output signals can be altered very easily, simply by selecting different delayed input signals to drive the latches. This easy of alteration can provide a significant savings in design time CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 compared to known sequencer circuits, which often require careful redesign and resimulation when the sequence of the output signals is altered. Fig. 6 shows a more generalized block diagram of a sequencer circuit according to another embodiment of the invention. A triggering input signal IN is provided to a delay line 601, which provides a sequence of delayed input signals DLY1, DLY2, ..., DLYn. Delay line 601 can be implemented as a series of inverters, as in delay line 500 of Fig. 5A, or any other delay line implementation can be used. Interconnect block 602 provides various ones of the delayed input signals to dual-input latches 603a, 603b, ..., 603n. Dual-input latches 603a-603n are also driven by input signal IN. The dual-input latches function as shown in Fig. 4C and described above. 
Each dual-input latch provides an output signal OUT1, OUT2, ..., OUTn having a leading edge determined by a first one of the delayed output signals and a trailing edge determined by a second one of the delayed output signals. Fig. 6B is a flow diagram showing a sequence of steps performed by the sequencer circuit of Fig. 6a, for example, as implemented in Fig. 5A. In step 611, a trigger input signal (e.g., IN) is detected. In step 612, a series of delayed input signals (e.g., delayl, delay2...) is generated from the trigger input signal. In step 613, a leading edge is detected on a first one of the delayed input signals (e.g., delayl) while the trigger input signal is active. In response, a first value is latched (step 614), e.g., into dual-input latch 400a. In step 615, a leading edge is detected on a second one of the delayed input signals (e.g., delay2) while the trigger input signal is active. In response, a second value is latched (step 616), e.g., into dual-input latch 400b. In step 617, a trailing edge is detected on a third one of the delayed input signals (e.g., delay4). In response, a third value is latched (step 618), e.g., into dual-input latch 400a. In step 619, a trailing edge is detected on a fourth one of the delayed input signals 11 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 (e.g., delay5). In response, a fourth value is latched (step 620), e.g., into dual-input latch 400b. Also, and concurrently with many of the above steps, the latched values are provided as the output signals of the sequencer circuit (step 621). The steps shown in Fig. 6B can occur in many different sequences, thereby providing a great deal of flexibility. For example, Fig. 6C shows the same series of steps as in Fig. 6B, performed in a different order. In the embodiment of Fig. 6C, steps 613-614, 617-618, and 621 (designated DIL1) are performed by a first dual-input latch, while steps 615-616 and 619-621 (designated DIL2) are performed by a second dual-input latch. Thus, steps 613-614 and 617- 618 can be performed concurrently with, or in an overlapping manner with respect to, steps 615-616 and 619- 620. Further, the first, second, third, and fourth delayed input signals can be selected from any of the sequentially delayed signals provided by the delay line. Also, two or more of the first, second, third, and fourth delayed input signals can be the same signal. Note that Fig. 5A provides only one implementation of the generalized sequencer circuit shown in Fig. 6A. Many other sequencer circuits can be implemented using the block diagram of Fig. 6A. They may have, for example: differently implemented delay lines; different numbers of delays in the delay line; varying numbers of delayed input signals provided by the delay line; varying delays between the delayed input signals; varying numbers of dual-input latches; differently implemented dual-input latches; dual- input latches responding to low pulses rather than high pulses; and different interconnections between the delayed input signals and the inputs to the dual-input latches. These and other variations are encompassed by the invention. Another variation of the novel sequencer circuit provides a programmable interconnect block. This embodiment is particularly applicable to programmable logic 12 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 devices (PLDS), where the programmable nature of the interconnect block allows a designer to modify the sequence of edges on the output signals simply by reconfiguring the PLD. 
For example, the programmable sequencer circuit of Fig. 7 can be used in a CPLD device, where the functionality of the sequencer circuit can be changed by reprogramming the EEPROM cells that configure the device. Fig. 7 shows a sequencer circuit having a programmable interconnect block. The pictured embodiment is similar to that of Fig. 5A, except for the programmable interconnection block. Therefore, only the interconnection block 700 is described here. Appropriately programmed, the circuit of Fig. 7 can be used to implement the sequencer circuit of Fig. 5A. Each dual-input latch requires two delayed input signals, a leading edge signal LE and a trailing edge signal TE. Each of these signals LE, TE is provided by a multiplexer 721-726. The multiplexer is controlled by one or more select signals. In this embodiment, the select signals are stored in programmable memory cells 730. (The programmable memory cells are shown in Fig. 7 as boxes containing an "X".) Each multiplexer 721-726 selects among the available delayed input signals to provide the desired signals to the input terminals of each dual-input latch 400a-400c. The various embodiments of the invention provide many advantages not found in prior art circuits. For example, being digital, the circuits of the invention are easy to simulate. Changes to the sequencer circuits or to the circuits driven by the sequencer circuits do not necessitate extensive resimulation. Any order of the output signals can be achieved. The amount of delay between edges on the output signals is easily controlled by increasing or decreasing a number of delays on the delay line, i.e., selecting different delayed input signals. The order of the output signal edges does not vary with power supply, temperature, or process variations. The circuits 13 CA 02464890 2004-04-26 WO 03/041275 PCT/US02/34296 are technology independent, i.e., easily moved from process to process. Those having skill in the relevant arts of the invention will now perceive additional modifications and additions that may be made as a result of the disclosure herein. For example, the above text describes the circuit of the invention in the context of ICs including programmable logic devices. However, the invention can also be applied to other systems and other ICs. Further, delay lines, inverters, NAND gates, NOR gates, dual-input latches, interconnection blocks, multiplexers, and memory cells other than those described herein can be used to implement the invention. Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection establishes some desired electrical communication between two or more circuit nodes. Such communication may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art. Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents. 14 |
A host Virtual Machine Monitor (VMM) operates blindly, without the host VMM having the ability to access data within a guest virtual machine (VM) or the ability to access directly control structures that control execution flow of the guest VM. Guest VMs execute within a protected region of memory (called a key domain) that even the host VMM cannot access. Virtualization data structures that pertain to the execution state (e.g., a Virtual Machine Control Structure (VMCS)) and memory mappings (e.g., Extended Page Tables (EPTs)) of the guest VM are also located in the protected memory region andare also encrypted with the key domain key. The host VMM and other guest VMs, which do not possess the key domain key for other key domains, cannot directly modify these control structures nor accessthe protected memory region. The host VMM, however, using VMPageIn and VMPageOut instructions, can build virtual machines in key domains and page VM pages in and out of key domains. |
1.A processor for secure public cloud computing includes:A core for executing a first instruction for paging a first virtual machine (VM) client page into a key domain, the execution of the first instruction includes using a A message authentication code (MAC) in an extended page table entry (EPTE) of a page to verify the first VM client page and replace the MAC in the EPTE with a host physical address of the VM client page; andAn encryption engine configured to decrypt the first VM client page in response to the first instruction.2.The processor of claim 1, wherein:The core is further configured to execute a second instruction to page the first VM client page out of the key domain; andThe encryption engine is further configured to encrypt the first VM client page in response to the second instruction.3.The processor of claim 1, wherein the core is further configured to execute a third instruction to create the key domain, the key domain comprising a plurality of protected memory locations to store a plurality of VM guest pages, Including the first VM client page.4.The processor of claim 3, wherein execution of the third instruction includes decrypting an encrypted key domain key to provide to an encryption engine to decrypt the plurality of VM client pages.5.The processor of claim 1, wherein the first instruction is used to specify a first guest physical address to indicate a start of a guest physical address range of the first VM.6.The processor of claim 5, wherein the first instruction is used to specify a second guest physical address to indicate an end of the guest physical address range of the first VM.7.The processor of claim 3, wherein the first instruction is used to specify a host physical address of a first protected memory location to store the first VM guest page.8.The processor of claim 7, wherein the first instruction is used to specify permission for accessing the first protected memory location.9.The processor of claim 1, wherein the second instruction is used to specify a first client physical address to indicate a start of a guest physical address range of the first VM, and to specify a second client physical address to indicate the first The end of the guest physical address range of the first VM.10.The processor of claim 9, wherein the second instruction is used to specify permission for accessing the first VM guest page.11.A system for secure public cloud computing includes:Processor; andA memory coupled to the processor; whereinThe processor is to execute an untrusted host virtual machine monitor to manage execution by a processor of at least one guest virtual machine;The untrusted host virtual machine monitor is to receive an encrypted key domain key, an encrypted client code image encrypted by the key domain key, and an encrypted client control structure encrypted by the key domain key. 
The key domain key is inaccessible to the untrusted host virtual machine monitor;The untrusted host virtual machine monitor is to issue a creation instruction to the processor to create a first key domain, the first key domain including an area of the memory to be encrypted by the key domain key , The untrusted host virtual machine monitor is to additionally verify the encrypted client control structure;In response to receiving the creation instruction, the processor is to create the first key domain and decrypt the encrypted key domain key to generate the key domain key; andThe untrusted host virtual machine monitor is to issue a page-in instruction to the processor to build a first guest virtual machine in the first key domain.12.The system of claim 11, wherein:The untrusted host virtual machine monitor is to issue a startup instruction to the processor to start the first guest virtual machine in the first key domain; andIn response to receiving the startup instruction, the processor is to switch to the first key domain, decrypt the encrypted client control structure to generate a client control structure including client processor state information, and decrypt the encrypted client control structure. Client code mirroring to generate a client code mirroring, and performing the client code mirroring in the first key domain using the client processor state information.13.The system of claim 12, whereinIn response to an event that triggers an exit condition of the first guest virtual machine, the processor is to switch from the first key domain to a second key domain.14.The system of claim 13, wherein the client control structure specifies a protected location of the memory, and the processor is to store the client processor state information at the protected location of the memory.15.The system of claim 14, whereinIn response to the event that triggers the exit condition of the first guest virtual machine, the processor further saves the client processor state information of the first guest virtual machine in the memory of the memory. In a protected positionThe untrusted host virtual machine monitor is to issue a restart instruction to the processor to restart the first guest virtual machine; andIn response to receiving the restart instruction, the processor is to switch to the first key domain, and retrieve the client processor state information of the first guest virtual machine from the protected location of the memory And using the client processor state information to perform the client code mirroring in the first key domain.16.A method for secure public cloud computing includes:The encrypted key domain key received by the untrusted host virtual machine monitor, the encrypted client code image encrypted by the key domain key, and the encrypted client control structure encrypted by the key domain key. 
The key domain key is inaccessible to the untrusted host virtual machine monitor;The untrusted host virtual machine monitor sends a creation instruction to a processor to create a first key domain, where the first key domain includes an area of memory to be encrypted by the key domain key, and the unavailable The host virtual machine monitor is to additionally verify the encrypted client control structure;Creating the first key domain by the processor in response to receiving the creation instruction, and decrypting the encrypted key domain key to generate the key domain key; andA page-in instruction for constructing a first guest virtual machine in the first key domain is issued by the untrusted host virtual machine monitor to the processor.17.The method of claim 16, further comprising:Issuing, by the untrusted host virtual machine monitor, a start instruction to the processor to start the first guest virtual machine in the first key domain; andThe processor switches to the first key domain in response to receiving the startup instruction, decrypts the encrypted client control structure to generate a client control structure including client processor state information, and decrypts the encrypted client control structure. Client code mirroring to generate a client code mirroring, and performing the client code mirroring in the first key domain using the client processor state information.18.The method of claim 17, further comprising:The processor switches from the first key domain to the second key domain in response to an event that triggers an exit condition of the first guest virtual machine.19.The method of claim 18, wherein the client control structure specifies a protected location of the memory, and the processor is to store the client processor state information at the protected location of the memory.20.The method of claim 19, further comprising:In response to the event triggering the exit condition of the first guest virtual machine, storing, by the processor, the guest processor state information of the first guest virtual machine in the memory of the memory In a protected positionIssuing a restart instruction to the processor by the untrusted host virtual machine monitor to restart the first guest virtual machine; andIn response to receiving the restart instruction, the processor switches to the first key domain, and retrieves the client processor state information of the first guest virtual machine from the protected location of the memory And using the client processor state information to perform the client code mirroring in the first key domain. |
A secure public cloud with extended paging and storage integrityCross-reference to related applicationsThis application claims priority from U.S. Provisional Patent Application No. 62 / 719,979, entitled "Secure Public Cloud Using Extended Paging and Memory Integrity," filed on August 20, 2018 under the names of David Durham, Siddhartha Chhabra, Geoffrey Strongin, and Ronald Perez. , Whose disclosure is incorporated herein by reference.Technical fieldEmbodiments relate to the security of public clouds and, in particular, enable consumers of public cloud services to ensure that consumer processes and consumer private data executing in the cloud are protected from others, including public cloud service providers Business).Background techniqueThe term "cloud computing" is used to describe network-based computing (typically via the Internet). According to Wikipedia, "Cloud computing provides shared processing resources and data to computers and other devices as needed. Cloud computing is used to enable shared pools of configurable computing resources such as networks, servers, storage, applications, and services A ubiquitous, on-demand access model that can be provisioned and released quickly with minimal management effort. Cloud computing and storage solutions provide users and businesses with a variety of capabilities to store and process their third-party data centers Data. Cloud computing relies on resource sharing to achieve consistency and economy of scale, similar to utilities on the network (such as the power grid). " (Source: Wikipedia, https://en.wikipedia.org/wiki/Cloud_computing, accessed August 11, 2016, citations omitted.)The current availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization, service-oriented architectures, and autonomous and public computing have led to the growth of cloud computing. As computing needs increase, companies can scale up by requesting additional resources from cloud service providers, and then scale down again as demand decreases.Cloud computing provides resources as a service. "Cloud computing providers provide their" services "according to different models. The three standard models of each National Institute of Standards and Technology (NIST) are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and software As a service (SaaS). These models provide more and more abstractions; thus, they are often depicted as layers in a stack, where infrastructure as a stack is used as the bottom layer; platform as a service is used as the middle layer; and software as a service is used as As the top layer. These layers can be implemented independently of each other. For example, people can provide SaaS implemented on physical machines (bare metal) without using the underlying PaaS or IaaS layer; and instead, people can run programs on IaaS and directly Access it without having to package it as SaaS. "(Source: Wikipedia, https://en.wikipedia.org/wiki/Cloud_computing, accessed August 11, 2016, citations omitted.)"The NIST definition of cloud computing defines the service model as follows:Software as a Service (SaaS). The capability provided to consumers is to use a provider's application running on a cloud infrastructure. Applications are accessible from various client devices through a thin client interface such as a web browser (eg, web-based email) or a program interface. 
Consumers do not manage or control the underlying cloud infrastructure, including networks, servers, operating systems, storage devices, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.Platform as a Service (SaaS). The ability provided to consumers is to deploy applications created or acquired by consumers using programming languages, libraries, services, and tools supported by the provider onto the cloud infrastructure. Consumers do not manage or control the underlying cloud infrastructure, including networks, servers, operating systems or storage devices, but can control the possible configurations of deployed applications and possibly application hosting environments.Infrastructure as a Service (SaaS). The capabilities provided to consumers are to provide processing, storage, networking, and other basic computing resources, where consumers can deploy and run arbitrary software, which can include operating systems and applications. Consumers do not manage or control the underlying cloud infrastructure, but can control the operating system, storage devices, and deployed applications; and may have limited control over selecting networking components, such as host firewalls. "(Source: Wikipedia, https://en.wikipedia.org/wiki/Cloud_computing, accessed August 11, 2016, citations omitted.)One enabling technology for cloud computing is virtualization. "Virtualization software divides physical computing devices into one or more" virtual "devices, each of which can be easily used and managed to perform computing tasks. Hardware virtualization is the virtualization of a computer as a complete hardware platform, its components Some of the logical abstractions or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of the computing platform from the user and instead presents another abstract computing platform, "often referred to as a" virtual machine. " (Source: Wikipedia, https://en.wikipedia.org/wiki/Hardware_virtualization, accessed August 11, 2016, citations omitted.) Software that controls virtualization is called a "supervisor" or "virtual machine monitor" ". The provision and execution of a hypervisor / virtual machine monitor for creating virtual machines on behalf of consumers is an example of a service provided by a public cloud service provider.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a block diagram illustrating a typical virtual machine environment.FIG. 2 is a block diagram illustrating a virtual machine environment according to an embodiment of the present invention.FIG. 3 is a block diagram of a cloud service environment according to an embodiment of the present invention.FIG. 4 is a diagram showing a device that can be used to implement an embodiment of the present invention.5 is a flowchart of a method performed by a consumer of a cloud service according to one embodiment of the present invention.FIG. 6 is a flowchart of a method performed by a cloud service provider according to an embodiment of the present invention.FIG. 7 is a diagram illustrating components of a consumer domain mirroring according to one embodiment of the present invention.FIG. 8 is a diagram illustrating a data physical address according to an embodiment of the present invention.FIG. 9 is a diagram illustrating a virtual-to-physical memory mapping according to one embodiment of the present invention.FIG. 
10 is a diagram illustrating another virtual-to-physical memory mapping according to one embodiment of the present invention.FIG. 11 is a diagram illustrating initial steps performed by a cloud service provider to provide a domain image to a consumer according to one embodiment of the present invention.FIG. 12 is a diagram illustrating a message between a consumer and a cloud service provider for providing a domain image to a consumer according to one embodiment of the present invention.FIG. 13 is a diagram illustrating a consumer providing encrypted domain mirroring according to one embodiment of the present invention.14 is a diagram illustrating messages between components of a cloud service environment for encrypting a domain mirror and establishing a key domain according to one embodiment of the present invention.15 is a diagram illustrating messages between components of a cloud service environment for loading a consumer's encrypted domain image into the memory of a server supporting a key domain according to one embodiment of the present invention.FIG. 16 is a diagram illustrating initialization of a key domain according to an embodiment of the present invention.17 is a flowchart of an operation method of a CPU of a server supporting a key domain in performing a key domain creation operation according to an embodiment of the present invention.FIG. 18 is a diagram illustrating verification of domain mirroring according to one embodiment of the present invention.FIG. 19 is a diagram illustrating messages between components of a cloud service environment for verifying a domain image according to one embodiment of the present invention.20 is a flowchart of an operation method of a CPU of a server supporting a key domain in performing a hash key domain operation according to an embodiment of the present invention.FIG. 21 is a diagram illustrating switching between key domains according to one embodiment of the present invention.22 is a diagram illustrating messages between components of a cloud service environment when executed within a key domain according to one embodiment of the present invention.23 is a flowchart of a method of operating a CPU of a server supporting a key domain in performing a key domain switching operation according to an embodiment of the present invention.24 is a flowchart of a method of operating a CPU of a server supporting a key domain in performing traversal of a paging structure in response to a page miss, according to one embodiment of the present invention.FIG. 25 is a diagram showing the growth of a domain mirror image according to one embodiment of the present invention.FIG. 26 is a diagram illustrating messages between components of a cloud-based environment for a growing domain manager (VMMlet) according to one embodiment of the present invention.FIG. 27 is a diagram showing messages between components for running a domain manager (VMMlet) for a cloud service provider requesting more memory pages from a memory manager according to one embodiment of the present invention surroundings.FIG. 28 is a diagram illustrating messages between components of a cloud service environment that requests additional memory pages when scheduling a VM on a single CPU, according to one embodiment of the present invention.FIG. 
29 is a diagram illustrating a running domain manager (VMMlet) according to one embodiment of the present invention.30 is a diagram illustrating a plurality of virtual machines in a key domain managed by a domain manager (VMMlet) and in a second key domain managed by another domain manager (OSlet) according to one embodiment of the present invention .FIG. 31A is a diagram illustrating determining an integrity row position and a slot from a physical memory address according to one embodiment of the present invention.FIG. 31B is a diagram showing data rows stored in a data memory address space and integrity values stored in an integrity data address space.FIG. 32 is a diagram showing a system that can be used to implement an embodiment of the present invention.FIG. 33 is a diagram showing a system that can be used to implement an embodiment of the present invention.FIG. 34 is a diagram showing a system that can be used to implement an embodiment of the present invention.FIG. 35 illustrates an environment in which an untrusted consumer virtual machine operates in a protected environment in which actions taken by a virtual machine monitor of an untrusted cloud service provider can be verified.Figure 36 shows the data flow of a virtual machine monitor (host VMM) accessing a virtual machine control structure for a guest virtual machine running in a protected key domain.FIG. 37 illustrates a process of an agent editing a virtual machine control structure for a guest virtual machine running in a protected key domain on behalf of a virtual machine monitor action.FIG. 38 illustrates an interrupt handler for a guest virtual machine to protect its virtual machine control structure from being modified by a damaged virtual machine monitor.FIG. 39 illustrates the interrupt handler-driven operation of FIG. 38.FIG. 40 shows the operation of the virtualization exception handler / shim when saving the state of the processor registers when exiting the virtual machine.Figure 41 illustrates the creation of a key domain and the installation of an encrypted client code image in its encrypted memory along with its encrypted control structure (s).FIG. 42 illustrates an alternative process for creating a key domain and installing an encrypted client code image into its encrypted memory along with its encrypted control structure (s).FIG. 43 illustrates one embodiment of a process for a host VMM to verify a proxy VMCS provided by a consumer.44 is a data flow diagram illustrating a data flow for a host VMM requesting an agent to modify an extended page table of another guest virtual machine.FIG. 45 is a diagram showing loading a virtual machine control structure for a guest virtual machine, verifying the virtual machine control structure, starting the guest virtual machine, executing the guest virtual machine code image, and exiting the guest virtual machine to return control to the host virtual machine monitor. flow chart.FIG. 46 illustrates a process for updating a customer VM image for a consumer.Figure 47 illustrates the process of adding pages to a consumer's VM workload for a consumer.FIG. 48 illustrates a key domain architecture according to an embodiment of the present invention.49A and 49B illustrate a method of creating and using a key domain according to an embodiment of the present invention.FIG. 50 illustrates a KeyID within a data physical address according to an embodiment of the present invention.FIG. 
51 illustrates the use of the VMPageIn and VMPageOut instructions according to an embodiment of the invention.52A and 52B illustrate a method of executing VMPageIn and VMPageOut instructions according to an embodiment of the present invention.Figure 53 illustrates end-to-end provisioning of a secure VM according to an embodiment of the invention.detailed descriptionIn today's known virtualization environments, the host virtual machine monitor (VMM / Supervisor (hereinafter simply referred to as "VMM" or "host VMM") has full control over the guest virtual machines (VMs) managed by the host VMM .Host VMM can read / write guest VM memory, modify guest VM control flow (single step, rewind, repeat, debug), read / modify guest VM register status, read / modify guest VM control structure, etc. However, this complete control over the execution of the guest VM poses a security risk: the host VMM is compromised, and the guest VM may be modified such that the secrets and data of consumers residing within the guest VM are exposed.In a typical virtualization environment, by switching from one virtual machine to another, the data structure related to the execution state of the virtual machine is modified by the VMM. These data structures can include virtual machine control structures (VMCS) and memory maps (for example, page tables and extended page tables (EPT)). VMCS is a data structure in memory that exists once for each logical processor of each guest VM, and the guest VM is managed by the host VMM. In a multi-processor system, each processor executing a guest VM at the same time can have a unique VMCS. With each change of the execution context between different VMs, the VMCS is restored for the currently executing VM, thereby defining the state of the virtual processor of the VM. When the execution context is switched from the guest VM (VMExits) back to the host VMM, the same VMCS structure is used to restore the host's processor state from the VMCS host state area.The operating system for the guest VM will use its own page tables to form its own memory map between virtual and guest physical memory addresses (GPA). The VMM then uses an extended page table (EPT) to map the customer's physical address (GPA) to the actual physical address (PA) used by hardware to access physical memory. However, VMM uses these VMM-controlled memory maps to damage guest VMs.The disclosure given in this article introduces a new model for host VMM operation, where the host VMM operates "blindly" without the ability to access data within the guest VM, or the ability to directly access control structures that control the execution process of the guest VM. The guest VM executes in a protected area of the memory, which area and even the host VMM cannot access. In one embodiment, the protected area of the memory in which the guest VM executes is implemented as a key domain, which is encrypted with a key domain key provided by the consumer. The key domain is described in detail below with reference to Figures 1-30. In another embodiment, the protected area of the memory is implemented using a range register, where the designated register prevents the host VMM (and other software) from accessing the protected memory area of the guest VM. 
For the purposes of this application, the protected memory area of the guest VM will be described with respect to the key domain, although the techniques described herein are applicable to protected memory areas implemented using other technologies, so that the consumer's guest VM is not accessible to the host VMM .Virtualized data structures related to the execution state of the guest VM (such as VMCS) and memory mapping are also located in the protected memory area (key domain). These virtualized data structures are encrypted with a key domain key. The host VMM and other guest VMs that do not own the key domain keys of other key domains cannot directly modify these control structures and cannot access the protected memory area.In order to enable the host VMM to manage the execution of the guest VM without directly modifying the control structure of the guest VM, another type of guest VM is introduced, which is referred to herein as a "client agent VM", or "agent" for short. The host VMM launches an agent to operate within the protected key domain in which the guest VM performs, working with the guest VM to protect the guest VM from tampering. In one embodiment, the virtualization environment implements a policy that enables agents to access and modify control structures that control the execution processes and register states of other guest VMs on behalf of the host VMM. By modifying the control structure of other guest VMs, the agent can perform tasks such as loading consumer-supplied images into the guest VMs and creating or modifying additional VMCS and EPT function. It should be noted that the functionality provided by the host VMM in a traditional virtualization environment is instead implemented by the agent when requested by the host VMM, making the agent an intermediary for the host VMM.Furthermore, using the agent as an intermediary between the host VMM and the guest VM allows the agent to verify that the VMM will not misconfigure the guest VM to leak confidential data, inject code or data, or modify the execution process of the guest VM. In addition, the technology disclosed herein enables mutual authentication, in which the host VMM can be assured that the guest VM cannot affect the state of the host VMM, while the guest VM is assured that the host VMM cannot access or affect the state of the guest VM.Thus, in one embodiment, the EPT control structure may also be placed in the protected memory of the guest VM that is not accessible by the host VMM. In order to prevent the guest VM from undermining the security of the host VMM by maliciously modifying the EPT, the running guest VM should not be able to modify its own EPT. It is possible to give another trusted VM with another VMCS access to modify the EPT of another guest VM, but not to give the ability to modify its own EPT. Alternatively, in an embodiment with a single guest VM, the guest VM may use its own memory encryption key (key domain key) to represent the host VMM encrypted memory structure. The guest VM then returns the resulting ciphertext to the host VMM so that it can be installed into the correct memory location on behalf of the guest VM under the control of the host VMM.Using a HashKD (HashKD) instruction, the host VMM can verify that the data structures (such as EPT) created by the customer match the expectations of the host VMM, and do not allow the guest VM to access the host or other guest VM's memory space. 
The HashKD instruction does not reveal the memory content or secret of the guest VM, but generates a representative SHA hash value that the host VMM can use to verify the memory content without calling the guest VM. For example, if the HashKD instruction produces a value that matches the expected hash value of the extended page table (EPT), the host VMM is assured that the guest VM is properly configured with memory and that the guest VM can be safely started.In one implementation consistent with this disclosure, Intel® Virtualization Technology (VT) and Trusted Execution Technology (TXT) and Protected Memory Range or Memory Encryption Technology (not initially accessible to TXT) by VMM Such as Intel® Total Memory Encryption (TME), Integrity TME (TMEi), or Memory Encryption Engine (MEE)). This embodiment removes the virtual machine monitor (VMM) / supervisor code of the public cloud service provider from the trusted code base (TCB) of the customer virtual machine (VM) / workload. These technologies protect consumer workloads from access by the host VMM and still enable the host VMM to maintain full control of the platform and manage guest virtual machines running on the platform.Memory encryption technology protects guest VM workloads from physical attacks and prevents host VMM from accessing VM (encrypted) memory. Cloud service provider software, administrators, and anyone with physical access to the cloud service provider's servers cannot access or modify protected customer VMs.The present disclosure prevents the inclusion of a customer's virtual machine by protecting the consumer's data from being accessed by a cloud service provider, a hosted VMM, by another customer VM, by an administrator or other person with physical access, by the government, etc. Of consumer data. The protection provided using the technology described in this article effectively provides the same level of confidentiality and security as consumers running the same workloads in a private cloud (on-premises). Establish a mutually trusting relationship between consumers and public cloud service providers by enabling consumers to verify that the process of public cloud service providers running in the cloud has not compromised consumer code and data. Similarly, the process by which a public cloud service provider can verify consumers running in the cloud has not compromised the code and data of the public cloud service provider.Referring now to FIG. 1, a block diagram showing components of a typical virtual machine environment 100 is shown. A typical implementation of a virtual machine environment provided in a server of a cloud service provider is shown. Running on the server hardware 110 is a virtual machine monitor (VMM) layer 120. In the typical virtual machine environment 100 shown, the VMM layer 120 is computer software or firmware that creates and runs virtual machines (VMs) (such as VM1 1301, VM2 1302, and VM3 1303) on server hardware 110 of a cloud service provider . Each of the VMs VM1 1301, VM2 1302, and VM3 1303 is shown in FIG. 1 as independent blocks, representing different VMs all under the control of the common VMM layer 120. The VMM layer 120 provides VMM-controlled VMs with access to server resources, such as server hardware 110.The VMM layer 120 uses data structures such as a VM control structure (VMCS) 124 and an extended page table (EPT) 126 to control the execution of the VM. VMCS is a data structure in storage. Each VM exists once, and it is managed by VMM. 
With each change in the execution context between different VMs, the VMCS is restored for the current VM, and the state of the virtual processor of the VM is defined. The extended page table (EPT) is used to start the virtual processors of the VM, with privileges as "unrestricted customers".The software or firmware of the VMM layer 120 is provided by the cloud service provider and is part of the Trusted Computing Base (TCB) of each VM. According to Wikipedia, "A computer system's trusted computing base (TCB) is a collection of all hardware, firmware, and / or software components that are critical to its security. In a sense, a defect or vulnerability that occurs within the TCB. May compromise the security attributes of the entire system. In contrast, parts of computer systems outside the TCB must not be able to misbehave in a way that will leak more of any privileges than they are granted ... modern operating systems Trying to reduce the size of the TCB, making exhaustive checks of its code base (with the help of manual or computer-aided software audits or program verification) feasible. "(See Wikipedia, https://en.wikipedia.org/wiki/Trusted_computing_base , Accessed August 9, 2016.)In the normal virtual machine environment 100 of FIG. 1, the VMM 122 provided by the cloud service provider is in the TCB of each of the VMs VM1 1301, VM21302, and VM3 1303. The VMM 122 is included in the TCB to prevent a particular VM, such as VM1 1301, from viewing, measuring, or trusting the VMM 122 that controls that particular VM. The cloud service provider can change the VMM 122 at any time without the knowledge of the VM VM1 1301 owner. Furthermore, there is no password encryption separation between VMs. If the VMM has been damaged, the damaged VM can access private data in the second VM via the damaged VMM, and the VMM is still trusted by the second VM.In order to receive the VMM that controls the consumer's process / VM is a trusted consumer, most known technologies use hardware to measure software / firmware running on remote machines in the cloud (in this case, VMM 122), and turned back to prove to the consumer that the software / firmware running on the remote machine in the cloud is the version of the software / firmware that the consumer expected. Since the VMM of the public cloud service provider is included in the consumer's TCB, the consumer cannot independently evaluate the trustworthy proof made by the public cloud service provider.FIG. 2 is a block diagram of a virtual machine environment 200 according to an embodiment of the present invention. In this environment, the concepts of key domains and domain managers were introduced. The key domain is a cryptographically separate part of the memory, where access to data stored in a memory location belonging to the key domain requires the associated key domain key to be used to decrypt the data. The domain manager can use the key domain to cryptographically separate data belonging to different owners; in a cloud service environment, the domain manager can use the key domain to cryptographically separate different consumptions belonging to cloud services (such as banking services) Data.For example, in the virtualized environment 200 of FIG. 2, the key domains KD1 2501 and KD2 2502 are used to separate data belonging to different virtual machines VM1 2301 and VM2 2302. The data belonging to each of the virtual machines VM1 2301 and VM2 2302 may include, for example, a consumer secret (such as a bank account number, social security number, etc.) 
belonging to each virtual machine VM1 2301 and VM2 2302. As another example, the data belonging to each of the virtual machines VM1 2301 and VM2 2302 may include secret computer code (also referred to as a code image) to be executed to protect each corresponding virtual machine within the cloud service provider's environment Or mirror for short).The respective domain managers (VMMlets 2221 and 2222), on behalf of their respective host owners VM1 2301 and VM22302, perform a similar role as a virtual machine monitor (VMM, such as VMM 122 of FIG. 1). The domain manager (VMMlet) provides VMM functionality within the VM, rather than a completely separate VMM layer as shown in Figure 1. A domain manager (VMMlet) is a privileged code with the ability to create, exit, and resume execution of a VM. These privileges may be referred to as "VMxroot" functionality and include the ability to execute commands such as virtual machine control structure (VMCS) save / restore, general register (GPR) save / restore, and / or VMexit / VMresume. Furthermore, the domain manager (VMMlet) controls key resources, such as interrupt descriptor tables (IDT), advanced programmable interrupt controller (APIC) instructions, and paging data structures such as page tables and extended page tables (EPT). In some embodiments, the domain manager (VMMlet) portion may consist only of data structures that control the VM, such as VMCS, its associated data structures, and EPTs associated with the VM.A domain manager (VMMlet) restricts access to its host VM to a corresponding cryptographically separate part of the storage called a key domain. The content of each physical storage location belonging to the key domain is hardware-encrypted using the public key domain key. When hardware writes data to a memory location that belongs to the key domain, the key domain key is used to encrypt the data; when hardware reads data from a memory location that belongs to the key domain, the key domain key is used to decrypt the data.In one embodiment, the key domain key is created by a consumer who owns the key domain and is provided directly and securely to the server hardware of the cloud service provider. In other embodiments, a consumer may translate a key provided by another entity, such as a server of a cloud service provider, into another key used to encrypt a memory location belonging to a key domain. In still other embodiments, different keys may be used to encrypt different IP blocks (sets of memory locations) belonging to the key domain; for example, different keys may be used to The key encrypts the IP block including the code of the consumer VM image. Although there are other embodiments within the scope of the present invention, in order to simplify the description of the embodiments herein, this application describes each of the key domains that are encrypted by a key domain key created by a consumer who owns the key domain. The contents of three physical memory locations.If the content of a physical memory location belonging to the key domain is decrypted with the wrong key domain key, the resulting plaintext will be corrupted. Furthermore, if the memory is integrity protected and the content of the physical memory location belonging to the key domain is decrypted with the wrong key domain key, the resulting plaintext will not satisfy the Integrity criteria. 
Although the scope of the present invention does not require that memory locations belonging to the key domain be integrity protected, memory integrity protection can be used to enhance the security of the techniques described herein.In one embodiment, the unused physical address bits (or other metadata passed through the cache) are used to define the key domain. For example, because there may be fewer physical memory locations installed in the system than a physical memory location that can be addressed using a 64-bit physical memory address, the unused most significant address bits can be used in different key domains. Choose between. Two different key domain addresses can be aliases to the same physical memory location. However, when data from the physical memory location is read into the cache, the cache independently maintains the key domain address under full address resolution (for example, including a full 64-bit physical memory address). The key domain address that is uniquely identified when considering the unused physical address bits of a full 64-bit physical memory address determines the key domain to which the physical memory location belongs. By identifying the key domain to which the physical storage location belongs, a key domain key that can be used to decrypt the content of the physical storage location is also identified.The memory manager can choose between different address values for aliases of the same physical memory location; that is, the memory manager can choose between different key domains based on the address alias. In one embodiment, a key domain key created by the owner (consumer) of the key domain is used to calculate an integrity check value (ICV, such as a key-controlled hash message authentication code (HMAC)). The memory manager can access the integrity check value table (or its authorized portion) to determine if the correct key domain key was used to access the data. If the wrong key domain key is used to decrypt the data, the resulting plaintext will be corrupted and will not match the corresponding integrity check value in the integrity check value table.In one embodiment, when data is read into a cache line, the data is compressed to provide space for key domain identifiers / selectors and / or integrity check values (i.e., unused address bits are embedded into Cache line). When writing to memory, the key domain identifier / selector can also be included in the compressed data. When the memory is read for compressed data rows, the actual unused address bits of the specified key domain are compared with the key domain identifier / selector value embedded in the compressed data cache. If the key field values match, the data is decompressed and forwarded to the cache. Compression is an integrity optimization to avoid having to consult the integrity check value table every time the data is accessed in memory. Furthermore, compressing the key field into the cache line alleviates the need for some caches to include the key field identifier as metadata. Although some embodiments of the invention may compress data written to a cache line or memory, there is no need to compress the data to implement the invention.If the key domain value does not match when comparing the actual unused address bits of the specified key domain with the key domain identifier / selector value embedded in the compressed data cache, determine which key domain is currently is authorized. 
If the address used to read the memory corresponds to the current key domain, the data is cleared (ie, the data bits are set to zero) and a cache eviction of the old key domain address is performed. (Although both key domain addresses are aliases to the same physical memory location, the cache holds the key domain address independently under full address resolution.)Referring again to FIG. 2, each of VM1 2301 and VM2 2302 is shown to have its own domain manager (VMMlet) 2221 and 2222. The domain manager VMMlet1 2221 is displayed inside VM1 2301, and the domain manager VMMlet 22222 is displayed inside VM2 2302 to indicate that the code of each corresponding domain manager (VMMlet) is included in the code of the corresponding VM. When a consumer requests a service that requires virtualization, the cloud service provider provides the consumer with a code image that implements the functionality of a domain manager (VMMlet). The domain manager (VMMlet) image provided by the cloud service provider is incorporated into the consumer's domain (VM) image.A consumer who owns VM1 2301 can measure and verify the domain manager (VMMlet) 2221 code before incorporating VMMlet1 2221 into the consumer's domain (VM1 2301) image. By placing the consumer's VM under the control of the entire software stack of the consumer's VM image, including the domain manager (VMMlet), the consumer can measure, verify, and trust for instantiation to run within the consumer's VM Domain Manager (VMMlet) image. Finally, the consumer creates a domain boot image (including a domain manager image) related to the storage location based on the physical address, encrypts the domain boot image with the consumer's own key domain key, and provides the encrypted domain boot image to the startup Domain startup mirrored cloud service provider server.In one embodiment, the consumer creates an encrypted domain boot image in a certified SGX (Intel® Software Protection Extension) enclave on a cloud service provider server. In this embodiment, the domain boot image is encrypted with a key domain key inside the enclave, and the encrypted domain boot image (and any associated ICV values) is written to a memory outside the enclave.When a cloud service provider receives an encrypted domain boot image (including a domain manager image) from a consumer, the cloud service provider can measure, verify, and trust the consumer encrypted domain boot image including the same domain management provided to the consumer Device mirroring. In one embodiment, the cloud service provider's server hardware provides a mechanism for measuring the domain manager portion of the consumer-encrypted domain boot image (creating its hash), so the cloud service provider can then prove that it includes The domain manager image in the consumer-encrypted domain boot image is the same as the domain manager image provided by the cloud service provider (and is therefore trusted by the cloud service provider). In one embodiment, the hash function that measures the domain manager image is location dependent, so that the domain manager image must be loaded into the correct memory location of the memory of the cloud service provider server to be properly decrypted. For example, even if the contents of two different memory locations are the same (eg, all zeros), only a domain manager image loaded into the correct memory location will produce the expected location-dependent hash result. 
The properties of the location-dependent hash verification feature provide a security advantage: When attempting to change the behavior of a domain manager image, an adversary cannot rearrange the encrypted portion of the domain manager image in memory.In this collaborative model, the domain manager image is verified by both the consumer and the cloud service provider. Consumers can trust the domain manager image provided by the public cloud service provider, and trust that the cloud service provider's hardware will enhance the security and confidentiality of the consumer virtual machine (VM). This verification is important to the security of the VM, as the domain manager (VMMlet) has full vmxroot privileges, including performing such things as virtual machine control structure (VMCS) save / restore, general register (GPR) save / restore, and / or vmexit The ability to use commands like / vmresume. Furthermore, the Interrupt Descriptor Table (IDT), Advanced Programmable Interrupt Controller (APIC) instructions, and paging data structures such as the page table and / or extended page table (EPT) are encrypted in the key field. In some embodiments, the domain manager image consists only of VM control structures (such as VMCS) and associated data (such as EPTs that control the consumer's VM behavior), and does not include key domains that can reside in the consumer Code or data used by external VMX root operations.This collaborative model enables consumers to trust privileged software provided by cloud service providers by moving measurement and verification to consumers. Consumers can ensure the security of consumers' own workloads in the cloud guaranteed by the server hardware of the cloud service provider. The cloud service provider can then re-verify that the correct domain manager image is used. This model greatly simplifies the hardware requirements for a truly secure public cloud foundation. No changes are required to the operating system (OS) portion of the virtual machine (VM). Most of the implementation complexity is included in the design of the domain manager (VMMlet), which is software that can be easily patched, updated, measured, and certified. In one implementation, the hardware instructions are used to create a key domain, switch between key domains, and calculate a hash value of the content corresponding to the memory location of the key domain and compare the hash value with the validity for the key domain. The expected hash value of the content is compared to verify the content of the key domain.Referring again to FIG. 2, the processor (included in the hardware 210) responds to commands issued by the memory manager 240 using SwitchKD (Switch Key Domain) instructions on VMs 2301 and 2302 and their respective key domains KD1 2501 and KD22502. Switch between. The result of switching from one key domain to another (e.g., from key domain KD2 2502 to KD12501) is that control over specific physical memory aliases is passed to VMs authorized to access the current key domain KD1 2501 ( 2301). Different hardware key domains accessed via key domain keys prevent information leakage of consumer private data across VMs, and even prevent adversaries from accessing external physical memory manager 240. The key domain identifier / selector (eg, part of the physical address) remains separate from the VM memory area in the cache. 
In one embodiment, instead of the switch key domain instruction, the VMX root vmlaunch / vmresume instruction switches the key domain to a key domain including the VMCS identified by the key domain identifier in the address provided by the vmptrld instruction, vmptrld The instruction loads the pointer to the current VMCS from the address specified in the vmptrld instruction. Vmexit will then switch back to the VMX root key domain or shared memory area.In one embodiment, a portion 212s of the memory 212 is shared and used to communicate across key domain cryptographic encryption boundaries. In other words, shared memory is unencrypted and can be used to pass messages between VMs, otherwise these VMs can only access memory locations that belong to the key domain authorized by each specific VM. Shared memory is shown as having a physical address, one of which is described herein as "k bits" and is disabled. The k bits are used to determine whether the current key domain is used to restrict VM access to memory locations that belong to a key domain (such as one of the key domains KD1 2501 or KD2 2502) or to allow key domains across shared memory 212s Share unencrypted information. The k-bit indicates to the CPU whether the key domain indicated in the physical address should be set to the shared key domain (plaintext /! k) or the currently active key domain (encrypted).The above embodiments have been described with respect to a domain manager (VMMlet) that manages virtual machines, although the present invention is not limited thereto. Similar key domain models can be used to support processes or containers; although there is no corresponding VMM, the OS kernel (or microkernel) serves a similar purpose. Each process or container image in each key domain will have a collaborative OS kernel component (herein referred to as a domain manager or OSlet) measured by the cloud service provider. A domain manager (OSlet) responds to memory manager commands, interrupts, scheduling, resource management, and the like in a manner similar to a domain manager (VMMlet).Referring now to FIG. 3, a block diagram of a cloud service environment according to an embodiment of the present invention is shown. As shown in Figure 3, the network 300 can be used to allow consumers to request services, including virtualization services, from public cloud service providers. As seen, the network 300 can correspond to any type of communication network and can include many different types of computing devices interconnected via a given network, such as the Internet 320.The cloud storage device 310 can be provided as part of a data center including various computing devices, storage devices, and the like. As one example, the cloud storage device 310 can be a storage device including a plurality of storage components, such as a magnetic disk, an optical, or a semiconductor-based storage device. The cloud storage device 310 can serve, for example, as a repository for master copies of various applications, including virtual machine monitor (VMM) applications that instantiate virtual machines to provide services in response to consumer requests. In the embodiment shown in FIG. 1, the master copy of the VMM application is stored in the form of a VMM image 312. VMM image 312 is a software image including a software stack designed to provide a virtual machine platform in the form of a virtual machine monitor (VMM).Thus, as further seen in FIG. 
3, one or more public cloud service provider servers, such as public cloud provider servers 3151 and 3152, can be coupled to cloud storage at the same location, for example as part of the same data center.装置 310。 Device 310. In various embodiments, a public cloud service provider server can be used to service consumer service requests, including virtualization requests. For example, each public cloud service provider server may host one or more virtual machines on behalf of a consumer. In the example shown in FIG. 3, the public cloud provider server 3151 hosts two virtual machines, VM1 3401 and VM2 3402. Similarly, the public cloud provider server 3152 hosts two virtual machines, VM1 3403 and VM2 3404.As shown in FIG. 3, there can be various consumer devices, such as cloud service consumer devices 3301 and 3302. Such a cloud service consumer device may be a personal device for a given user, such as a smartphone, tablet, desktop computer, and the like. Alternatively, the cloud service consumer device may be a server of an organization for consuming the cloud service. In addition, cloud service consumer devices can be emulated via software. In other words, the emulator or simulator can use software to emulate the cloud provider's hardware, so that consumers can run the cloud provider's hardware emulator on the consumer's device.Each of the cloud service consumer devices 3301 and 3302 provides a corresponding cloud service consumer 3311 and 3312 and a corresponding VM image 3321 and 3322. The cloud service consumers 3311 and 3312 may be, for example, client components of a cloud service application for requesting a cloud service. Cloud service consumers such as cloud service consumers 3311 and 3312 are referred to herein as "consumers". The VM images 3321 and 3322 may be stored in a storage device (not shown) coupled to the respective cloud service consumer devices 3301 and 3302. These VM images are provided by the consumer to the cloud service provider and are used to create a secure VM, such as VM1 3401, running on the cloud provider's server 3151.When a secure VM has been established on the cloud service provider's server according to the techniques described herein, the consumer can then use the VM along with the consumer's key to create an additional VM on behalf of the consumer. Thus, once a consumer VM can be securely established in the cloud of a cloud service provider, the VM can then perform all operations of the consumer device in FIG. 3, including creating additional secure VMs.Similarly, consumers can establish secure VMs with multiple cloud service providers, and these secure VMs can use the consumer's secret key to securely interact via a secure communication channel.FIG. 4 is a diagram illustrating a device according to an embodiment of the present invention. A device 400 for securing a public cloud environment according to an embodiment is shown. Device 400 may include any computing device and / or data platform, such as a laptop, personal digital assistant (PDA), media content player, imaging device, mobile Internet device (MID), any smart device such as a smart phone, a smart tablet , Smart TV, computer server, etc., or a combination of them. In addition, the device 400 may include computing functionality (e.g., personal digital assistant / PDA, laptop, smart tablet), communication functionality (e.g., wireless smartphone), imaging functionality, media playback functionality (e.g., Smart TV / TV), etc. 
or any combination thereof (e.g., Mobile Internet Device / MID).The illustrated device 400 includes a memory 412. The memory 412 may be external to the processor 411 (eg, external memory), and / or may be coupled to the processor 411, such as through a memory bus. Further, the memory 412 may be implemented as a main memory. The memory 412 may include, for example, a volatile memory, a non-volatile memory, or the like, or a combination thereof. For example, the memory 412 may include dynamic random access memory (DRAM) configured as one or more memory modules, such as, for example, dual inline memory modules (DIMMs), small outline DIMMs (SODIMMs), etc., read-only memory (ROM) (for example, programmable read-only memory (PROM), erasable PROM (EPROM), electrical EPROM (EEPROM), etc.), phase change memory (PCM), etc., or a combination thereof.The memory 412 may include an array of memory cells arranged in rows and columns, divided into individually addressable storage locations. Thus, access to the memory 412 may involve using addresses of the storage location, such as, for example, a row address identifying a row including the storage memory location and a column address identifying a column including the storage memory location. In addition, a device internal to the device 400 and / or a device external to the device 400 may implement access to the memory 412. Access to the memory 412 may involve, for example, direct memory access (DMA).The memory 412 may be protected using encryption and integrity checks. In one embodiment, an encryption technique called a tunable block cipher is used. The fine-tunable block cipher accepts a second input, called fine-tuning, along with the plain or cipher text input to be encrypted. Fine-tuning, along with the key, selects the arrangement calculated by the password. For example, the spinner function can use the physical memory address as a spinner on the block cipher to bind unencrypted data to the physical memory address. The fine-tuning function 445 may include, for example, an XTS (Fine-tune codebook mode with ciphertext stealing based on XOR-Encryption-XOR / XEX) algorithm, Liskov, Rivest, and Wagner (LRW) algorithms, or a combination thereof .Regarding the integrity of the memory 412, in one embodiment, hardware capabilities based on memory encryption with integrity are used, which is described in US Patent No. 9,213,653 B2 "Memory Integrity", referred to hereinafter as having integrity Total memory encryption engine or TMEi. In another embodiment, memory encryption with integrity is provided by a memory encryption engine (MEE), as described in US Patent No. 8,819,455 "Parallelized Counter Tree Walk for Low Overhead Memory Replay Protection". However, the invention is not limited to these implementations, as any cryptographic encryption mechanism that can provide memory encryption using a memory location-dependent ("fine-tuned") password can be used. Furthermore, any memory integrity mechanism can be used to enhance the security provided by encryption alone, although a memory integrity mechanism is not necessary for the implementation of the present invention.The processor 411 may include any type of processor such as, for example, a microprocessor, an embedded processor, a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a visual processing unit (VPU) , A network processor, a device that executes code to implement the techniques described herein, and the like, or a combination thereof. 
The processor 411 may include one or more cores, such as, for example, core 416 and core 418. The cores 416, 418 may include single-threaded cores, multi-threaded cores including more than one hardware thread context (or "logical processor") per core, or the like, or a combination thereof. Cores 416, 418 may include instruction decoders to identify and / or decode instructions (e.g., from instruction registers) to activate appropriate circuitry to execute instructions, verify that instruction streams (e.g., opcodes, etc.) will be calculated, and so on, or Their combination.For example, the cores 416, 418 may execute one or more instructions, such as a read instruction, a write instruction, an erase instruction, a move instruction, an arithmetic instruction, a control instruction, etc., or a combination thereof. The cores 416, 418 may, for example, execute one or more instructions to move data (e.g., program data, opcodes, operands, etc.) between a register (not shown) and the memory 412 to read data from the memory 412 to The data is written to the memory 412 to perform arithmetic operations (e.g., addition, subtraction, bitwise operation, comparison, etc.) using the data to perform control operations (e.g., branches, etc.) associated with the data, etc., or a combination thereof. The instructions may include any code representation such as, for example, binary code, octal code and / or hexadecimal code (e.g., machine language), symbol code (e.g., assembly language), decimal code, alphanumeric code, high-level programming language code, etc., or Their combination. Thus, for example, a hex code can be used to represent an operation code (such as an operation code) of the x86 instruction set, including a byte value "00" for addition operations, a byte value "8B" for movement operations, and Byte value "FF" for increment / decrement operations, and so on.The processor 411 may include internal storage, such as, for example, one or more levels of processor cache. On the same chip, the processor cache may not be encrypted and / or may share the same die with the processor 411. In addition, the processor cache may be integrated on one or more of the cores 416, 418. The illustrated processor 411 includes a cache 413 that can store data (eg, instructions, operands, program data, etc.) utilized by one or more components of the processor 411. The cache 413 may include any type of cache, such as, for example, an instruction cache, a data cache, a single-level cache, a multi-level cache, a shared cache, a strict inclusion cache, an exclusive cache, etc., or a combination thereof . For example, the cache 413 may include multiple levels of cache, such as level 2 (L2), level 3 (L3), level 4 (L4) or other levels of cache, last level cache (LLC), and / or the like The combination. The cores 416, 418 may check whether the data is located in the cache 413 to execute one or more instructions and / or other data (e.g., program data, etc.), where a cache miss may cause the data to be in a fixed size block (e.g. The cache line) is transferred from the memory 412 to the cache 413.For example, each core 416, 418 may be coupled to a respective memory via a respective memory controller such as a memory controller 417, coupled to a shared memory via a shared memory controller, coupled to a respective memory via a shared memory controller, etc., or they The combination. 
In addition, a shared cache may be coupled with a shared memory controller, multiple caches may be coupled with multiple corresponding memory controllers, and so on, and combinations thereof. For example, the memory controller 417 may be shared between cores 416, 418, may be coupled with a cache 413 (e.g., shared multi-level cache), and the cores 416, 418 may be coupled with memory 412 (e.g., shared DRAM). The memory controller 417 may be coupled with a memory 412 (eg, external memory, DRAM, etc.).The processor 411 further includes a memory encryption engine 415. The illustrated memory encryption engine 415 includes an encryptor 441 that can encrypt unencrypted data. The unencrypted data may include, for example, plaintext data, plaintext data, etc., or a combination thereof. The plaintext data can be subjected to encoding in a special format (e.g. Hypertext Transfer Markup Language (HTML), Rich Text Format (RTF), etc.) and read by an appropriate program (e.g., word processor, text editor, etc.) without Need to decrypt. The plaintext data may include pre-encrypted data, such as, for example, plaintext data to be encrypted before transmission and / or storage. In addition, the plaintext data may include post-decrypted data, such as, for example, data resulting from decryption of the received and / or retrieved data.In addition, the plaintext data may include data that can be encoded in any format, such as audio / video data (e.g., Motion Picture Experts Group (MPEG) data, etc.), image data (e.g., Joint Photographic Experts Group (JPEG) data, etc.), financial Data (such as automatic transfer machine (ATM) transaction data, etc.), etc., or a combination thereof. The plaintext data may include program data such as, for example, at least a portion of a program, an operating system (OS), an application, a virtual machine (eg, a virtual machine monitor (VMM) code, etc.), or the like, or a combination thereof. The plaintext data may also include, for example, instructions including opcodes, operands, etc., or a combination thereof.Unencrypted data may include multiple bits. The multiple bits may include one or more represented in any code (such as binary code, octal code, hexadecimal code, symbol code, decimal code, alphanumeric code, higher-level programming language code, etc., or a combination thereof) Bits (for example, bytes, etc.). For example, the memory reference instruction may include bits for an operation code, bits for an address, and the like, where the bits of the memory reference instruction may be a hexadecimal code (e.g., machine language), a symbol code (e.g., assembly language), etc. Or a combination of them. In addition, multiple bits can be translated into and / or from binary code, where the binary code can be executed by the cores 416, 418, can be sorted in the memory 412, can be fetched from the memory 412, etc., or a combination thereof .The encryptor 441 may include any type of cipher to generate ciphertext data, such as, for example, a block cipher in any desired mode of operation. A block cipher may include a fixed block size, where the block cipher may be repeatedly implemented to encrypt data larger than the block size. For example, block ciphers may include Advanced Encryption Standard (AES) in a Propagated Cipher Block Chaining (PCBC) mode of operation. 
In addition, the block cipher may include a scalable block size.In one example, the block cipher is Threefish, which can be implemented to obtain a scalable block size of any length (eg, 256 bits, 512 bits, 1024 bits, etc.). For example, Threefish can utilize a key (eg, 128 bits) that can include memory addresses and / or locations, and a key that can be the same width as the block. Threefish can use several rounds (eg, 72 rounds) to encrypt 256-bit and 1024-bit blocks, use several rounds (eg, 80 rounds) to encrypt 1024-bit blocks, and so on. Threefish can take advantage of functions MIX, including addition, constant rotation, and XOR. For example, words can be arranged after each set of MIX functions (eg, 2, 4, or 8 by block size). Sub-keys can be injected into the system, for example, every several rounds (eg, 4 rounds), where the sub-keys can be generated from the key, trim, and counter value parts. This key and spinner can give extra words at the end (for example, "exclusive OR" of all other words).The illustrated memory encryption engine 415 also includes a decryptor 442, which can decrypt the ciphertext data to generate unencrypted data. The decryptor 442 may include a reversal of the encryptor 441. For example, the decryptor 442 may include a reverse of AES-PCBC. In addition, the decryptor 442 may include a reversal of Threefish. For example, the subkeys can be applied in reverse order, where each round includes a reverse word arrangement followed by a reverse MIX function. Thus, unencrypted data (such as plaintext data) can be implemented as input to the encryptor 441 to generate unencrypted data (such as ciphertext data) when the unencrypted data is to be stored in the memory 412 (such as a write instruction). An unreadable copy, in which the decryptor 442 can be implemented to decrypt the ciphertext data and generate unencrypted data when the ciphertext data is retrieved from the memory 412 (eg, a read instruction).The memory encryption engine 415 may include a cache line monitor to identify a cache line corresponding to a released address alias from a plurality of address aliases, and a dump clears the identified cache line. The memory encryption engine 415 may also include an integrity check value selector 443 to determine the integrity check value applied to the unencrypted and / or encrypted data rows (eg, aliased by at least one of multiple address aliases). The memory encryption engine 415 may also include a memory initializer to write to a location in the memory without first reading data previously stored at that location in the memory. The memory encryption engine 415 may include an allocator to assign / bind a dump cleared cache line to a data line physical address.The memory encryption engine 415 may further include a cache line interpreter to determine a data physical memory address for each cache line, as illustrated in FIG. 31A, which includes: a data line byte; a data line physical address, including an integrity line Slot selector and integrity row index; and a key field selector formed by unused address bits of the data physical memory address. 
The integrity row index identifies the integrity row address location in memory, and the integrity row slot selector identifies the integrity row slot in the integrity row address, where the integrity row slot value is stored and used to determine whether the address alias effective.The memory encryption engine 415 may further include an alias manager to determine a data line physical address of a plurality of cache lines identifying the alias address, where the alias address is an alias to a single memory location. The memory encryption engine 415 may include an integrity check value calculator to set a key field selector of the cache line with a valid integrity value to designate the cache line as a currently valid address alias. The memory encryption engine 415 may include a data retriever for reading an encrypted data line from a data line physical address of a data physical memory address of a cache line, and a decryptor 428 for decrypting the encrypted data line. The decrypted data line may identify the data line physical address, integrity line index, and integrity line slot selector for the decrypted data line (eg, as illustrated in FIG. 31A). The memory encryption engine 415 may include a slot value interpreter for reading the integrity row slot value stored in the integrity row slot and a means for confirming the integrity row slot value and decrypting data (e.g., a data row). A comparator (eg, integrity verifier 444) that matches between a key domain selector of a data physical memory address. The integrity verifier 444 may determine a mismatch / match between the plain text of the integrity value (e.g., a copy stored in the integrity check line) and the plain text of the data line (e.g., a copied portion of the data line), which indicates complete Incorrect destruction or validity of data and / or data rows. The integrity verifier 444 may further compare the hash value of the data with the expected hash value of the data.The memory encryption engine 415 and / or the alias manager may combine the alias bits (e.g., integrity row slot selector, integrity row index, key domain selector, and / or valid integrity value or some combination thereof) with Data line bytes are stored in separate locations (e.g., alias bit cache lines and / or alias bit memory locations), and the memory encryption engine 415, data retriever, and / or alias manager can retrieve the alias bits and associate them with the request (E.g., a request for data identified by the physical address of the corresponding data row) to make sure that specific access control policies match. In the event that the alias bit fails to compare with the request (e.g., a mismatch result), the memory encryption engine 415 and / or the alias manager may report the mismatch (e.g., issue an alert) as one or more of an error or failure .The memory encryption engine 415 data retriever (or core 416, 418) may read the encrypted data line from the data line physical address of the data physical memory address of at least one cache line of the plurality of cache lines. The decryptor 442 may decrypt the encrypted data line, wherein the decrypted data line identifies a data line physical address, an integrity line index, and an integrity line slot selector of the data line for decryption. 
A comparator (e.g., integrity verifier 444) may identify a mismatch between the stored integrity row slot value and the key domain selector of the data physical memory address of the decrypted data row, and the memory encryption engine 415 and In response to the mismatch identification, the comparator can cause the memory encryption engine 415 or its component dump to clear the cache line and report the mismatch as one or more of an error or failure.The memory encryption engine 415 may further include an integrity value embedder to embed a data line byte with a valid integrity value of the data physical memory address for each cache line. The memory encryption engine 415 may also include a compressor to compress data line bytes embedded with valid integrity values. The encryptor 441 can encrypt compressed data line bytes embedded with a valid integrity value. The memory encryption engine 415 may further include a data line writer to write the encrypted and compressed data line physical address, the effective integrity value to the key domain selector, the data line physical address, and the embedded effective integrity value. A location in memory identified by a data line byte.The memory encryption engine 415 and / or the compressor may determine that the data line bytes of a particular cache line are incompressible and instead attempt to alias bits (e.g., integrity line slot selector, integrity line index, key field Selectors and / or valid integrity values, or some combination of them) are embedded into a data line with data line bytes, and the valid integrity value can be stored separately (e.g., in another cache line and / or memory location, for example) In a separate location).When the ciphertext is to be retrieved from the memory 412 (eg, a read operation), the illustrated ciphertext discussed herein may be decrypted to generate unencrypted data. The illustrated memory encryption engine 415 may further include a fine-tuning function 445 to use the physical memory address as a fine-tuning of the block password to bind unencrypted data with the physical memory address. The fine-tuning function 445 may include, for example, an XTS (Fine-tune codebook mode with ciphertext stealing based on XOR-Encryption-XOR / XEX) algorithm, Liskov, Rivest, and Wagner (LRW) algorithms, or a combination thereof . The fine-tuning function 445 may, for example, extend the original physical memory address, XOR the address with the unencrypted data, and use the key to run the result through the encryptor 441 to bind the unencrypted data to the address.The illustrated memory encryption engine 415 may further include a decoder 447 to decode unencrypted data and identify one or more instructions. For example, when substantially the entire data line (e.g., a 64-byte cache line) is fetched from the memory 102 and decrypted, the uncorrupted unencrypted data (e.g., valid plaintext) may include an opcode. Thus, when the decoder 447 decodes the plaintext data, the decoder 447 can identify the operation code of the instruction set, such as, for example, the x86 instruction set and the like.The illustrated memory encryption engine 415 may further include a key / trim value selector 448 to select a key from a plurality of keys (e.g., a key domain) and / or from a plurality of trims for a physical location in the memory 412 (For example, fine-tuning field). 
For example, the illustrated memory encryption engine 415 may include a function detector to determine a function (e.g., program, middleware, operating system, firmware, virtual machine, VMM, operating system (OS) kernel, etc.) or a portion of a function (e.g., Part of the program, etc.) for the first time, or for the first time being given access to a physical location in the memory 412. In response, when a function (and / or a portion thereof) is given access, the key / fine-tuning value selector 448 may select a key and / or fine-tuning (e.g., a key from a key domain, Different keys from the same key domain, different keys from different key domains, fine-tuning from the fine-tuning domain, different fine-tuning from the same fine-tuning domain, different fine-tuning from different fine-tuning domain, etc.).The key / trim value selector 448 may select a key based on a value determined from bits of a physical memory address of a data row, such as unused address bits. The key field of a particular physical memory location may be defined by a number of unused address bits to be selected to determine the value. For example, a specific physical memory location can belong to a specific key domain at the same time, where unused address bits can be used to define the key domain (e.g., a key domain includes 16 for a single physical memory location utilizing four unused address bits Keys). Thus, the physical memory location can use different keys at different points based on the domain to which the location is mapped. The key / fine value selector 448 may derive the key by, for example, encrypting the value (e.g., 0001, 0010, etc.) with a secret master key (e.g., in a trusted execution environment) that can be protected by the device 400. In addition, the key / fine value selector 448 may derive the key by, for example, retrieving the key from the array of protected keys using the value as a pointer to the array.Also, the key / trim value selector 448 can select a trim by setting a bit of a physical memory address to be used by the trim function 445 as a trim. In this regard, fine-tuning for the XTS mode will include unused address bits and used address bits of the physical memory address. Thus, when the unused address bits are selected / changed by the key / trim value selector 448, different ciphertexts will be generated by different addresses for trimming (even if they actually refer to the same physical memory location).The illustrated memory encryption engine 415 also includes logic 449, which may utilize components of the processor 410, such as, for example, cores 416, 418, encryptors 441, decryptors 442, etc. to maintain (e.g., secure, verify, test, etc.) memory 412 security and integrity.Memory corruption from components (such as internal or external devices, accelerators, etc.) can be detected when these components access memory with addresses that may involve specific key domains or aliases and fine-tuning. These devices can use the current and correct addresses to access the memory. Similarly, and conversely, software that corrupts the memory of such devices can also be detected when incorrect or non-current addresses are used.Although not illustrated in FIG. 4, the device 400 may include other elements on a chip having a processor 411. For example, the processor 411 may include input-output (IO) control logic integrated with the memory encryption engine 415. 
In addition, the device 400 may include, for example, an IO module, sometimes referred to as a south bridge of a chipset, which functions as a host device and may communicate with, for example, a front / rear image sensor (for example, a two-dimensional camera, a three-dimensional camera, etc.), a microphone , Displays (e.g. screens), motion sensors (e.g. accelerometers, gyroscopes, etc.), mass storage devices (e.g. hard drives / HDDs, optical discs, flash memories, etc.), network interfaces (e.g., Cellular phones, WiFi, WiMax Global Positioning System (GPS), spread spectrum (for example, 900 MHz), other radio frequency (RF), etc.). The processor 411 and the IO module may be implemented as, for example, a system on a chip (SoC).In addition, although the examples have shown separate components for illustrative purposes, it should be understood that one or more of the components of the device 400 may be combined and may reside in the same and / or different physical and / or virtual locations , And so on, or a combination of them. For example, logic 449 may include one or more of the components of memory encryption engine 415 to perform its corresponding functionality, and these components may reside in the same or different locations as cores 416, 418, memory 412, etc., or a combination thereof. Further, one or more components of the memory encryption engine 415 may be implemented with computer program code, such as a software value selector that may interface with one or more components of the memory encryption engine 415 implemented in logical hardware.Some of the functionality provided by the device 400 may be delivered by a system-on-chip (SoC) IP block on the memory / DRAM (dynamic random access memory) side of the processor cache, enabling this functionality to be used at the host Software running on a processor (such as a central processing unit / CPU) core and on other IP blocks and accelerators such as general-purpose graphics processing units (GPGPU) and integrated graphics (such as Intel® processor graphics).The illustrated device 400 uses unused physical address bits (and / or other metadata passed through the cache) to manipulate a cryptographically encrypted memory integrity value pointer (e.g., to implement one or more access control policies) such that software The memory allocation routine can control the assignment of pointers (eg, "malloc" and "free"). The device 400 may generally use unused address bits as a key field. For example, the system may have less external physical memory installed than can actually be addressed by a 64-bit physical memory address, so the most significant address bits can be used to choose between different "key domains" because the cache can still These addresses are delivered to the device 400 with full resolution of physical memory addresses. The illustrated device 400 can use 5-level paging and 64-bit addressing for virtual memory to allow software memory allocators / managers (e.g., memory manager 240 of FIG. 2) to differ in aliases to the same physical memory location Choose between address values. The software memory allocator / manager can control the integrity value table (or its authorized portion) to determine which alias is currently valid, so that the use of invalid aliases / addresses by the software can then cause a failure in the hardware that can be reported to Software monitor to handle memory violations.5 is a flowchart of a method performed by a consumer of a cloud service according to one embodiment of the present invention. 
In a "request service from a cloud service provider" block 502, a consumer requests a service from a cloud service provider. For example, the request may be for a virtualized service, or the request may be to perform a transaction, and the cloud service provider will build a virtual machine or other process for the transaction to perform the transaction.The cloud service provider identifies a server or set of servers that support the key domain to serve consumer requests. In the "Receive domain manager image and storage location-related address information from cloud service provider" box 504, the consumer receives the domain manager image and storage location-related address information from the cloud service provider, which is also referred to herein as a repair variable information. The memory location related address information accurately identifies the physical location in the memory of the server (s) that is servicing the consumer's request. The address information related to the memory location may include the physical address of the page in the memory of the server (s) serving the request of the consumer, the physical address of the page table, control register information (such as CR3 value), interrupt Descriptor table register information and so on. The domain manager image may include a page table structure (s) that maps the linear / virtual address of the domain manager image to the physical address where the domain manager image will be located in the storage of the cloud service provider server.Control is then passed from the "Receive domain manager image and storage location-related address information from the cloud service provider" box 504 to the "Measure domain manager image" box 506, where the consumer measures the domain manager image to ensure the domain manager The mirror is not damaged. Consumers can verify domain manager images using known whitelisting techniques, such as calculating a hash of the domain manager image and comparing the hash value to the master hash value of the master domain manager image (which is known to be uncorrupted) Compare; source code can be checked and recompiled to match the image; government certification of the image can be verified; it can be confirmed that the image is consistent with open source software, etc. If the image does not leak consumer data, the image is considered trustworthy. For example, if a consumer's secret key is used to protect all communications, and a file / memory page is encrypted and integrity checked when saved and / or restored to or from a storage device, then the mirroring can be considered trustworthy.From the "Measuring Domain Manager Mirror" box 506, control passes to a "verification" decision point 508. If the domain manager image is not verified, control passes to an "error" box 522, where the consumer handles a situation where the cloud provider's domain manager image has not been verified. In this case, the consumer may choose not to use the services of that particular public cloud service provider.If the domain manager image is verified at the "verification" decision point 508, then control passes to the "create domain boot image" box 510. At block 510, the consumer creates a domain boot image that will be executed on the cloud service provider's server to "boot" the key domain. 
The activation key domain may include, for example, creating a key domain, having the hardware use the key domain key to encrypt data stored in a memory location belonging to the key domain, and storing data in a memory location belonging to the key domain ( Such as code to be executed to initially establish the key domain).In one embodiment, the consumer uses the storage location-related address information provided by the cloud service provider in the "Receive domain manager image and storage location-related address information from the cloud service provider" box 504 to modify the provider offer Domain Manager image as part of the code to be executed to launch the key domain. For example, the consumer may modify the page table mirrored by the domain manager so that given the physical memory address where the domain manager mirror is located, the physical address in the page table is updated (repaired). Once the paging structure is updated, all linear / virtual addresses used by the code, data, and programs mirrored by executing the domain manager will be mapped to the correct corresponding physical memory addresses on the cloud service provider's server. In one embodiment, the consumer uses the consumer's key domain key to encrypt the repaired domain manager image, and uses the consumer's key domain key to create an integrity check value (ICV) for the encrypted repair domain manager image ).In one embodiment, the consumer creates a domain boot image that includes an encrypted repair domain manager image for distribution to a cloud service provider server. The consumer also includes the secret key in the domain boot image for paging, migration, attestation, communication, and other functions provided by the domain process being executed (e.g., VM, operating system, etc.). When the domain boot image is encrypted, the corresponding page table structure included in the domain boot image is also encrypted.Because the domain boot image is encrypted (and integrity checked) using a memory-related "fine-tuning" password, the adversary cannot move parts of the domain boot image around the memory. The page table maps the programs and data of the domain boot image to the correct physical memory address on the cloud service provider's server. Therefore, given a domain boot image is cryptographically bound to the correct physical memory location, the program behaves It cannot be changed maliciously. In other words, if the domain boot image is not loaded into the correct physical storage location on the cloud service provider's server, the domain boot image cannot be decrypted correctly. Furthermore, the integrity check value can detect any attempt to modify the contents of the domain boot image and / or where the domain boot image is loaded into memory.Control is passed from the "create domain boot image" box 510 to the "verify the certificate of the server / group supporting the key domain and obtain the public key of the server / group supporting the key domain" box 512.In block 512, the consumer verifies the certificate of the identified cloud service provider server / group and obtains the identified public key of the server / group supporting the key domain.Control passes from block 512 to "exchange key domain keys with a verified server (s) supporting key domains" block 514. The consumer exchanges the key domain key with the key domain-supporting server (s) verified in block 512. 
One aspect of key domain key exchange is that the key domain key is provided directly by the consumer in encrypted form directly to the hardware of the server supporting the key domain (such as the memory encryption engine 415 of FIG. 4). Because the software of the server supporting the key domain does not receive the key domain key, the server software supporting the key domain cannot decrypt the contents of the key domain without requesting hardware to perform the decryption. In one embodiment, the consumer uses the public key of the server / group obtained in block 512 to encrypt the consumer's key before providing the encrypted key domain key to the hardware of the server supporting the key domain. Domain key.In another embodiment, the key domain key may be negotiated between the consumer and server hardware. Key domain keys can be generated directly in hardware (e.g. microcode, firmware, CSME, SMM), where the server hardware can provide its unique (or group) identity and public key / CERT, and then the Diffie Hellman key exchange (or RSA ) Can complete key domain key exchange with consumers. This embodiment requires the consumer to be online to perform a key exchange when domain mirroring is initiated.This key exchange enables a virtual machine running on an authenticated key domain-enabled server to access domain boot image data encrypted with the consumer's key domain key without exposing the key domain key itself. Encrypted messages are passed through a cloud service provider's software stack on a server that supports key domains. Server hardware that supports key domains provides password-encrypted endpoints for commands. For example, the consumer can encrypt the key for the key domain with the server's public key and send the encrypted key domain key to the cloud service provider. The cloud service provider may then issue instructions on the server hardware supporting the key domain, such as a Create Key Domain (CreateKD) instruction to create a new key domain. In addition, providers can use the same Create Key Domain (CreateKD) instruction to re-create the key domain, for example, if the VM has been suspended and will be restarted.Control is passed from the "Exchange Key Domain Key with Authenticated Key Domain (s) Supported Server (s)" box 514 to "Encryption of Key Domain Supported Servers Including Authentication Used to Exchange Key Domain Keys" Domain Manager Mirrored Domain Start Mirroring "box 516. Once the key domain key is established (or before, when it is the consumer's key), the consumer uses the key domain key to encrypt the domain boot image, including for the consumer to exchange the key domain secret with Domain manager image of a particular server with a key. Given the address information related to the memory location provided by the cloud service provider as the repair variable information, the consumer encryption domain initiates the image. In one embodiment, an encryption technique called a tunable block cipher is used. The fine-tunable block cipher accepts a second input, called fine-tuning, along with the plain or cipher text input to be encrypted. Fine-tuning, along with the key, selects the arrangement calculated by the password. When encrypting the domain boot image of the consumer, the physical memory address of the server supporting the key domain is used as a fine-tuning so that the resulting encrypted boot image memory locations are related. 
The encrypted boot image is described as being memory location dependent because the encrypted boot image must be loaded into the correct physical memory address of the cloud service provider server before it can be properly decrypted.In one embodiment, the mirroring is initiated using a XEX-based spinner codebook mode (XTS) encryption domain with ciphertext stealing. Consumers use page address fine-tuning and key domain keys to start mirroring in memory location-related XTS mode encrypted domains. The correct physical address to which the domain boot image will be loaded is included in the XTS trim of each encrypted block. In other embodiments, other fine-tunable passwords may also be used, such as Liskov, Rivest, and Wagner (LRW) or counter mode passwords.The consumer can also use the key domain key to calculate the integrity check value (ICV, such as key-controlled hash message authentication code (HMAC)) of the domain image. In one embodiment, the integrity check value is also memory location dependent, such that the address / memory location of the corresponding data row in memory is taken into account when verifying the integrity of the data. In the case where the consumer knows the address location of the ICV table on the server corresponding to the encrypted boot image of the consumer, the consumer may include the ICV value in the encrypted boot image. Using fine-tuning that indicates the correct server memory address of the ICV table, the ICV value table may also be encrypted with a key domain key. The cloud service provider server then loads the ICV portion of the encrypted boot image into the correct slot of the ICV table at those same server memory addresses of the ICV table.From "Domain Encryption of Domain Manager Mirror Image Encrypting Key Domain-Supported Server for Authentication of Key Domain Key Exchange" box 516, control is passed to "Establish Key Domain with Server Supporting Key Domain" Block 518. In block 518, the consumer sends a request to create a key domain to a server that supports the key domain. The request may include an encrypted key domain key that is used as an input value for a Create Key Domain (CreateKD) instruction to be executed by a processor of a server that supports the key domain. The key domain selector / identifier to be used is a local decision made by the cloud service provider's memory manager because the cloud service provider's memory manager needs to manage a restricted key domain name space. The consumer does not need to know the key domain selector / identifier, and the value of the key domain selector / identifier can be changed by the cloud service provider to avoid local conflicts. The actual key domain key provides security for the consumer's VM image, and the key domain selector / identifier tells the cloud service provider server's hardware key domain key which slot / register is currently stored locally .From block 518, control passes to "send encrypted domain boot image to server (s) supporting key domain (s)" box 520. The encrypted domain boot image is sent to the cloud service provider, and the cloud service provider's software stack on the server supporting the key domain loads the domain boot image into the memory at the correct physical memory address (i.e., k of the memory Bit Off (ie, unencrypted) area).FIG. 6 is a flowchart of a method performed by a cloud service provider according to an embodiment of the present invention. 
Control begins with "provide a consumer with a domain manager image in response to a consumer request for a service" box 602. The consumer's request may be specific to a virtualized service, or the consumer's request may be to perform a transaction that the cloud service provider will perform for the consumer via a virtual machine or other process.Control proceeds from "Provide domain manager image to consumer in response to consumer request for service" box 602 to "Allocate space for domain manager image and provide storage location-related address information to requesting consumer" box 604 . In this box, the cloud service provider allocates space in storage for the domain manager image and notifies the requesting consumer of the address information related to the storage location of the allocated storage space. The address information related to the memory location may specifically include a physical address of a page in the memory, a physical address of a page table, control register information, interrupt descriptor table register information, and the like. The memory location related address information may also include expected entry points. As an alternative embodiment, the cloud service provider can create a domain image that the consumer can re-verify as being correct.As mentioned above with reference to the "Request a service from a cloud service provider" block 502 in FIG. 5, the cloud service provider may identify a set of servers that can provide key domain capabilities. For example, each server in a group of servers can use the same key, called a group key, such as a group public authentication key for Direct Anonymous Proof / Enhanced Privacy Identifier (DAA / EPID). DAA is a digital signature algorithm that supports anonymity. Unlike traditional digital signature algorithms, where each entity has a unique public verification key and a unique private signing key, the DAA provides a public group public verification key associated with many (usually millions) unique private signing keys key. The DAA was created to make it possible for the device to prove to the outside what type of device it is (and optionally what software is running on the device), without the need to provide the device identity, ie to prove that the device is authentic and reliable in a group Without revealing which member it is. EPID enhances the DAA by providing an additional utility that is able to revoke the key given a signature created by the private key, even if the key itself is still unknown.From block 604, control proceeds to "exchange key domain key with consumer" block 606, where the server that supports the key domain obtains the key domain key from the consumer. The key domain key is provided by the consumer as an encryption key, where the consumer's key domain key has been encrypted with the public key of a server that supports the key domain. In one embodiment, the memory manager of the server supporting the key domain causes the encrypted key domain key to be written into a slot / register of the server supporting the key domain, and the memory encryption engine (such as the memory of FIG. 4) The encryption engine 415) reads the encrypted key domain key from the slot / register and decrypts the key domain key using the private key of the server supporting the key domain.From block 606, control proceeds to "load the domain boot image into the allocated space in memory in response to the consumer providing the domain boot image" box 608. 
When a consumer provides a VM workload to a server that supports key domains, the consumer provides a domain boot image encrypted with the consumer's key domain key. The server supporting the key domain loads the domain boot image into the physical memory space allocated in block 604. The domain boot image is installed in the physical storage on the server of the cloud service provider at a physical storage location that communicates with the consumer via address information related to the storage location. A shared, unencrypted memory location may be made available by a cloud service provider (e.g., using a portion of a physical address, such as k bits) to initially load the encrypted boot image into memory.Because multiple servers can share the same public key, identifying address information related to memory locations may require resolving memory conflicts between multiple servers. In one embodiment, a memory location conflict between multiple servers in a group is resolved because the location-dependent image in the server's memory is a consumer's domain-initiated image, which can be temporary. That is, a domain boot image is used to launch a consumer's larger domain (VM) image, which can be paged anywhere in storage selected by the cloud service provider. After launching the consumer's larger domain image, the location-dependent portion of the encrypted image can be removed from the memory (the function of launching the consumer's larger domain image has been performed). Thus, the storage usage can be managed by the cloud service provider, which makes room for the location-related boot image, uses the location-related boot image to boot the rest of the domain image into variable memory, and then releases the Space occupied by boot images (for example, to make room for different domain boot images for different key domains that happen to overlap those same memory locations).Under software control, consumer domain mirroring can be initiated in multiple stages. The first stage is to execute the domain boot image, which is encrypted by the consumer based on the address information related to the memory location. The second phase is to start the rest of the consumer's domain image, which does not need to be loaded into the specific physical storage location on the cloud service provider's server.From block 608, control proceeds to "Create and Initialize Key Domain in Memory" block 610. In one embodiment, a server that supports key domains receives a request from a consumer to create a key domain. The request may include an encrypted key domain key that can be used as an input value for a Create Key Domain (CreateKD) instruction to be executed by a server that supports the key domain. The CreateKD instruction can also initialize the new key domain by stalling the processor core, dumping the cache and translation lookaside buffer (TLB) of the old key domain, and initializing the memory encryption engine with the new key of the key domain. Initializing the memory encryption engine with the new key domain key may include writing the key domain key to a memory slot / register accessible by the memory encryption engine hardware. Alternatively, these initialization functions may be performed via separate initialization key domain (InitKD) instructions.From block 610, control proceeds to "measurement domain initiation mirroring" block 612. The cloud service provider verifies that the expected domain manager image exists in the consumer's encrypted domain boot image. 
This verification ensures that privileged code such as VMX root components and data structures are included in the consumer's encrypted domain boot image.In one embodiment, the memory manager uses a hash key domain (HashKD) instruction to verify that the pages of the domain-initiated image include the provider's domain manager (VMMlet) image. "Secure Hash" functions, such as the Secure Hash Algorithm 3 (SHA3) defined by the National Institute of Standards and Technology (NIST), are used to calculate the domain manager image of a provider within the domain boot image used for encryption Hash value. The secure hash algorithm uses a hash function to transform the data. The hash function can be an algorithm that includes bitwise operations, modular addition, and compression functions. The hash function then produces a fixed-size string, which looks nothing like the original input string. These algorithms are designed as one-way functions, which means that once the original input data has been transformed into a hash value, it is practically impossible to transform the hash value back to the original input data.The cloud service provider can verify that a domain manager image exists in the consumer's domain boot image by constructing a domain manager image based on the consumer's encrypted domain boot image in local storage. The cloud service provider can then execute the same verification function (ie, hash function) used by the HashKD instruction on the contents of the local memory location used to construct the domain manager image. If the verification function (hash) value of the contents of the local memory location matches the result of the hash KD instruction, the cloud service provider can be assured that the provider's domain manager image is properly merged into the consumer's encrypted domain startup Part of the mirror.In one embodiment, the HashKD instruction can provide a hash value for a cache line, or in another embodiment, the HashKD instruction can provide a hash value for up to one page of memory at a time.In one embodiment, the HashKD instruction only provides a hash value so that no consumer secrets in the domain boot image are leaked to the cloud service provider. Consumer secrets can be in the non-root part of the VMX of the domain boot image, for example, as part of an operating system running on a domain manager (VMMlet). Providing the hash value only as a result of the HashKD instruction enables the cloud service provider to verify only the portion of the provider (the domain manager image portion) of the encrypted domain boot image. Consumer-modified parts (including consumer secrets) that initiate mirroring independently of the crypto domain verify the provider's part to prevent consumer secrets from being disclosed to the cloud service provider.From block 612, control proceeds to a "verification" decision point 614. If the domain-initiated mirroring measurement has not been verified, control passes to "error" box 626, where the cloud service provider can report to the consumer that the verification has failed. 
If the mirroring measurement is verified at the "verification" decision point 614, then control passes to the "Perform consumer's domain-initiated mirroring and verification entry point" box 616.In the "Perform Consumer's Domain Boot Image and Validate Entry Point" box 616, the server domain's stack supporting the key domain will execute the consumer's domain boot image at the expected entry point (e.g., via a memory location related address ("Repair Variable" ) Information provided to consumers). The memory manager VM loads consumer-encrypted domain boot images into unencrypted memory pages (where k bits are disabled). A new key domain is initiated.In one embodiment, a processor of a server supporting a key domain executes a Switch Key Domain (SwitchKD) instruction, providing as input the destination key domain identifier / selector, entry point address, and control register information. Further, in one embodiment, a keyed hash message authentication code (HMAC) calculated by the consumer (e.g., using a key domain key or a derivative thereof) is used to verify that the entry point address and control register information are correct.Prior to performing domain boot mirroring, a server that supports key domains can turn off interrupts. In one embodiment, the first instruction executed after switching the key domain is a special ENDBRANCH-like instruction representing the expected entry point for key domain switching. The destination domain manager (VMMlet) code after the ENDBRANCHKD instruction verifies that the VMM is in protected mode. The target domain manager (VMMlet) code also verifies that the control registers and interrupt descriptor table registers are correct. The destination domain manager (VMMlet) code then re-enables interrupts and resumes execution from the saved state.In one embodiment, the HMAC function is used to implement the SwitchKD instruction to verify the consumer's domain boot image. This implementation is the preferred embodiment of SwitchKD because it is the most flexible. Consumers can use secrets established using server hardware (e.g., a key domain key or a derivative thereof) to calculate HMAC (e.g., SHA3 HMAC) (e.g., authentication Registers for processors such as instruction pointers, stack pointers, CR0, CR3, CR4, IDTR, GDTR, LDTR, any MSR, etc. that can affect VM security). The HMAC implementation of the SwitchKD instruction can be dynamically established by the consumer's domain mirroring, and multiple entry points can be supported by calculating multiple HMACs. Each unique valid entry point in the consumer's domain mirroring has one HMAC. This flexibility of using HMAC to dynamically define a new entry point allows the server to start mirroring from the original encrypted domain, perform the original crypto domain boot mirror at a fixed initial entry point, and then start mirroring the domain internally (from within the key domain) To a new dynamically assigned memory location (according to the provider's memory management policy), and a new entry point location established for the new dynamically assigned memory location. Now, the original domain boot image can be released by the cloud service provider and the static storage location to which the original domain boot image is cryptographically bound, leaving only dynamically reassigned VM images in storage and managed by the provider's storage Location defined by the browser software. 
In this way, even if multiple initial boot images of different consumers happen to overlap in memory, they can still be loaded sequentially, shifted to dynamic memory locations, and the memory locations of this domain boot image are released for the next consumer domain boot image , And so on, where each dynamic mirror is created, each ongoing domain mirror uses the key domain key of the consumer for the new entry point to recalculate the HMAC.Alternatively, when creating a new key domain (CreateKD), the entry point values (instruction pointer register, stack pointer register, control register, interrupt descriptor table register, etc.) can be established by the consumer with a server that supports the key domain, And verified by the cloud service provider.When a server that supports the key domain performs domain boot mirroring, the page table is referenced by the processor's control register (ie, CR3), which specifies the physical address of the root of the page table structure. When switching to the key domain, the control register must be set to the correct value. In one embodiment, a Switch Key Domain (SwitchKD) instruction includes a keyed hash parameter, such as a SHA3 HMAC. The keyed hash parameter is used to ensure that the processor of the cloud service provider server is operable to use the correct page table structure within the image when performing the domain-initiated image (and thus, all memory mappings are correct). The key control hash parameter is used to confirm that the state of the cloud service provider's server processor is correct when entering the domain boot image, because the processor will compare the state of the processor control register, instruction pointer, and stack of the cloud service provider server Pointers, etc. to verify the keyed hash parameter (HMAC).From the "execute consumer's domain launch image and verify entry point" box 616, control proceeds to "load the rest of the consumer's domain image into memory" box 618. A server that supports key domains loads the rest of the consumer's domain image into memory. The rest of the consumer's domain image may include, for example, the rest of the domain image 2532 of FIG. 25, including the operating system (s), application (s), scripts, or other code.From the "Load the rest of the consumer's domain image into memory" box 618, control then proceeds to "the additional page of the consumer's domain image is verified using the secret key included in the domain boot image" box 620. A running verified domain image can now use the secret key from the domain boot image to authenticate additional pages of the consumer's domain image. For example, a domain boot image can include secret keys for paging, migration, attestation, communication, and other functions.From the "Use the secret key included in the domain boot image to verify the additional page of the consumer's domain image" box 620, control proceeds to the "Perform security operations in the consumer key domain" box 624. Once the consumer domain (VM) image has been properly executed and the corresponding key domain has been switched, the domain manager can finish loading the operating system and request additional resources (storage page, IO Resources, etc.). Save and restore storage operations (for example, involving VM control structures, control registers, etc.) remain in the key domain, are performed directly by the storage encryption engine hardware, and are not exposed to the cloud service provider. 
Because the domain manager image is derived from the cloud service provider's software, once verified, the executing domain manager will obey storage manager commands and cooperate with other domain managers. In addition, like a normal VMM, the domain manager will protect the server's hardware and resources from the rest of the less privileged code (such as the operating system, applications, etc.) of the consumer domain.FIG. 7 is a diagram illustrating components of a consumer domain image (eg, a consumer VM image) according to one embodiment of the present invention. The consumer domain image 710 includes a domain manager portion 712 provided by a static provider and a portion 714 provided by a dynamic consumer. In one embodiment, the domain manager portion 712 provided by the static provider corresponds to a domain manager (VMMlet), which is a privileged code that instantiates and manages consumer virtual machines. The domain manager part 712 provided by the static provider may also issue a command to create a key domain to the hardware of the cloud provider service, and provide the consumer's encrypted key domain key for use in encrypting the newly created key domain. Memory location. The domain manager part 712 provided by the static provider may also issue a command to switch to a different key domain to the hardware of the cloud provider service, and provide the consumer's encrypted key domain key for controlling the key domain to be switched to. The virtual machine managed by the domain manager (VMMlet) can then be operated within the currently active key domain. Domain Manager (VMMlet) privileged code can be measured and verified by consumers, thereby enabling consumers to trust Domain Manager (VMMlet) privileged code as part of their trusted computing base.To establish the consumer domain image 710 in the memory of the cloud provider server, the consumer creates an encrypted domain boot image that is executed in the memory of the cloud provider server. The domain boot image may include only the basic code required: (1) to cause the cloud service provider server hardware to create a new key domain or switch to an existing key domain in the storage of the cloud service provider server; and (2) Make some baseline code operate within the key domain. For example, a domain boot image may create a new virtual machine or make an existing virtual machine access data in a memory location of a key domain established by the code portion provided in (1).The domain boot image is created by the consumer because it will appear in the specified storage location of the storage of the cloud service provider's server. For example, the consumer may use the storage location-related password to encrypt the domain boot image with the consumer's key domain key to cryptographically bind the domain boot image password to a designated storage location in the storage of the cloud service provider server. Once the encrypted domain boot image is loaded into the storage location specified by the storage location-related password, the executing encrypted domain boot image can then guide the dynamic loading of additional domain image code (such as part of the 714 code provided by the dynamic consumer) to the consumer In the domain mirror 710. 
In one embodiment, the portion 714 provided by the dynamic consumer corresponds to lower-privileged code of the consumer domain image, such as an operating system, applications, and the like.In one embodiment, the consumer's encrypted domain boot image includes at least a domain manager (VMMlet) privilege code. In at least one embodiment, the consumer's encrypted domain boot image also includes some consumer-provided code.Because the domain boot image is encrypted by the consumer using the consumer's key domain key in the consumer's own environment, the encryption static portion 712 being executed can be described as being performed "outside" the key domain. Because only the consumer knows the key domain key, the cloud service provider cannot create code, add code to the consumer's encrypted domain startup image, or modify the consumer's encrypted domain without destroying the consumer's encrypted domain startup image Start mirroring.Once the code included in the consumer's domain boot image begins to execute in the key domain on behalf of the consumer, the executing consumer's domain boot image code can take over and extend the consumer's domain image 710. Extending the consumer's domain image 710 includes, for example, dynamically adding new code to the consumer's domain image 710 (such as a dynamic consumer provided portion 714). New code and / or modifications can be added to the consumer's domain image 710 from within the key domain using a protocol determined by the consumer (e.g., the consumer's domain image 710 can only be verified after the new extension code segment is verified Extension).When the consumer's domain image 710 is written into the memory from within the key domain, the data from those memory write operations is encrypted by the memory encryption engine and fine-tuned with the memory address. Therefore, read and write operations performed from within the key domain are also location-dependent, as they are created from code executed within the key domain. This operation can be described as being performed "inside the key domain" by the memory encryption engine. In other words, cloud service provider software executing outside the key domain cannot modify or rearrange this dynamically created part of the consumer domain image.In one embodiment, a consumer domain image that has been dynamically extended can be converted into a static version of the consumer domain image. For example, when execution of a virtual machine instantiated from a consumer domain image has been suspended and is about to resume, a transition from a dynamic to a static consumer domain image may be performed. A copy of the dynamic consumption domain image may be captured when the virtual machine is suspended, the copy of the dynamic consumption domain image may be dumped to memory, and the ciphertext bound to the address from the memory may be saved. The consumer may recalculate any integrity check values associated with the memory address and recreate the consumer domain image to merge those integrity check values. When the virtual machine is to be restarted, the re-created consumer domain image may be restarted as a static consumer domain image.As described with reference to FIGS. 5 and 6, the encrypted domain boot image created by the consumer includes a consumer domain manager (VMMlet) image, which is a modified version of a domain manager (VMMlet) image provided by a cloud service provider. 
The provider-supplied Domain Manager (VMMlet) image is modified to incorporate address information related to the storage location of the designated server for the cloud service provider. The consumer domain manager image is statically bound to the designated server and the storage address of the designated server, which means that the consumer domain manager image must be installed and executed at the designated storage address of the designated server for proper operation.The cloud service provider executes the consumer's encrypted domain boot image (including the consumer's domain manager (VMMlet) image), which will cause the initial static domain manager image to be installed at the specified static storage address of the specified server. The initial static domain manager image is executed as a consumer domain manager (VMMlet) on the cloud service provider's server. A consumer domain manager (VMMlet) manages virtual machines on behalf of consumers by having the code of the consumer VM image loaded into memory and executed as a consumer domain (VM). The consumer domain (VM) performs operations on the data in the server's memory through the server's memory encryption engine. Memory footprints for consumer domain (VM) images grow and shrink dynamically as the content of consumer domain (VM) images dynamically changes.FIG. 8 is a diagram illustrating a data physical address 870 according to one embodiment of the present invention. The data physical address 870 may be used to determine keys or fine-tune, as discussed above.As described above, the unused physical address bits 874 (also known as alias bits 874) of the data physical address 870 (or, alternatively, other metadata passed through the cache) can be used to define the key domain. For example, because the physical memory installed in the system will likely be less addressable than using a 64-bit physical memory address, the unused most significant address bit 874 can be used to choose between different key domains. As mentioned above, the term "key domain" refers to a set of memory locations encrypted with a public key domain key. The unused bits of the data physical address 870 may be used, for example, to determine which key and / or trim to use when encrypting and / or decrypting the memory of a physical memory address. Based on the unused address / alias bit 874, different keys can be selected for the same data physical address 870. For example, the encryption technology XTS (XEX-based fine-tuning codebook mode with ciphertext stealing) can use unused address / alias bits 874 for fine-tuning the same physical memory location, where different address aliases can lead to different ciphertexts Even if the data is the same.The remaining bits 876 of the data physical address are used to identify the physical memory address of the location in the memory where the data is stored. 
Although two key domain addresses can be aliases to the same external memory location, when data from a physical memory location is read into the cache, the cache resolves at full addresses (for example, including full 64-bit physical memory) Address) holds the key domain address independently.Different keys can be selected based on unused address bits (for example, XTS can use alias bits for fine-tuning the same physical storage location), where different address aliases can lead to different ciphertexts, even if the data is the same.Because unused address bits 874 are aliases for the same physical address in the memory used for the key domain when there are unused address bits due to unfilled memory, the key domain selector can be set to unused The value of the address bit used. Alternatively, if data in a physical address in the memory is to be shared (ie, not limited to a specific key domain), the key domain selector can be set to zero.In one embodiment, the "k-bit" field 872 represents one bit of the data physical address 870, in this case, the most significant bit of the data physical address 870. The k bits can be set by the domain manager (VMMlet) or virtual machine (VM) in the page table or extended page table to indicate whether the data generated by the memory access should be encrypted with the corresponding key domain key. When k bits = 0, k bits are said to be disabled and the data generated by the memory access is not encrypted by the key domain key (although it is possible to use a shared key to encrypt the data). When k bits = 1, k bits are said to be enabled, and the result of the memory access is encrypted with a key domain key. The k-bit field 872 can also be used to specify a range of memory that is shared and does not require key domain encryption. In alternative embodiments, k-bits can be additional metadata associated with the cache line, and are carried by the cache rather than by the components of the physical address of the data.In the case when the system has sufficient installed memory such that all address bits of the data physical address 870 (except one k-bit 872) are used, the key-domain address consumes the total filled memory when the k-bits are true / enabled (The physical address bits of the key domain). When the k bits are off / disabled, the key domain selector bit 874 refers to all memory ranges, but as clear text (or shared), makes all filled memories addressable as shared memory.FIG. 9 is a diagram illustrating a virtual-to-physical memory mapping according to one embodiment of the present invention. Today, many computer systems use virtual memory systems to manage memory and allocate memory to various processes running within the system. Virtual memory allows each process running on the system to operate as if it has control over the full address range provided by the control system. The operating system (OS) maps the virtual address space used for each process to the actual physical address space used for the system. The mapping from physical addresses to virtual addresses is usually achieved by using page tables.The term "address space" as used herein refers to a set of addresses in memory corresponding to a given process or virtual machine (VM), and the "address space identifier (ASID)" can be used to identify one or more addresses associated with the ASID Any number, code, or other token of space.FIG. 
9 shows the case where there is no alias; that is, sufficient memory is available in the system such that the key field selector address bit 974 is used in conjunction with page address 976 and cache line selector 978 to select the referenced by data line physical address 975 The actual physical memory location. Here, each individual key domain will be located within a non-overlapping range of the physical memory 920.FIG. 9 illustrates a virtual address to physical address mapping according to an embodiment of the present invention. A physical address 924 within a physical page 822 in the physical memory 920 may be addressed using a virtual address 900. As shown, the virtual address 900 includes multiple fields to index the multi-level paging structure 960 to access the physical address 924, which addresses a specific physical page 922 within the physical memory 920. Note that the multi-level paging structure 960 is just one example of a multi-level paging structure for accessing physical memory locations. Although the multi-level paging structure 960 is described with reference to 64-bit virtual addresses, different page table structures can be used for 32-bit virtual addresses, physical address extension (PAE) extended mode addresses, or other types of virtual addresses.In the virtual address 900, an offset field 902 (such as bits 0-11 of a 64-bit address) is used to address a physical address 924 (as shown by a pointer 903) within a physical page 922 of the physical memory 920. A page table entry field 904 (titled "table", such as bits 12-20 of a 64-bit address) addresses a page table entry 932 in the page table 930 (as shown by pointer 962c). Page directory entry 906 (titled "Directory", such as bits 21-29 of a 64-bit address) addresses page directory entry 942 in page directory 640 (as shown by pointer 962b). A page directory pointer 909 (titled "PDP", such as bits 30-38 of a 64-bit address) addresses a page directory pointer entry 952 (as shown by pointer 962a) in the page directory pointer table (PDPT) 950. The base address of the OS paging structure 960 can be accessed using a pointer 961 in a control register such as CR3. In this way, a 64-bit linear address can be used to implement a multi-level page 9 structure to access a physical address.FIG. 9 also shows the components of the data physical address 970 corresponding to the physical address 924 of the physical page 922 of the physical memory 920. The "k-bit" field 972 represents one bit of the data physical address 970, in this case, the most significant bit of the data physical address 970. The k bits can be set by the domain manager (VMMlet) or virtual machine (VM) in the page table or extended page table to indicate whether the data generated by the memory access should be encrypted with the corresponding key domain key. When k bits = 0, k bits are said to be disabled and the data generated by the memory access is not encrypted by the key domain key (although it is possible to use a shared key to encrypt the data). When k bits = 1, k bits are said to be enabled, and the result of the memory access is encrypted with a key domain key. The k-bit field 772 can also be used to specify a memory range that is shared and does not require key domain encryption. 
In alternative embodiments, k-bits can be additional metadata associated with the cache line, and are carried by the cache rather than by the components of the physical address of the data.In the data physical address 970, the "Unused Address Bits: Key Domain Selector" field 974 may represent a set of unused address bits used to distinguish the key domain. If unused address bits in two data physical addresses have different values, they are aliases for the same physical address in memory. A "page address" field 976 indicates the address of a physical page 922 in the physical memory 920. A "cache line selector" field 978 indicates a cache line within a page referenced by the "page address" field 976. The "Page Address" field 976 and the "Cache Line Selector" field 978 together form a "Data Row Physical Address" field 975, which represents the actual physical location in the physical memory 920. A "cache line byte" field 979 includes the number of bytes in the cache line.Referring now to FIG. 10, another virtual address to physical address mapping is shown in accordance with an embodiment of the present invention. As shown in FIG. 10, the aliased client physical address 1014 within the aliased client physical page 1012 in the aliased physical storage 1010 may be addressed using the virtual address 1000. As shown, the virtual address 1000 includes multiple fields to index the multi-level paging structure 1060 to access the aliased client physical address 1014, and the physical address 924 addresses a specific page 1022 within the physical memory 1020. Note that the multi-level paging structure 1060 is just one example of a multi-level paging structure for accessing physical memory locations. Although the multi-level paging structure 1060 is described with reference to 64-bit virtual addresses, different page table structures can be used for 32-bit virtual addresses, physical address extension (PAE) extended mode addresses, or other types of virtual addresses.The aliased physical storage 1010 also includes an aliased customer physical page 1016, where page 1016 represents a second range of the aliased customer physical storage 610 aliased to the same physical storage location 1022.In virtual address 1000, an offset field 1002 (such as bits 0-11 of a 64-bit address) is used to address the aliased client physical address 1014 (as indicated by pointer 1003) within the aliased client page 1012 of the aliased physical memory 1010示). A page table entry field 1004 (titled "table", such as bits 12-20 of a 64-bit address) addresses the page table entry 1032 in the page table 1030 (as shown by pointer 1062c). Page directory entry 1006 (titled "Directory", such as bits 21-29 of a 64-bit address) addresses page directory entry 1042 in page directory 640 (as shown by pointer 1062b). A page directory pointer 1008 (titled "PDP", such as bits 30-38 of a 64-bit address) addresses a page directory pointer entry 1052 (as shown by pointer 1062a) in the page directory pointer table (PDPT) 1050. The base address of the OS paging structure 1060 can be accessed using a pointer 1061 in a control register such as CR3. In this way, 64-bit linear addresses can be used to implement a multi-level paging structure to access physical addresses.FIG. 11 is a diagram illustrating initial steps performed by a cloud service provider to provide a domain image to a consumer according to one embodiment of the present invention.In the example shown in FIG. 
11, the memory manager 1140 of the cloud service provider server including the hardware 1110 allocates space 1114 in the memory 1112 for the domain image 1122 and notifies the requesting consumer of the memory location-related address ("Repair Variables "). The memory location related address ("repair variable") information may include, in particular, the physical address of the page in the memory (such as the physical address of the page constituting the space 1114), the physical address of the page table, control register information, interrupt descriptor table register information, etc. ). As an alternative embodiment, the cloud service provider can create a domain image that the consumer can re-verify as being correct. Specifically, the part of the domain mirroring that needs to be changed is the physical storage page address in the page table, as shown in Figure 9, page table entry 832. The page table entry 932 points to a physical page 922 in the physical memory 920. Domain mirroring can be thought of as a series of pages (for example, 4K bytes each), where each page is given a physical page address (its location in memory). Mirror verification then includes checking that the virtual-to-physical mapping through the page table is correct given the content of the page including the domain mirror.FIG. 12 is a diagram illustrating a message that provides a domain manager image (such as the VMMlet image 1122 of FIG. 11) to a consumer between a consumer 1201 and a storage manager 1240 of a cloud service provider according to one embodiment of the present invention .In response to the consumer's request for the service, the software of the cloud service provider's server (ie, the memory manager 1240) is configured to provide the consumer with a domain manager image (such as the VMMlet image 1122 of FIG. 11). The memory manager 1240 also sends to the consumer address information related to the memory location of the domain manager image, which is also referred to herein as repair variable information. Consumers verify that the domain manager image is valid, or use a third party to verify that the domain manager image is valid.As described with reference to FIG. 11, after determining that the domain manager (VMMlet) image is valid, the consumer uses the memory location-related address information identifying the memory location provided by the cloud service provider as the repair variable information to modify the information provided by the cloud service provider. Provides a verified domain manager image to create a domain boot image to launch the domain manager (VMMlet). Alternatively, the domain manager image may be "repaired" by the cloud service provider so that the domain manager image is ready to run on the allocated storage location.In one embodiment, the consumer can also add the consumer's own components to the domain boot image, such as the consumer's secret key for secure communication. Having a method for secure communication allows the consumer's basic domain-initiated image to securely retrieve the rest of the consumer's domain (VM) image from the consumer using the consumer's secret key. Consumers can also include consumers' own operating systems, applications, etc. in the domain boot image.Finally, when the consumer's domain boot image includes any consumer-provided components, the consumer encrypts the domain boot image. The "repair" domain manager (VMMlet) image and creating an encrypted domain boot image are further described with reference to FIG. 13.FIG. 
13 is a diagram illustrating messages between components of a cloud service environment for encrypting a domain boot image and establishing a key domain according to one embodiment of the present invention. As described above, the consumer 1301 modifies the verified domain manager image provided by the cloud service provider to create a domain startup image for the domain manager (VMMlet). The domain boot image is then encrypted using the "fine-tuning" password associated with the memory location and the consumer's key domain key.The consumer 1301 may also use the key domain key to calculate an integrity check value (ICV, such as a key-controlled hash message authentication code (HMAC) value) for the encryption domain boot image. The ICV can be calculated as a position-dependent value and used to verify the content and positioning of the associated memory location of the encrypted domain boot image.The consumer 1301 requests the cloud service provider storage manager 1340 to identify a server in the cloud service provider network that provides key domain management functionality. The cloud service provider storage manager 1340 (in this example, from a server with a CPU 1311) obtains a server certificate of a server that supports the key domain, and provides the server certificate to the consumer 1301. The consumer 1301 verifies that the server certificate is signed by an authority that the server identified by the certification provides key domain management functionality.The consumer 1301 encrypts the consumer's key domain key with the public key of the key domain-supporting server corresponding to the certificate of the key domain-supporting server of the cloud service provider. The consumer 1301 sends the encrypted key domain key, encrypted boot image, and (optional) integrity check value (ICV) to the cloud service provider storage manager 1340, and the cloud service provider storage manager 1340 sends support secrets The CPU 1311 of the server of the key domain provides a Create Key Domain (CreateKD) command. In one embodiment, the cloud service provider memory manager 1340 identifies the key domain address selector to be used for the new key domain, and provides the key address domain selector to the CPU 1311 of the server supporting the key domain. The CPU 1311 of the server supporting the key domain creates and initializes the key domain. Initializing the key domain may include dumping the cache of any previous key domain (identified by the previous key domain address selector), and dump clearing the cache of the translation lookaside for the address mapping of the previous key domain Device. As an alternative to performing the initialization function as part of the key domain creation instruction, the CPU 1311 of the server supporting the key domain may execute the initialization key domain (InitKD) instruction to dump the clear cache and translation lookaside buffer. The CPU 1311 of the server supporting the key domain can also provide the encrypted key domain key and identification new secret to the memory encryption engine 1315 (shown as the total memory encryption engine with integrity in FIG. 13 and designated as TMEi 1315). Key domain address selector for key domains, although alternative embodiments use a memory encryption engine (MEE).14 is a diagram illustrating a consumer providing an encrypted boot image for a domain manager (VMMlet) according to one embodiment of the present invention. 
As described above with reference to Figures 5, 12, and 13, the consumer uses the key domain key and the address information related to the memory location provided by the cloud service provider to encrypt the domain boot image. In one embodiment, the consumer uses page address fine-tuning and a key domain key to encrypt the domain boot image in XTS mode associated with the memory location.In FIG. 14, the consumer 1410 sends a repaired VMMlet 1462 (which is a modified version of the provider's original VMMlet 1022) to the cloud service provider's memory manager 1440 as part of the encrypted domain (VM) boot image 1460. The cloud service provider's memory manager 1440 loads the repaired VMMlet image 1462 of the encrypted VM boot image 1460 into a previously allocated memory space 1414 (such as space 1014 in FIG. 10) that has been reserved within the shared memory 1412s. Because shared memory 1412 is an unencrypted memory page (where k bits are disabled), memory manager 1440 needs to ensure that the encrypted VM boot image is fully loaded into physical memory 1412 and does not maintain cache residency. Writing the encrypted VM boot image 1460 to the physical memory 1412 can be done by either clearing the encrypted VM boot image 1460 from the cache dump (e.g., using the CLFLUSH instruction) or using uncached / write-through / non-temporal memory access achieve. These techniques for write operations ensure that the consumer's encrypted image data is written directly to the memory encryption engine through hardware 1410 and into memory 1412 (and does not maintain cache residency).15 is a diagram illustrating messages between components of a cloud service environment for loading an encrypted domain image of a consumer 1501 into a memory 1512 of a server supporting a key domain according to one embodiment of the present invention. As described above with respect to FIG. 11, the software of the cloud service provider, such as the memory manager 1540 of the server supporting the key domain, loads the consumer-encrypted domain boot image into an unencrypted memory page of the memory 1512 (where k bits Disabled). The cloud service provider's software (ie, the memory manager 1540) also writes the ICV of the encrypted domain image into the ICV table. The ICV table can be a protected range of the memory 1512 managed and protected by the memory encryption engine (TMEi) engine 1515. A write operation to a memory address range by a software component that is not part of the memory manager 1540 can be intercepted by the memory encryption engine (TMEi) 1515. The memory encryption engine (TMEi) 1515 can also configure the ICV value in the memory 1512.Similarly, the Memory Encryption Engine (TMEi) 1515 prevents software from reading ICV values from this protected memory range to prevent ICV values from being replayed by malware. Once the ICV value has been established, only the memory encryption engine (TMEi) 1515 can read the ICV value. The prevent software replay ICV value prevents playback of dynamic domain mirroring content (eg, the rest of the consumer's domain mirroring provided after the domain initiated mirroring). Static image content (eg, domain boot image) can be replayed because the ICV value of the static image content is provided by the consumer.ICV itself provides integrity checks of data and data locations (addresses), and ICV uses key domain keys or their derivatives for encrypting the data rows that ICV is examining for key control. 
(For example, HMAC uses secret keys, as does Galois / Counter Mode (GCM) and IPHash.)In one embodiment, a partial copy of the diffused cache line data is XTS encrypted with a key domain key to calculate a secure ICV. In one embodiment, the ICV table entries are encrypted with the same key domain key as the memory location they use for integrity checking. Data in the ICV and ICV-protected memory locations are encrypted with the same key domain key password to ensure that ICV and the data they protect belong to the same key domain.An address selector for the key domain is also provided in unencrypted memory (where k bits are disabled). When an ICV calculated by the consumer using the consumer's key domain key is written to the ICV table, the address also indicates a location in the ICV table that includes the ICV for the key domain. (In other words, for each data line that is mirror-written to the memory from the consumer's encrypted domain, a corresponding integrity check value is written to the ICV table of that data line).No key domain is used to write the consumer's encrypted domain boot image data to the memory 1512 (because the consumer's domain boot image data has been encrypted by the consumer). In other words, the memory manager 1540 can write the consumer's encrypted domain boot image to a shared, unencrypted location in the memory 1512 without requiring any additional encryption. (The memory manager 1540 loads the consumer's encrypted image into the memory 1512 so that when the memory encryption is turned on with the consumer's key (set k bits to 1 or "enabled"), the memory encryption engine (TMEi) 1515 will The consumer's image is properly decrypted when it is read from the memory 1512).The CPU 1511 of the server supporting the key domain obtains the address selector of the key domain from the unencrypted position in the memory 1512 and provides the address selector of the key domain to the memory encryption engine (TMEi) 1515. The memory encryption engine (TMEi) 1515 then writes the encrypted boot image to the memory 1512 of the server that supports the key domain. Similarly, the CPU 1511 of the server supporting the key domain obtains an address indicating a position in the ICV table including the ICV of the key domain. The CPU 1511 provides the location of the ICV table of the ICV including the key domain to the memory encryption engine (TMEi) 1515. The memory encryption engine (TMEi) 1515 then updates the ICV table in the memory 1512 with the ICV of the key domain. The memory manager 1540 either clears these values from the cache 1513 dump (for example, by issuing a command to execute the CLFLUSH instruction), or uses uncached / write-through / non-temporal memory access to ensure that ICV data is written directly Enter memory 1512.Updating the integrity check value (ICV) of the key domain in the memory of the server supporting the key domain is a write-only operation, so that the software of the cloud service provider (the memory manager 1540) cannot read the key domain ICV. This write-only operation prevents playback of dynamic mirrored data. Only the domain boot image ICV can be replayed in-place (the consumer knows the location because the consumer created the encrypted boot image and ICV). 
This functionality allows the provider to suspend, store, and then restart the VM and reuse the consumer's domain to boot the image without exposing additional ICVs to the cloud service provider's software (or even to the memory manager 1540), such as when The memories are those ICVs that are dynamically created when the application is updated within the current key domain.FIG. 16 is a diagram illustrating initialization of a key domain according to an embodiment of the present invention. The memory manager 1640 of the server supporting the key domain can initialize the new key domain 1650 by issuing an Initial Key Domain (InitKD) command to the domain manager (VMMlet) 1622. The IntKD command causes the CPU 1611 of the server supporting the key domain to execute the IntKD instruction, which halts the core, dumps the cache of the old key domain, and dumps all translation lookaside buffers and addresses that include the old key domain mapping A space identifier (ASID) and initializes the storage encryption engine (TMEi) of the server supporting the key domain with the new key domain key of the key domain address selector.In one embodiment, the initialization of the key domain is one of the actions performed by a Create Key Domain (CreateKD) instruction. The following reference to the CreateKD instruction can refer to the CreateKD instruction: not only the key domain is created based on the key domain key encrypted by the server's public key, but also the old key is cleared by quiescing the core and dumping Domain cache, dump clears all translation lookaside buffers and address space identifiers (ASIDs) including old key domain mappings, and initializes key domain-backed servers with the new key domain key of the key domain address selector Memory Encryption Engine (TMEi) to initialize the new domain.FIG. 17 is a flowchart of an operation method of a CPU of a server supporting a key domain in performing a key domain creation operation according to an embodiment of the present invention. In the "Receive Create Key Domain Command with Encrypted Key Domain Key" box 1710, the server CPU supporting the key domain receives the input parameters KD_Id, local key domain identifier (key domain address selector), and Encrypted_Key, create a key domain command for an encrypted key domain key. Control proceeds to "Use the server's private key to decrypt the encrypted key domain key and decrypt the optional configuration policy" box 1720, where the server's private key is used to decrypt the encrypted key domain key, which is provided to the cloud service Unknown / unexposed secret key. Optionally, the configuration policy can also be decrypted again using the server's private key, or alternatively, the hash value of the policy data can be decrypted using the server's private key. Control proceeds to "decrypted key domain key and policy valid" decision point 1730. Examples of policy data that can be evaluated include the amount of memory the server is expected to have installed, the encryption algorithm the server should use, the number of CPUs inserted, whether hardware debugging is allowed, and so on. This policy data is compared with the current configuration of the server through hardware to ensure that the server's configuration is valid as expected by the consumer before the consumer's key domain key is used. 
If the decrypted key domain key and configuration policy are invalid, control proceeds to a "return error" box 1740, where the CPU returns an error in response to the create key domain command.At the "decrypted key domain key and policy valid" decision point 1730, if the decrypted key domain key and configuration policy are valid, control proceeds to the "create new key domain" box 1750. In establishing a new key domain, the CPU of the server supporting the key domain prevents other CPUs from using the key domain identifier, or otherwise verifies that other CPUs are not currently using the key domain identifier. For the key domain identifier dump Clear the cache, clear all translation lookaside buffer address space identifiers for the key domain identifier dump, and set the current key domain identifier and key domain key in the storage encryption engine. Control then proceeds to "Assign a new key domain identifier to the ASID tag and start over" box 1760, where the new key domain identifier is assigned a new address space identifier (ASID) tag, and the creation key is issued The process of the domain command restarts. Further, if it was previously halted, re-enable all processors.FIG. 18 is a diagram illustrating verification of domain mirroring according to one embodiment of the present invention. The verification hash function 1846 of the memory manager 1840 of the server supporting the key domain uses a hash key domain (HashKD) instruction to verify that the domain image of the key domain 1850 (eg, VMMlet 1822) is correct. Some values in the domain mirror will be values for machine-specific variables / fixes (such as physical addresses used in page tables). Cloud service providers can virtually reconstruct the hash values (instead of the current address values) of these machine-specific variables. When the resulting HashKD hash value matches the expected value of the mirrored domain manager (VMMlet) provider / static portion of the hash, both the consumer and the cloud service provider agree that the domain-initiated mirroring is correct. Some other mirror locations may include the consumer's secret (key / code / data, for example in the OS part of the domain mirror). These locations can be hashed, but the hash value does not expose the memory plaintext (and thus the secret) to the cloud service provider. The hash value may have a minimum granularity, such as not less than a cache line or not less than a memory page (for example, 4KB).When a secure VM is executed for the first time, the secure VM can turn off the HashKD functionality in its key domain, because HashKD can be used to read the key domain during initialization, and provide the cloud manager with a domain manager (VMMlet) for consumption Visibility of appropriate supply. Otherwise, HashKD functionality may not be needed.FIG. 19 is a diagram illustrating messages between components of a cloud service environment for verifying a domain image according to one embodiment of the present invention. In the first message, cloud service provider software (such as the storage manager 1940 for a server that supports key domains) requests that the consumer's encrypted boot image for the domain manager (VMMlet) is installed on the storage location HashKD function performed. The CPU 1911 executes a hash key domain (HashKD) instruction, and provides a current address selector identifying the key domain to be hashed to the memory encryption engine (TMEi) 1915 via the cache 1913. 
The memory encryption engine (TMEi) 1915 reads the encrypted data rows from the memory location where the encrypted boot image is installed, and the memory encryption engine (TMEi) 1915 uses the key of the key domain identified by the address selector to decrypt the data rows. The memory encryption engine (TMEi) 1915 sends the decrypted data to the cache 1913, marking the decrypted data with an address and a key domain address selector. The CPU 1911 of the server supporting the key domain creates a hash value for decrypting the data, stores the obtained hash value in a register of the CPU 1911 or a memory location of the memory 1912, and the software of the cloud provider (i.e., memory management (1940) verifies that the hash value matches the expected hash value of the domain image initially provided to the consumer.20 is a flowchart of an operation method of a CPU of a server supporting a key domain in performing a hash key domain operation according to an embodiment of the present invention. In the "Receive Hash Key Domain Command" box 2010, the CPU of the server supporting the key domain receives the hash key domain command with the key domain identifier and the input parameters of the physical address. Control proceeds to "key domain identifier and address valid" decision point 2020, where the CPU of the server supporting the key domain determines whether the key domain identifier and physical address are valid. To make this determination, the CPU of the server supporting the key domain can verify that the physical address points to a filled memory location, and that there is a page table mapping and read permission for the physical address. The CPU of the server supporting the key domain can also verify that the key domain identifier has a corresponding key domain key installed in a memory encryption engine (TMEi) for the key domain identifier. If the key domain identifier and physical address are invalid, control proceeds to a "return error" box 2030, where the error is returned to the issuer of the hash key domain command. If the key domain identifier and physical address are valid at the "key domain identifier and address valid" decision point 2020, control proceeds to "set the key domain identifier in the physical address, set k bits, read The contents of the memory location at the physical address "box 2040. The unused bits of the physical address are set as the key domain identifier, and the k-bit value is set to 1 to indicate that encrypted data will be read from the memory location at the physical address, and the content of the memory location at the physical address is determined by The memory encryption engine (TMEi) reads using the key domain key of the key domain identified by the key domain identifier. When the content of a memory location at a physical address is read, the memory encryption engine (TMEi) uses a key domain key to decrypt the content. The memory encryption engine (TMEi) puts the decrypted content of the memory location at the physical address into the cache, and the CPU of the server supporting the key domain calculates the hash value, hash by hashing the decrypted content in the cache. Column values, such as SHA2 / 3 hash values. Control proceeds to a "return hash value of memory contents" box 2050, where the hash value is returned to the issuer of the HashKD command. The issuer of the HashKD instruction can then determine whether to switch to a verified key domain.FIG. 
21 is a diagram illustrating switching between key domains according to one embodiment of the present invention. The switch to the new key domain is initiated by the memory manager 2140 of the server supporting the key domain to switch from one key domain to another key domain. For example, a switch to a new key domain may be initiated in response to a new consumer request for a service received by a server that supports the key domain. The domain manager image has been previously provided to the consumer, and the memory manager 2140 obtains a repaired version of the domain manager image (e.g., VMMlet 2122) that includes the memory location correlation provided by the consumer via unencrypted (k-bit disabled) memory 2112 Address information, such as entry point address 2123. The memory manager 2140 issues a Switch Key Domain (SwitchKD) command to the CPU of the hardware 2110, which causes the domain manager image (VMMlet) 2122 to execute the entry point address 2123 in the memory of the server supporting the key domain, thereby establishing Key domain 2150.In one embodiment, the consumer calculates the HMAC value of the expected processor state before entering the new key domain. HMAC values for the expected processor state include: the expected value of the instruction pointer; the stack pointer; control registers (such as control registers 0, 3, and 4); and special descriptor table registers, such as GDTR, LDTR, which can be mirrored with the included domain manager The crypto domain starts mirroring properly for any MSR as well as IDTR. For example, this HMAC value will ensure proper execution within the key domain (ie, interrupts are turned off), or otherwise, no new key domain is entered and no handover between key domains occurs.The execution of the domain manager (VMMlet) inside the new key domain can be accomplished using the Switch Key Domain (SwitchKD) instruction. The CPU instruction uses a key domain key to determine the HMAC value to verify the current processor state of the processor upon entry. For example, the hash function can be calculated from the same information (including the instruction pointer, stack pointer, control register, and special descriptor table register) as the information used to calculate the HMAC value of the expected processor state. If the calculated HMAC value of the expected processor state does not match the hash value of the CPU's current processor state upon entry, the switch key domain instruction will fail. The key domain selector will remain the same and the execution of the domain manager (VMMlet) will not switch to the new key domain.In another embodiment, the control flow transmission to the new key domain is terminated on an ENDBRANCHKD instruction. The only way to change the control flow to the new key domain of the other key domain of the two is to enter the new key domain at the entry point where the next instruction to be executed is to end the branch key domain instruction. This requirement to change the key domain assures the consumer that control flow is passed through the intended entry point.With the correct execution of the Switch Key Domain (SwitchKD) instruction, the domain manager (VMMlet) is now measured and operating correctly within the key domain. 
All other functionality is provided by software to load the rest of the consumer's domain (VM) image to perform secure storage, communication, and paging (e.g., for page out, the consumer's domain (VM) image needs to be managed through the domain (VMMlet), migration, input / output (I / O), etc.22 is a diagram illustrating messages between components of a cloud service environment when executed within a key domain according to one embodiment of the present invention. In one embodiment, the consumer calculates a hashed message authentication code (HMAC) value for the expected processor state before entering the new key domain. The HMAC value of the expected processor state includes: the expected value of the instruction pointer; the stack pointer; control registers (such as control registers 0, 3, and 4); and special descriptor table registers such as GDTR, LDTR, any associated MSR, and IDTR. For example, this HMAC value will ensure that the processor state is such that proper execution of the domain manager (VMMlet) occurs within the key domain (ie, the interrupt is turned off).In the next communication, the cloud service provider software (such as the memory manager 2240 on the server supporting the key domain) issues a switch KD command to cause the CPU 2211 of the server supporting the key domain to switch the key domain. The CPU 2211 sets a key domain selector and checks whether the expected processor state HMAC value matches the HMAC value calculated for the current CPU state. If the HMAC values match, switch the key domain, dump the clear instruction pipeline, change the translation lookaside buffer (TLB) address space identifier (ASID) label for the key domain identifier, and perform domain management in the new key domain (VMMlet). When executed in a new key domain with k bits enabled, the address of the key domain selector is used to access the memory location of the key domain. When executed in the new key domain, the memory encryption engine (TMEi) 2215 can read and write key domain encrypted data from the memory 2212, and check and / or update the integrity check value (ICV) of the encrypted data. If the ICV value is consistent with the encrypted data, the encrypted data is decrypted into the cache 2213 for the address and key domain selector.23 is a flowchart of an operation method of a CPU of a server supporting a key domain in performing a key domain switching operation according to an embodiment of the present invention. At "Receive Switch Key Domain Command with Input Parameters for Key Domain Identifier, CPU Status and Expected HMAC Value" box 2310, the CPU receives the switch key domain command. The input parameters of the switch key domain command include the key domain identifier to switch to, the expected CPU state to switch to the new key domain, and the expected HMAC value of the processor state. Control proceeds to "expected HMAC value matching CPU state" decision point 2320. At decision point 2320, it is determined whether the current CPU state and / or the proposed CPU state specified as a parameter of the SwitchKD instruction matches the expected HMAC value. Some CPU state information (such as instruction pointers) is the state set by the SwitchKD instruction as a parameter. If the HMAC also matches the instruction pointer parameter, SwitchKD will set the instruction pointer register in the CPU accordingly, restart in the new key domain, and start execution at the instruction position. 
Alternatively, all CPU state values can be parameters of SwitchKD, meaning that if HMAC matches the suggested input state, SwitchKD will fill all register states to the input parameters. At decision point 2320, if the expected HMAC value of the processor state does not match the HMAC value calculated for the current CPU state or the proposed CPU state specified as a parameter to the SwitchKD instruction, then an error is returned in "(Do not switch key Domain) "box 2330, the key domain is not switched and an error is returned to the issuer of the command to switch key domains.At the "expected HMAC value matches the CPU state" decision point 2320, if the expected HMAC value of the processor state matches the HMAC value calculated for the current CPU state, then control proceeds to a "switch to new key domain" box 2340. In the "Switch to new key domain" box 2340, the dump clears the CPU pipeline and either sets the address space identifier (ASID) label for the translation lookaside buffer (TLB) to the new key domain identifier, or transfers Store clear TLB. The current key domain is set as the key domain identifier, and the CPU register is set to match the CPU state input parameter value. Control then proceeds to "Branch to execute instructions at the location indicated by the instruction pointer of the CPU state" box 2350. At block 2350, the CPU of the server supporting the key domain branches to execute the instruction at the location as indicated by the instruction pointer of the CPU state provided as an input parameter to the SwitchKD instruction. After the execution of the SwitchKD instruction is completed, the domain manager (VMMlet) will operate in the new key domain.24 is a flowchart of a method of operating a CPU of a server supporting a key domain in performing traversal of a paging structure in response to a page miss, according to one embodiment of the present invention. Control begins with "Processor traverses paging structure on page miss" box 2410. Once a page miss is encountered (where the page the CPU is trying to read or write is not found in the translation lookaside buffer), the CPU of the server that supports the key domain begins to traverse the page structure (such as the OS page structure 860 described with reference to FIG. 9 (Or the OS paging structure 960 described with reference to FIG. 9). For example, the CPU of a server that supports the key domain may begin to read control register 3 (CR3) for a pointer to the base address of the paging structure. Control then proceeds to "paging structure misconfiguration" decision point 2420, where the CPU determines whether the paging structure is configured as expected. For example, the CPU determines whether a page fault to the operating system has occurred, or whether a VMExit to a domain manager (VMMlet) has occurred. These failures are still in the same key domain as the domain (VM) that caused the failure. If the paging structure is not configured as expected, control proceeds to a "hard fault, the CPU reports an error" box 2430, where the CPU caused a hard fault and reports to the process an error that a page miss was encountered.At the "paging structure misconfiguration" decision point 2420, if the paging structure is properly configured, control proceeds to "determine the ASID label assigned to the current key domain" box 2440. The address space identifier (ASID) label assigned to the current key domain is determined, and control proceeds to "K-bit is set" decision point 2450. 
If the K bit of the ASID tag is not set, control proceeds to "Use the address to leave the TLB filled with K bits off" box 2460. At block 2460, the CPU of the server supporting the key domain causes the translation lookaside buffer (TLB) to be filled with the physical address as is. Keeping the physical address as it is allows data to be read directly from unencrypted memory without using a key domain key.At the "K bit is set" decision point 2450, if the k bit of the ASID tag is set, control proceeds to "Get the current key domain and replace the higher physical address bit with the key domain identifier" box 2470. The current key domain is determined based on the internal processor state set by the SwitchKD instruction, which sets the current key domain as the key domain identifier of the new key domain, as described with reference to Figure 23 of. The higher bits in the physical address are replaced with the key domain identifier / selector of the current key domain, where k bits (which is the highest bit in one embodiment) are enabled. Control then proceeds to "set translation lookaside buffer using address and ASID tags" box 2480, where the physical address of the current key domain (including the key domain selector and enabled k-bits, or k-bit = 1) and ASID The tag sets the translation lookaside buffer.FIG. 25 is a diagram showing the growth of a domain manager (VMMlet) according to one embodiment of the present invention. After the consumer's domain boot image has been loaded and the domain manager (VMMlet) is executing, for example, a domain manager (VMMlet) may be needed to include additional storage to load the remainder of the consumer's VM image. Once the secure domain manager (VMMlet) 2522 with the consumer secret key 2523 is running in the key domain 2550, the consumer can securely pass the rest of the consumer's VM image 2532 to the domain manager (VMMlet) 2522 . The rest of the consumer's VM image 2532 may include, for example, an operating system (s), an application (s), scripts, or other code.The secure communication channel between the consumer and the domain manager (VMMlet) 2522 can be connected via a Transport Layer Security / Secure Sockets Layer (TLS / SSL) connection to the consumer network, using the original Encrypted domain boot image consumer secret key 2523 enabled. In other words, if the original encrypted domain boot image has an operating system with an OpenSSL stack on top of the domain manager (VMMlet) and the consumer's secret key, the OpenSSL software stack can be executed to The network retrieves the rest of the consumer's VM image 2532.The operating system running on the domain manager (VMMlet) 2522 can support full-volume storage encryption, enabling the operating system to securely page through encrypted pages, files, etc. from the k-bit pass (shared) channel. The device 2540 acts as an intermediary. Once the original encrypted domain boot image is loaded into memory and is executing, the domain manager (VMMlet) 2522 may allow other software, such as the operating system, to page through additional information from the consumer using any security method desired.Adding memory pages to the rest of the consumer's VM image 2532 may cause the domain manager (VMMlet) 2522 to require additional memory allocation. In one embodiment, a domain manager (VMMlet) 2522 may grow by requesting more memory from the memory manager 2540. 
The memory manager 2540 can allocate additional memory to the domain manager (VMMlet) 2522, as shown by the "Assign Memory" action 2501. This additional memory enables consumers to perform write-only operations, such as non-temporal move (MOVNT) operations (a non-cached write combination operation used to write to memory without first reading the memory), from the consumer or Consumer authorized third party writes additional pages in the domain / VM workload image. For example, the consumer can provide the rest of the VM image 2532, including the operating system (s), application (s), scripts, or other code, via a secure connection to the domain (VMMlet) 2522.FIG. 26 is a diagram illustrating messages between components of a cloud service environment for a growing domain manager (VMMlet) according to one embodiment of the present invention. Consumer 2601 sends the remainder of the VM image to cloud service provider software, such as a memory manager 2640 for a server that supports key domains. In one embodiment, the rest of the VM image is passed from the consumer to the running domain manager (VMMlet) via a Transport Layer Security (TLS) / Secure Sockets Layer (SSL) communication session, using the consumer's secret A key, such as key 2523 of Figure 25, is passed from the consumer to the running domain mirrored TLS stack.As described above with reference to FIG. 25, the consumer's secret key is included as part of the consumer's encrypted domain boot image given to the cloud service provider. At the point in time represented by FIG. 26, the consumer's VM runs securely and self-sufficiently, running any software provided by the consumer on top of the VMMlet (similar to the operating system running on top of the VMM). The data packets are sent from the consumer 2601 via the memory manager 2640 through the shared, unencrypted memory (ie, the k-bit disabled memory 2612) to the running domain manager (VMMlet). These data packets can include software-encrypted data streams that can be decrypted and verified by the consumer's software running within a consumer's VM running on top of a domain manager (VMMlet).The cloud provider software can send the data of the rest of the VM image via the CPU 2611 and the memory encryption engine (TMEi) 2615 through the shared, unencrypted memory (ie, with k-bit disabled memory 2612) on behalf of the consumer. The rest of the data in the VM image is shown flowing from the CPU 2611 through the memory encryption engine (TMEi) 2615 to the memory 2612, as illustrated by two "write data" actions. When the rest of the VM image data is provided to the cloud service provider, the cloud service provider software can cause the CPU 2611 to execute SwitchKD instructions to switch to the running domain manager (VMMlet) for the consumer Key domain. Alternatively, the CPU 2611 may provide control to a running domain manager (VMMlet) of a consumer who is running on another thread or on another CPU. These actions performed by the CPU 2611 are shown by "SwitchKD (or KD running on another CPU / thread)" of FIG. 26.A running domain manager (VMMlet) copies data (including the rest of the VM image) from unencrypted memory to encrypted memory, which is part of the key domain of the consuming VM. As shown by the "Read Data" action, the memory encryption engine (TMEi) 2615 reads data from unencrypted memory. In the action of "reading the data of the address with the shared! 
K KD selector", the domain manager (VMMlet) running on the CPU 2611 reads data from the unencrypted memory location identified by the key domain address selector. The key domain address selector is provided with data in unencrypted memory.A running domain manager (VMMlet) can process data, decrypt data with software, perform integrity checks, and more. For example, a running domain manager (VMMlet) may request the storage encryption engine (TMEi) 2615 to write encrypted data to a memory address using a key domain address selector provided in unencrypted storage. As shown in Figure 26, "k-bit open memory access sets the address as a KD selector; use the MOVNT instruction when writing a new memory address for the first time" action. The CPU 2611 writes the encrypted data and the associated integrity check value to the memory 2612. The address specified in the key domain identifier / selector in (where k bits are enabled, indicating that the data is encrypted with the key domain key before writing the data to memory 2612).Writing data to a memory address establishes the "owner" of the memory location. When switching between key domains, the owner of the memory location changes from the owner of one key domain to the owner of another key domain. When the key domain changes, the corresponding key domain key used to encrypt data stored in the memory location belonging to the key domain changes accordingly.When reading data from a memory location belonging to a key domain, the "current" key domain key is used. After switching key domain instructions, a new "current" key domain key must be established. As mentioned above, the "current" key domain key is established when data is written to a memory location, thereby establishing a new owner of the memory location. When reading data from a memory location, the read operation uses the "current" key domain key. If data is read from a memory location before the owner of the new key domain has written to the memory location, the read operation will use the current key domain key, which has not been changed to reflect the key domain's New owner. The read operation will not be successful because the current integrity check value of the memory location will belong to the previous key domain. The integrity check will fail because the reader cannot read data belonging to another key domain.To alleviate this problem, when establishing a new key domain, the owner of the new key domain writes new data to a memory location within the key domain without first trying to read the memory location. At the time of the write operation, a new integrity check value (ICV) is calculated for the new key domain; thus, the owner of the new key domain will now own the memory content (and be able to read and write the memory location without completeness) Sexual failure).In one embodiment, the MOVNT instruction is used to perform a first write operation on a new memory address. The memory encryption engine (TMEi) writes the encrypted data and ICV to the memory, thereby completing the process of copying the data from the unencrypted memory to the encrypted memory, which is part of the key domain of the consumer VM.The MOVENT instruction is a write combination operation, which means that the MOVENT instruction does not require a read operation to fetch the memory contents because the current contents are not required. The MOVNT instruction can bypass the cache and write directly to memory. 
As an alternative to using the MOVNT instruction, a running domain manager (VMMlet) can use uncached write operations to copy data from unencrypted memory to encrypted memory, which is part of the key domain of the consumer VM. By writing to a memory address without first reading data from that memory address, a new integrity check value (ICV) is created (via write for ownership).Once the full consumer domain (VM) image is installed, the domain (VM) will operate as a normal VM, using secret keys to establish secure communications with consumers, consumer-authorized third parties, and other authorized VMs. Secure storage is achieved by encrypting full volumes and / or files in the file system with a consumer secret key. Secure communication is achieved via IPSec / TLS and consumer secret keys. Proof is achieved using the consumer's secret key (PKI, etc.). The secure migration of domains (VMs) between servers in the cloud service provider's infrastructure can be achieved by using consumer secret keys to encrypt VM mirror pages (and calculate the integrity check values of those VM mirror pages). The VM image page can then be sent with the consumer's domain manager (VMMlet) page to other servers in the cloud service provider's infrastructure to securely migrate the consumer's VM image from one server to another.FIG. 27 is a diagram showing messages between components of a cloud service provider environment for a running domain manager (VMMlet) to request more memory pages from the cloud manager's memory manager software. In this environment, multiple CPUs share the memory 2712 simultaneously. Cloud provider software (e.g., memory manager) 2740 runs on the first CPU1 of the server that supports the key domain. The virtual machine 2730 runs on the second CPU2 of the server supporting the key domain. The VM 2730 requests additional storage, as shown in the "Request More Storage" action between the VM 2730 and the memory manager 2740. (In fact, the operating system that is part of the VM 2730 running on top of the VMMlet may require more memory. The operating system can cause VMExit to exit the VM 2730, thereby invoking the host VMMlet, which then sends it to the cloud provider's storage manager 2740 requested more memory). A domain manager (VMMlet) running on CPU2 sends a write request on behalf of the consumer's VM2730 via shared unencrypted memory (where k bits are disabled), such as "!" Between VM 2730 and memory encryption engine (TMEi) 2715 k request message "action. The memory encryption engine (TMEi) 2715 passes the memory request to the shared memory 2712 without processing the request because k bits are disabled. The memory manager 2740 on CPU1 reads a request for additional memory written by VM 2730 on CPU2, as indicated by the dashed line from memory 2712 to memory manager 2740 on CPU1. Domain mirroring running on CPU2 (ie, VM 2730) is waiting for a response, such as an interrupt (IPI), as shown in the "Waiting for a response, such as an interrupt (IPI)" action of VM 2730. When free memory locations are provided by the memory manager software 2740 of the cloud service provider, the memory manager 2740 on CPU1 writes the response data to the shared memory 2712 (where k bits are disabled), such as from the memory manager 2740 on CPU1 The dotted line to the shared memory 2712 is shown. VM 2730 on CPU2 reads response data from shared memory 2712 (where k bits are disabled), as shown in the "Read Response Data" action between shared memory 2712 and memory encryption engine (TMEi) 2715. 
The memory encryption engine (TMEi) 2715 passes the response data to the VM 2730 on CPU2, as shown by the "! K response message" action from TMEi 2715 to VM 2730 on CPU2. The VM 2730 updates the page table in the VM's key domain as shown by the "Update Page Table in the Key Domain of the VM" action between VM 2730 and Memory Encryption Engine (TMEi) 2715. Memory Encryption Engine (TMEi) 2715 writes encrypted data and integrity check values to memory 2712 (where k bits are enabled), such as the "Write Encrypted Data and ICV" between Memory Encryption Engine (TMEi) 2715 and shared memory 2712 "As shown in the action. The domain manager (VMMlet) that hosts VM2730 causes CPU2 to execute MOVENT instructions to write data to the newly allocated memory in the key domain of the VM, such as by "MOVENT to" between VM 2730 and Memory Encryption Engine (TMEi) 2715 "Allocated memory in the key domain of the VM" action. In response, the memory encryption engine (TMEi) 2715 writes the encrypted data and ICV to the newly allocated encrypted memory.FIG. 28 is a diagram showing messages between components of a cloud service environment showing additional VM pages requested while scheduling a VM on a single CPU. The cloud service provider's memory manager 2840 determines the scheduling scheme as to which VM is currently executing. A domain manager (VMMlets) running on the cloud service provider's CPU / core receives timer events and gives time to other VMs based on the memory manager command queue (k-bit disabled shared memory area). The switch key domain (SwitchKD) operation is used to switch to another domain manager (VMMlet).Referring to FIG. 28, the VM 2830 is preparing to request a message from the memory manager 2840 for additional memory in the "preparing a message to the memory manager" action, and in the "cache" between the VM 2830 and the memory encryption engine (TMEi) 2815 In the "! k request" action, the message is placed in the cache 2813 to be read into the unencrypted (k-bit disabled) memory 2812. The memory encryption engine (TMEi) 2815 writes the requested data to the memory 2812 (where k bits are disabled) in a "write request data" action between the memory encryption engine (TMEi) 2815 and the memory 2812. In the "Save Processor State" action, VM 2830 saves the processor state and sets the saved VM processor state in cache 2813 to "KD k-bit on VM is saved state". The VM's saved state processor state is written to the VM's key domain encryption (k-bit enabled) memory. In the "Write Encrypted Data and ICV" action between the Memory Encryption Engine (TMEi) 2815 and the Memory 2812, the Memory Encryption Engine (TMEi) 2815 uses the saved VM processor state as encrypted data to be written to the VM's password with ICV. Key field encryption (k-bit enabled) memory 2812. Saving the processor state of the VM enables the domain manager (VMMlet) to restart execution of the VM at a later time using the saved processor state. In one embodiment, after the processor state has been saved, the domain manager (VMMlet) clears the registers so that the VM's secret is not available after switching to another key domain.To enable the memory manager 2840 to allocate additional memory for the VM 2830, the key domain is switched from the VM 2830 key domain to the memory manager 2840 key domain. In the "SwitchKD to Provider" action between the VM 2830 and the memory manager 2840, the VM 2830 sends a switch key domain instruction to the cloud service provider. 
In the "Restore Processor State" action, the memory manager 2840 starts to restore the processor state associated with the key domain of the memory manager 2840. In the "read encrypted data and ICV" action between the memory 2812 and the memory encryption engine (TMEi) 2815, the encrypted data and integrity check value of the key domain of the memory manager 2830 are read from the memory 2812. The memory encryption engine (TMEi) 2815 decrypts the data of the current key domain and sends the decrypted data to the cache (assuming the corresponding ICV value is correct). In the "Restore State from KD of Memory Manager" action between the Memory Encryption Engine (TMEi) 2815 and the Memory Manager 2840, the Memory Manager 2840 restores the processor state from the Memory Manager 2840 key domain.In the "read request data" action between the memory 2812 and the memory encryption engine (TMEi) 2815, the memory encryption engine (TMEi) 2815 reads additional memory request data from the k-bit disabled command queue of the memory 2812 to the cache 2813. In the "! K data request in cache" action between the memory encryption engine (TMEi) 2815 and the memory manager 2840, the memory manager 2840 reads the additional memory data request stored in the cache 2813.In the "! K Provide Free Memory Location" action between the memory manager 2840 and the memory encryption engine (TMEi) 2815, the memory manager 2840 sends a message via the unencrypted (k-bit disabled) memory command queue to encrypt the memory The engine (TMEi) 2815 provides free memory locations. In the "write response data" action between the memory encryption engine (TMEi) 2815 and the memory 2812, the memory encryption engine (TMEi) 2815 writes the response data to the memory 2812, including the address of the free memory location allocated for the VM 2830. In response to the request from the VM 2830, the allocation of additional memory has been completed. In the action of "save state in KD of the memory manager", the memory management engine 2840 saves the current processor state in the key domain of the controller. In the "Write Encrypted Data and ICV" action between the Memory Encryption Engine (TMEi) 2815 and the Memory 2812, the Memory Encryption Engine (TMEi) 2815 writes the encrypted data (the saved processor state) and the integrity check value to the memory The memory manager 2840 key domain in 2812. The memory manager 2840 then performs a Switch Key Domain (SwitchKD) operation to switch back to the key domain of the VM.In response to switching to the VM 2830 key domain, in the "Restore Processor State" action, the VM 2830 begins to restore the processor state saved by the VM 2830. In the "Read Encrypted Data and ICV" action between the Memory Encryption Engine (TMEi) 2815 and the Memory 2812, the Memory Encryption Engine (TMEi) 2815 reads the encrypted data (including processor status) of the VM 2830 key domain and the complete Sex check value. In the "Restore State from KD of VM" action between Memory Encryption Engine (TMEi) 2815 and VM 2830, VM 2830 restores the saved processor state from its VM 2830 key domain.While the VM 2830 was previously executing before switching to the memory manager 2840 key domain, the VM 2830 has requested additional storage. In the "Read Response Data" action between the Memory Encryption Engine (TMEi) 2815 and the Memory 2812, the Memory Encryption Engine (TMEi) 2815 reads the response data for requests for additional memory and stores the VM 2830 in the cache 2813 Provide response data. 
In the "Update Page Table in KD of VM" action between VM 2830 and Memory Encryption Engine (TMEi) 2815, VM 2830 updates the page table in the VM 2830 key domain to reflect the newly allocated memory location. In the "write encrypted data and ICV" action between the memory encryption engine (TMEi) 2815 and the memory 2812, the memory encryption engine (TMEi) 2815 writes the encrypted data (updated page table) and the integrity check value to the memory 2812 .To establish ownership of the newly allocated memory, VM 2830 then performs a "MOVNT to newly allocated memory in VM's KD" action between VM 2830 and Memory Encryption Engine (TMEi) 2815 to the VM's key domain The newly allocated memory performs a MOVNT operation (or other write operation that does not read the contents of the memory location before writing to the memory location). MOVNT operation establishes VM 2830 as the owner of the newly allocated storage. In the "write encrypted data and ICV" action between the memory encryption engine (TMEi) 2815 and the memory 2812, the memory encryption engine (TMEi) 2815 writes the encrypted data and ICV to the memory 2812. As part of this write operation, the memory encryption engine (TMEi) 2815 calculates a new integrity check value for the newly allocated memory in the VM 2830 key domain. The new integrity check value will ensure that the VM2830 key domain key can be used to decrypt the contents of the newly allocated memory.FIG. 29 is a diagram showing a running domain manager (VMMlet) 2922. Prior to running the domain manager (VMMlet) 2922, the memory manager 2940 of the server supporting the key domain validated a hash of the processor state before performing domain boot mirroring for the domain manager (VMMlet). Once the processor status is verified, a domain boot image is executed to run the domain manager (VMMlet).The memory manager 2940 issues a command to a running domain manager (VMMlet) 2922 via an unencrypted (k-bit disabled) memory 2912. Similarly, the hardware 2910 of the server supporting the key domain issues a direct memory access (DMA) request to a running domain manager (VMMlet) via unencrypted (k-bit disabled) memory 2912. In response to receiving these commands or DMA requests, a domain manager (VMMlet) 2922 interacts with server hardware 2910 that supports key domains to set and / or access register values, handle interrupts, perform VM entry and exit, and so on.The memory manager 2940 determines the scheduling scheme regarding the VM currently executing; in FIG. 29, the VM currently executing is VM2 2930, and the associated key domain is key domain 2950. SwitchKD operation is used to switch to another domain (VM).Once the dynamic portion of the VM image is loaded (ie, the remainder of the VM image that is not included in the domain boot image), a dynamic entry point can be created locally within the VM. For example, in response to a Switch Key Domain (SwitchKD) instruction, a new Key Controlled Hash Message Authentication Code (HMAC) may be calculated based on the key domain key.Interrupts and VM exit instructions are delivered to the current domain manager (VMMlet) running on the CPU / core of the server supporting the key domain. The running domain manager (VMMlet) determines whether the interrupt / asynchronous event is for the currently running domain manager (VMMlet) or another domain manager (VMMlet). 
If the interrupt / asynchronous event is for another domain manager (VMMlet), the domain manager (VMMlet) will dispatch the correct domain manager (VMMlet) or notify the storage manager.Regarding resource management, paging is implemented by software, not by hardware paging mechanism. Domain Manager (VMMlet) uses software to encrypt pages (including integrity metadata) (for example, using Intel® AES New Instructions (AESNI) to accelerate AES encryption), updates page tables and extended page tables, and disables memory by k bits Send encrypted pages for storage or migration.Regarding input / output operations, direct assignment or virtualized device models can be used. The k-bit designated unencrypted memory area for DMA and memory mapped input / output (MMIO) is unencrypted. Although direct assignment of DMA is possible, the MMIO / PCIe device space must be k-bit disabled (unencrypted) memory. The processor must ensure that key domain transactions are allowed only for dynamic random access memory (DRAM) and not for device space.FIG. 30 is a diagram showing a plurality of virtual machines within a key domain managed by a domain manager (VMMlet) and a second key domain managed by another type of domain manager (OSlet).Because the domain manager (VMMlet) is a full-featured VMM, the domain manager (VMMlet) can host multiple guest operating systems (OS) within its key domain. VM2 3033 and VM3 3034 are shown running within the key domain KD1 30502 of the domain manager (VMMlet) 3022, and process 3031 is shown running within the key domain KD2 30501 of the OSlet 3060. As in the case of switching between domain managers (VMMlets), the memory manager 3040 issues a Switch Key Domain (SwitchKD) command to switch between domain types; that is, issues a SwitchKD command to switch between the domain manager (VMMlet) and Switch between Domain Managers (OSlets).Consumers want to be assured that public cloud service providers cannot access their workloads, even if authorized by a government order to do so. With the features provided by the secure public cloud environment described herein, cloud service providers, field administrators, or technicians cannot access secure VM data even if the VMM itself is rewritten (because consumers can measure their entire TCB).The above embodiments have been described with respect to a domain manager (VMMlet) that manages virtual machines, but the present invention is not limited thereto. The same model can support containers; although there is no corresponding VMM, the OS kernel is equivalent. Each container image in each key domain will have a collaborative kernel component (referred to herein as a domain manager (OSlet)) measured by the provider. A domain manager (OSlet) responds to memory manager commands, interrupts, scheduling, resource management, and the like in a manner similar to a domain manager (VMMlet).31A and 31B illustrate determining an integrity row position and a slot based on a physical memory address as a hardware function of a memory encryption engine. Unused address bits are passed through the cache, but they are not used because they correspond to unfilled physical memory. Unused bits are used to encode the key field (KD) selector information in the address. 
Different keys can be selected based on unused address bits for data line and memory location related encryption of corresponding integrity check values.According to an embodiment, the physical memory address 3100 may be used to determine a key or trim as described above, and / or an integrity check line 3112 and a slot 3114 (for integrity values associated with data lines 3116 (3116a-3116h)) 3114a-3114h). The physical memory address 3100 may include multiple address bits, which may be partitioned into multiple sectors. The segment of data physical memory address 3100 can be identified as data line byte 3102, including the integrity line slot selector 3110 and integrity line index 3108 (e.g., offset to integrity check line 3112). The address 3104 (the actual location in the memory of the data) and the unused address bit 3106 (eg, the alias bit) of an alias to the same physical memory. The unused address bit 3106 passes the cache but is not used due to unfilled external memory and can be used to encode the alias information in data physical memory address 3100. Accordingly, different keys can be selected based on unused address bits. For example, the encryption technology XTS (XEX-based (XOR-based encryption XOR) trimming codebook mode with ciphertext stealing) can use alias bits for fine-tuning the same physical memory location, where different addresses Aliases can lead to different ciphertexts, even if the data is the same.A memory selector (for example, the memory encryption engine 415 of FIG. 4) can read the data row physical address 3104 and identify the corresponding integrity row address 3112 and the integrity row slot (for example, 3114a-3114h) for verification The validity of data line bytes 3102 and / or integrity check values stored in integrity line slots (eg, 3114a-3114h). The value in alias bit 3106 can be stored in the integrity row slot (for example, 3114a-3114h) as ciphertext for decrypting data line bytes, and / or with the integrity row slot identified by alias bit 3106 (For example, 3114a-3114h) The values read are compared to verify the data line bytes.It is worth noting that not all bits of the memory are addressable because, for example, the actual memory deployed in a computing platform may be much smaller than the maximum possible memory that provides the largest amount of address space for it. For example, not all 64-bit (64-bit systems) physical memory is addressable (e.g., occupied with enough DIMMs). Thus, otherwise unused bits of the physical memory address 3100 can be used to determine which key and / or trim to use, for example, when encrypting and / or decrypting memory for a particular data line.The key field and / or trim field for the physical memory address 3100 can be of any size. In the illustrated example, the value selector may use unused address bits 3106 to derive keys and / or fine-tuning for the same physical memory address 3100. For example, the software value selector may select from 16 keys (and / or 16 trims) defined by the four most significant bits of the unused address bit 3106. In one example, setting the first bit to 0 (0000) or 1 (0001) can be used to derive the key fine-tuning (for example, if the bit is set to 1, encrypt with 1 and if it is set to 0, use 0 Encryption) or fine-tuning (for example, if the bit is set to 1, fine-tuning is used in the address, if set to 0, fine-tuning is used in the address, etc.). Thus, different keys and / or fine-tuning can be used. 
In this case, the first integrity check will fail when the data is decrypted with the wrong key and / or the wrong fine-tuning, and / or the second integrity will be checked when the integrity value of the relatively inappropriately decrypted data is checked. The check will fail.In addition, the integrity check line and / or slot may be determined and selected from the physical memory address 3100. For example, the integrity row selector may select the integrity check row from the integrity row index section 3108 of the physical memory address 3100, and / or the slot selector may select the integrity row slot selector region from the physical memory address 3100 Select the slot in segment 3110.As shown in FIG. 31B, data rows may be stored in a data memory address space, and integrity values (e.g., ICV, copy, etc.) may be stored in an integrity data address space. For example, a data line in the data memory address space starts at address 0, and an integrity check line in the integrity data memory address space starts at 1000 cache lines away from the data line at address 1000.Although various strategies may be implemented in an embodiment to map between each data line and each integrity check line (and / or each of its slots), using a data line address may be a suitable and complete way to determine and select An effective way to check rows and proper slots. For example, a lookup table may not be needed to determine the appropriate integrity check rows and / or slots for integrity values. In this regard, the value defined by the middle bit of each data line 3116a-3116h can be mapped to the integrity check line 3112 (determined by the arrow from the data line 3116a-3116h with the addresses 0-7 to the integrity check line 3112 (Indicated), and the value defined by the least significant bit of each data line 3116a-3116h can be mapped to the appropriate slot 3114a-3114h (as indicated by the position of the arrow) to accommodate the specific integrity value of each data line 3116a-3116h Instructions).In general, selecting an appropriate integrity check line and an appropriate slot can be based on a function such as (D-Dstart) / 8 + Istart, where the start of the data memory area is subtracted from the address D of the data line to be accessed Address Dstart, where Istart is the beginning of the integrity value memory address space, and an integer division by 8 can be performed by shifting the address offset by 3 to the right (or selecting the top bit minus the first 3 bits). In addition, once the appropriate integrity check line is extracted, the offset for the appropriate slot can be determined by (D-Dstart)% 8, where the modulo operation can select the least significant 3 bits of the address. It should be understood that although 3 bits can be used to select from the 8 slots on the integrity check line, the integrity check line can be different in size (e.g., half the size) so that 4 bits can be used from each Choose from 16 slots in the integrity check row to save integrity value overhead, and so on.Intermediate bits and / or least significant bits can also be used as an index to an array of assigned locations stored in privileged / secure memory locations to identify the mapping. There can also be an implicit mapping, where the first slot 3114a of the first integrity check line 3112 can be automatically selected for the data line 3116a with address 0, and the first integrity check line can be automatically selected for the data line 3116b with address 1. 3112's second slot 3114b, and so on. 
Any function, mapping, and / or assignment can be used so that data rows 3116a-3116h with addresses 0-7 can be mapped anywhere in the integrity data address space, and can be mapped to any in integrity check line 3112 Place, wait.A secure public cloud environment is implemented, with no additional performance overhead beyond the storage encryption engine. In one embodiment, the memory encryption engine is provided as a memory encryption engine (MEE) as described in US Patent No. 8,819,455 "Parallelized CounterTree Walk for Low Overhead Memory Replay Protection". In another embodiment, a memory encryption engine with integrity is provided, as described in US Patent 9,213,653 "Memory Integrity". In one implementation of a secure public cloud environment with an integrated total memory encryption (TMEi) engine, the TMEi engine operates at only 3% overhead. Finally, there are minimal hardware changes to ensure a secure public cloud environment, leveraging a storage encryption engine (such as the TMEi engine) and pushing most of the complexity to software (especially VMM). These features allow simple verification of VMM and fast time to market for hardware that supports the functionality of a secure public cloud environment.FIG. 32 is a diagram illustrating a system according to an embodiment of the present invention. As seen, the system 3200 may be a smartphone or other wireless communicator or any other IoT device. The baseband processor 3205 is configured to perform various signal processing regarding communication signals to be transmitted from or received by the system. The baseband processor 3205 is in turn coupled to an application processor 3210, which may be the main CPU and other system software of the system executing the OS, as well as user applications, such as many well-known social media and multimedia applications. The application processor 3210 may be further configured to perform various other computing operations on the device.The application processor 3210 can in turn be coupled to a user interface / display 3220, such as a touch screen display. In addition, the application processor 3210 may be coupled to a memory system, including non-volatile memory (ie, flash memory 3230) and system memory (ie, DRAM 3235). In some embodiments, the flash memory 3230 may include a secure portion 3232 in which keys, other secrets, and other sensitive information may be stored and manipulated. One or more of these storage devices may store information used to provide the secure public cloud described herein. As seen further, the application processor 3210 is also coupled to a capture device 3245, such as one or more image capture devices capable of recording video and / or still images.Still referring to FIG. 32, a universal integrated circuit card (UICC) 3240 includes a subscriber identity module. In some embodiments, the subscriber identity module includes a secure storage device 3242 that stores secure identity information. System 3200 may further include a security processor 3250, which may implement a trusted execution environment (TEE), and which may be coupled to an application processor 3210. Further, the application processor 3210 can implement secure operating modes (such as Intel® Software Guard Extensions (SGX) for a given instruction set architecture) and circuitry for hosting a trusted execution environment (TEE). 
The security processor 3250 and / or the application processor 3210 may be configured to participate in supporting operations that provide a secure public cloud as described herein. A plurality of sensors 3225 including one or more multi-axis accelerometers may be coupled to the application processor 3210 to enable input of a variety of sensing information, such as motion and other environmental information. In addition, one or more authentication devices 3295 may be used to receive user biometric input for authentication operations, for example.As further illustrated, a near field communication (NFC) contactless interface 3260 is provided, which communicates via an NFC antenna 3265 in the NFC near field. Although a separate antenna is shown in FIG. 4, it is understood that in some implementations, one antenna or a different set of antennas may be provided to enable various types of wireless functionality. A power management integrated circuit (PMIC) 3215 is coupled to the application processor 3210 to perform platform-level power management. To this end, the PMIC 3215 may issue a power management request to the application processor 3210 to enter certain low power states as desired. Furthermore, based on platform constraints, the PMIC 3215 can also control the power stages of other components of the system 3200.In order to enable transmission and reception communications such as in one or more IoT networks, various circuits may be coupled between the baseband processor 3205 and the antenna 3290. Specifically, there may be a radio frequency (RF) transceiver 3270 and a wireless local area network (WLAN) transceiver 3275. In general, the RF transceiver 3270 can be used for 3G or 4G communication according to a given wireless communication protocol such as, for example, according to Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Long Term Evolution (LTE) or other protocols Protocol) to receive and transmit wireless data and calls. In addition, there may be a GPS sensor 3280, in which location information is provided to a security processor 3250, which can be used in certain security operations. Other wireless communications may also be provided, such as the reception or transmission of radio signals (eg AM / FM) and other signals. In addition, via the WLAN transceiver 3275, local wireless communication such as according to the Bluetooth ™ or IEEE 802.11 standards is also possible.Referring now to FIG. 33, a block diagram of a system according to another embodiment of the present invention is shown. As shown in FIG. 32, the multi-processor system 3300 can be implemented as a point-to-point interconnection system, such as a server system supporting a key domain. The system 3300 includes a first processor 3370 and a second processor 3380 coupled via a point-to-point interconnect 3350. As shown in FIG. 5, each of the processors 3370 and 3380 may be a multi-core processor, such as a SoC, including first and second processor cores (ie, processor cores 3374a and 3374b and processor core 3384a And 3384b), although there can potentially be many more cores in the processor. In addition, the processors 3370 and 3380 may each include a security engine 3375 and 3385 to perform the secure public cloud operations described herein.Still referring to FIG. 33, the first processor 3370 further includes a memory manager hub (MCH) 3372 and point-to-point (P-P) interfaces 3376 and 3378. Similarly, the second processor 3380 includes MCH 3382 and P-P interfaces 3386 and 3388. 
As shown in FIG. 33, MCH's 3372 and 3382 couple processors to respective memories, namely memory 3332 and memory 3334, which may be parts of main memory (such as DRAM) locally attached to the respective processors. The first processor 3370 and the second processor 3380 may be coupled to the chipset 3390 via P-P interconnects 3352 and 3354, respectively. As shown in Figure 33z2, the chipset 3390 includes P-P interfaces 3394 and 3398.Further, the chipset 3390 includes an interface 3392 to couple the chipset 3390 with a high-performance graphics engine 3338 through a P-P interconnect 3339. The chipset 3390 may be coupled to the first bus 3316 via the interface 3396. As shown in FIG. 33, various input / output (I / O) devices 3314 may be coupled to the first bus 3316 along with a bus bridge 3318 that couples the first bus 3316 to the second bus 3320. Various devices may be coupled to the second bus 3320, including, for example, a keyboard / mouse 3322, a communication device 3326, and a data storage unit 3328, such as a non-volatile storage device or other mass storage device. As can be seen, the data storage unit 3328 may include code 3330, and in one embodiment, code for performing secure public cloud operations described herein. As further seen, the data storage unit 3328 also includes a trusted storage device 3329 to store sensitive information to be protected. In addition, an audio I / O 3324 may be coupled to the second bus 3320.Embodiments may be used in environments where IoT devices may include wearable devices or other small form factor IoT devices, such as actuators and / or sensors. Referring now to FIG. 34, a block diagram of a module 3400 according to another embodiment is shown. In a specific implementation, the module 3400 may be an Intel® Curie ™ module, which includes multiple components adapted within a single small module. Module 3400 may be configured to participate in secure public cloud operations described herein. As shown, the module 3400 includes a core 3410 (of course, in other embodiments, there may be more than one core). Such cores can be ordered cores of relatively low complexity, such as based on the Intel Architecture® Quark ™ design. In some embodiments, the core 3410 may implement a trusted execution environment. The core 3410 is coupled to various components including a sensor hub 3420, which may be configured to interact with a plurality of sensors 3480, such as one or more biometrics, athletic environments, or other sensors. There is a power transmission circuit 3430 along with a non-volatile memory device 3440. In one embodiment, the circuit may include a rechargeable battery and a recharge circuit, which in one embodiment may receive charging power wirelessly. There may be one or more input / output (I / O) interfaces 3450, such as one or more interfaces compatible with one or more of the USB / SPI / I2C / GPIO protocols. In addition, there is a wireless transceiver 3490, which can be a Bluetooth ™ low energy or other short-range wireless transceiver to enable wireless communication as described herein. It should be understood that in different implementations, IoT modules can take many other forms. These forms have a small form factor, low power requirements, limited instruction set, and relatively slow computation compared to typical general-purpose CPUs or GPUs. Throughput or any of the above.As described above with reference to Figures 1-34, the consumer may provide the cloud service provider with an encrypted domain image. 
In the discussion of Figures 1-34, the consumer's encrypted domain image includes code and associated data to be executed as the consumer's workload. The consumer's workload and associated data are described above in the context of a consumer virtual machine, and a portion of the code provided by the consumer includes a portion of the code of the VMM for managing the consumer's virtual machine. This part of the VMM code is described above as a consumer domain manager image or "VMMlet".In the discussion of Figures 1-34, the domain manager (VMMlet) is a privileged code with the ability to create, exit, and restart the execution of a VM. These privileges may be referred to as "vmxroot" functionality and include the ability to execute commands such as virtual machine control structure (VMCS) save / restore, general register (GPR) save / restore, and / or VMexit / VMresume. Furthermore, the domain manager (VMMlet) controls key resources, such as interrupt descriptor tables (IDT), advanced programmable interrupt controller (APIC) instructions, and paged data structures such as page tables and extended page tables (EPT). Because the domain manager image described in Figures 1-34 has root privileges (VMMlet acts as a VMM), the host VMM may access all storage (with its key). No restrictions are imposed on host VMM access to consumer workloads and data.In contrast to the disclosure described with respect to Figs. 1-34, the present disclosure does not provide root privileges to consumers' mirrors. Instead, the consumer's image runs as a guest VM, and the guest VM can only access the memory mapped and granted permissions in the extended page table (EPT).The following sections of this application describe techniques to reduce encrypted consumer domain mirroring to exclude domain managers (VMMlets) and remove the need for cloud service providers to trust consumer-provided code. The encrypted consumer domain image may include only encrypted consumer domain control structures specific to the consumer's virtual machine. The control structure that would normally be provided by the host VMM is now also included in the encrypted consumer domain image provided to the cloud service provider by a consumer or consumer-trusted intermediary.By providing a control structure that sets the state of the consumer's VM processor, the consumer maintains control of the consumer's workload without relying on the host virtual machine monitor to protect the consumer's workload and data. Furthermore, the control structure is provided in an encrypted memory where the host VMM has no access rights and the host VMM does not own the encryption key to further protect the consumer's workload and data from the impact of the compromised host VMM.In one embodiment where the agent can directly access the control and / or memory mapping structure of the guest VM, a software policy for mutual protection of the cloud environment is implemented. An agent or an agent-protected guest VM can run at a given time, but not both. Before starting or restarting the agent to run in the cloud environment, the host VMM protects itself by verifying the agent's control structures (VMCS and EPT, using VMRead and / or HashKD instructions). 
The software policy implements several host self-protection rules, such as not allowing VMCS to edit itself, not allowing VMCS to edit its own EPT, overlapping in EPT is not allowed, and only when another guest VM is offline (not executed) Allows the agent to modify the VMCS of another guest VM.The proxy protects another guest VM by verifying the host's request to modify another client's VMCS or EPT. The agent protects itself by maintaining a separate VMCS for the client and maintaining the VMCS state for each hardware thread. The agent also checks the EPT to ensure that the client EPT does not allow access to the agent, and when the data page should be encrypted (k-bit on), the client EPT does not leak data by specifying unencrypted (k-bit off) memory. In addition, the agent will verify that the customer physical address (GPA) to physical address memory mapping, grant bits (R / W / X), memory type, paging level, and other control structure information are correct.Furthermore, the host VMM and agent can maintain separate copies of the verified control structure. By maintaining separate copies of the verified control structure, the host VMM and the agent can compare the contents of the control structure to ensure that they are the same (for example, using a hash key domain (HashKD) instruction to compare hash values).FIG. 35 illustrates a cloud service provider server environment 3500 including hardware 3510 controlled by a VMM 3522 of a virtual machine monitor (VMM) layer 3520. The VMM layer 3520 uses data structures such as a VM control structure (VMCS) 3524 and an extended page table (EPT) 3526 to control the execution of a virtual machine (VM). VMCS is a data structure in memory that exists once for each logical processor of each guest VM, and the guest VM is managed by the host VMM. In a multi-processor system, each processor executing a guest VM at the same time can have a unique VMCS. With each change in the execution context between different VMs, the VMCS is restored for the current VM, and the state of the virtual processor of the VM is defined. The extended page table (EPT) is used to map memory addresses from customer physical addresses (GPA) known to customers to physical addresses (PA) used to address physical memory.Virtual machine 3530T is an example of a typical implementation of a virtual machine, which is managed by VMM 3522 using VMCS 3524 and EPT3526, both of which are under the separate control of VMM 3522. The virtual machine 3530T is referred to herein as a "trusted" virtual machine, because although the VM 3530T is managed by and under the control of the VMM 3522, there is no mechanism for the VM 3530T to verify that the VMM3522 is not compromised. Therefore, the virtual machine 3530T must trust that the actions of the VMM 3522 will not harm the workload or data of consumers or reveal consumer secrets. In addition, the code for VMM 3522 is considered to be part of the Trusted Code Base (TCB) of the consumer VM.In contrast to the VMCS 3524 used by VMM 3522, each of the virtual machines VM1 35301, VM2 35303, and VM335304 and the proxy virtual machines 35302 and 35305 includes a protected memory area and a corresponding data structure referred to herein as a "control structure" . Each of VM1 35301, VM2 35303, and VM3 35304 includes a protected memory area (key domain), including the respective control structures VMCS / EPT 35401, VMCS / EPT 35403, and VMCS / EPT35404. 
Similarly, the agent virtual machines 35302 and 35305 include corresponding control structures VMCS / EPT 35402 and VMCS / EPT 35405 within their protected memory area. Although these control structures VMCS / EPT 35401-5 are shown as a combination of VMCS and EPT, the control structures may include separate data structures for VMCS and EPT. The control structure may also include other extended control structures, such as virtualized exception information pages, an extended page table pointer (EPTP) list for VMFunction, a model-specific register (MSR) bitmap, and an input / output (I / O) bitmap , MSR load and store pages, or any future control structure extensions to VMCS.The processor appends the key domain identifier / address selector for the guest VM currently executing to the topmost part of these addresses specified in the VMCS.When VMM 3522 manages the instantiation of each untrusted and proxy virtual machine 35301-5, the consumer or their trusted intermediary provides a control structure 35401-5 for controlling the execution of the consumer's virtual machine, and specifically To define the state of the virtual processor for each VM.The virtual machines VM1 35301, VM2 35303, and VM3 35304, and the proxy virtual machines 35302 and 35305 execute in a password-encrypted protected area of a memory 3512 called a key domain such as KD1 35501 and KD2 35502. Key domain keys are used to encrypt and decrypt data in each key domain.Each untrusted guest virtual machine, such as untrusted VM1 35301, along with the associated control structure VMCS / EPT 35401, is hosted inside the key domain. This hosting scheme cryptographically protects untrusted guest virtual machines and associated control structures from compromised virtual machine monitor tampering. Furthermore, placing the control structure for each virtual machine in the same key domain as the associated virtual machine enables each virtual machine to verify the actions taken by the virtual machine monitor with respect to the control structure.Key domains provide a protected environment in which the consumer virtual machine can operate with the consumer's confidence that the consumer's workload and data are protected. Similarly, a virtual machine monitor that manages consumer virtual machines can ensure that no consumer virtual machine has damaged the server platform hardware of the cloud service provider, or software or firmware running on the platform hardware.Consumer virtual machines, such as untrusted VM1 35301, can verify the actions requested by the untrusted cloud service provider's virtual machine monitor 3522 relative to the control structure 35401. The cloud service provider's virtual machine monitor 3522 maintains control of the platform, manages the execution of consumer virtual machines (such as untrusted VM1 35301, untrusted VM2 35303, and untrusted VM3 35304), and can authenticate consumers Virtual machine has not destroyed the virtual machine control structure.The key domains KD1 35501 and KD2 35502 each further include a proxy client virtual machine, respectively an agent 35302 having its own control structure 35402 and an agent 35305 having its own control structure 35405. Because the virtual machine monitor 3522 cannot read data encrypted in the key domain, the virtual machine monitor 3522 uses proxy client virtual machines 35302 and 35305 to act on behalf of the VMM 3522 in the corresponding key domain KD1 35501 or KD2 35502. 
Specifically, the virtual machine monitor 3522 uses a proxy guest virtual machine to manipulate the control structure of the protected virtual machine. For example, the virtual machine monitor 3522 may use the agent 35305 to manipulate the control structure of the untrusted VM3 35304 within the key domain KD2 35502. The virtual machine monitor 3522 can use the consumer's agent to manipulate the control structure of the virtual machine during the process of switching execution to a virtual machine, resuming execution of the virtual machine, and so on.Key domains (such as KD1 or KD2) cryptographically separate the virtual machines for each consumer from each other and from the virtual machine monitor of the cloud service provider. Similarly, actions taken by an untrusted consumer virtual machine can be verified by a cloud service provider's virtual machine monitor. This mutual verification enables cloud service providers to provide a public cloud environment that consumers can trust as the protection of consumer virtual machine workloads and data, and at the same time enables cloud service providers to confirm that consumer virtual machine activities have not damaged the public cloud surroundings.In one embodiment, the host VMM / agent relationship is established during system initialization. Trusted agent code, data, extended page tables, and initial VMCS are loaded into memory and measured. In one embodiment, the measurements of the trusted agent code, extended page tables, and VMCS are performed during the startup sequence for Intel® Trusted Execution Technology (TXT). TXT can also measure other states, such as the state of the system transfer monitor, to prevent access based on the system management mode (SMM); BIOS and VMM measurements can also be taken. Finally, the agent establishes a secret value with the CPU, which is stored in a hidden register and used to identify a valid agent-authorized VMCS. Once the measured agent is loaded into the protected area (key domain) of the memory, the VMM can be loaded outside the key domain and unprotected (trusted) guest VMs can operate normally.Alternatively, when a protected memory area is an alias-named memory encryption area, a secure enclave such as that provided by Intel® Software Protection Extensions (SGX) may load an already encrypted memory image into the memory. When the secure enclave knows the storage encryption key domain key, the secure enclave can load and prove the VM image at runtime. Encrypted memory images can include VMCS, EPT, and code.Similarly, consumers who know the storage encryption key and use the Create Key Domain (CreateKD) instructions described herein can encrypt the consumer's own image, which can be loaded by the cloud service provider's software into an alias to the key domain In plaintext memory. Consumer images include code images encrypted with key domain keys, VMCS, and EPT, making it inaccessible by cloud service providers. The cloud service provider's host VMM can then load (via VMPTRLD instructions) the consumer's VMCS in the key domain. The memory encryption engine will decrypt the VMCS structure from the consumer's encrypted image when the memory is read (VMRead). The host VMM can verify the content of the consumer VMCS, and then the host VMM can call VMLaunch, which will enable the key domain key, and pull the remainder of the consumer image through the storage encryption engine, which will mirror the consumer when it executes Decrypted into cache. 
The VMLaunch or VMResume instruction will enable the specified key domain key (thereby properly decrypting the memory contents).In embodiments where the range register is used to protect the protected memory area of the guest VM, running the agent within the protected memory area has different behavior. Only the guest VM's VMLaunch (and / or VMResume) can abandon the range register protection to allow the guest VM to execute while an effective control structure (VMCS) resides in the key domain. When the guest VM exits (via VMExit), the range register protection will be re-enabled before returning control to the host VMM. Write operations (VMWrite) from the host VMM directly to the protected VMCS are either restricted or rejected. As an optimization, some embodiments may allow restricted write operations (VMWrite) to those areas of the VMCS without compromising the security of the guest VM, for example, exclusively restricting VMWrite from the host VMM to the host status area of the VMCS .Editing VMCS for protected guest VMs requires cooperation from agents. The clear operation (VMClear) returns the cached VMCS to a protected area (key domain) of the guest VM's memory. The host VMM can then call the guest VM via VMLaunch (VMCS using a proxy). The agent can perform editing of the VMCS in a protected memory area (key domain) and also verify host status (eg, using VMCS shadow VMRead / VMWrite or editing VMCS structures in memory).When the agent has finished editing the VMCS, the agent can return (via VMCall or VMExit) to the host VMM. At this point, the host VMM can load the VMCS pointer (VMPTRLD) again and use VMRead to verify that the host state of the VMCS has not been maliciously tampered with by the agent (ie, verify that the VMCS of the host VMM is correct / in the expected state). At the same time, the read operation (VMRead) of the client status area of VMCS may be rejected because VMCS is in the key domain and the guest VM status will be hidden from the host VMM. If the VMCS is valid, the host VMM can then edit the VMCS by the VMLaunch agent and restart the guest VM.HashKD is an instruction used by the host VMM to "read through" a protected memory area while maintaining confidentiality. In an embodiment protected by a range register, a processor range register that protects a memory area of a guest VM does not prevent memory reads when derived from a HashKD instruction. In an embodiment protected as a key domain, the memory is encrypted, but the processor will allow the HashKD instruction to access the memory and decrypt it (for the purpose of generating a SHA2 / 3 hash of the memory contents). HashKD instructions can be used to ensure to the VMM that the structure formed by the customer (such as the new EPT) matches the expectations of the host VMM and will not allow the customer to access the host or other customers. The hash does not reveal the memory content / secret (unknown value), but if the hash value matches a structure that the host already knows / expects, the host VMM can be assured that the memory is properly configured by the client agent, with reference to a proven EPT or Corresponding VMCS of other control structures starts the guest VM.Using these techniques, agents can create additional VMCS and additional EPT structures for multiple guest VMs in a protected memory area (key domain), load consumer images into those guest VMs, and so on. 
The host VMM can then use these VMCS to launch the guest VMs after verifying their EPT and / or other control structures using HashKD instructions. Each client VMCS may include a secret value shared between the agent and the CPU (and stored in a hidden register that is configured with the secret value when the key domain is created (the CreateKD instruction is executed). When the VMPTRLD instruction is executed, the secret value is verified; the VMCS is considered valid only if the secret shared between the agent and the CPU is found. In this way, the protected guest VM cannot collude with the host VMM to create its own VMCS. A malicious VMM also cannot send data to a protected VM (eg, via an I / O or communication channel), which happens to be formatted as a VMCS and thereby implicitly undermines the security of the customer. Additional fields can be added to the VMCS to allow the agent to control whether the host VMM can restart the guest VM. Before the VMCS can be restarted by the host VMM, the restart operation (VMResume) can be restricted by the guest VM, requiring the agent to run and reset the VMCS field first.The VMExit process can be modified to first save (for example, execute XSAVE instructions) all processor register states, or a customer interrupt handler / "shim" can be inserted by the agent to ensure all processor register states are saved Protected memory and cleared before returning control to the host VMM. For an unrestricted guest VM, before returning control to the host VMM (via VMCall), the shim can intercept all interrupts and save and clear the guest register state. In one embodiment, Intel® Virtual Anomaly (#VE) can also be used to intercept all EPT violations and redirect those back to the customer shim, where the processor state can be saved to an encrypted memory area and passed through VMCall is cleared before transferring control to VMM. This technique prevents the register state of the guest VM from being exposed to the host.Finally, multiple register ranges and / or multiple key fields can be established, allowing multiple untrusted guest VMs to be isolated from each other. When the control structure (VMCS) is loaded via the VMPTRLD instruction, the location of the VMCS determines which range or key domain is accessible after VMLaunch. Each key domain is then responsible for its own security, with its own VMCS, EPT, and code / data. By verifying the VMCS before starting the guest VMs controlled by the VMCS, and verifying that the associated EPT and other control structures (using HashKD) are correctly configured to restrict the guest VM's access to the host VMM and other guest VMs, host VMM protection is ensured.Range-register-protected memory can only be an alias back to the host memory (eg, the alias uses higher-order unused address bits). The alias bit (or bits, referred to as "k (s) above") is used by the memory encryption engine to determine whether the memory is encrypted with a key when writing or decrypted when reading. In this way, the host VMM and its guest VM share the same physical storage, but the protected guest VM content is protected from the host VMM access because the guest VM content is stored in storage by the storage encryption engine using the guest VM's key domain Key encryption. The memory is accessed through the host alias (for example, where the higher-order physical address bits are not set), the memory encryption engine is disabled, and the encrypted content is left encrypted. 
In contrast, if the encrypted memory is accessed by a guest VM set with high-order address bits, the memory content is first decrypted by the memory encryption engine using the secret key, leaving the plaintext in the cache, where the high-order bits of the address correspond to the key Domain identifier / address selector. At the same time, protected guest VMs can access the storage through encrypted aliases or plaintext aliases, allowing the guest VMs to communicate with the host VMM and other guest VMs by accessing the host memory area (controlled by the host VMM's EPT verified by the guest VM via HashKD). Alternatively, a known shared key domain (or key domain of the host VMM) can be used for communication between the host VMM and the guest VM.The techniques disclosed herein enable consumer workloads and secrets to be protected without exposing the consumer's key domain keys used to encrypt consumer images, VMCS, EPT, code, and data. The key domain identifier / address selector is not exposed to the host VMM, nor does the key domain identifier appear in the physical address of a control structure, such as an extended page table or a virtual machine control structure. When the VMM enters a protected VM, a key domain identifier / address selector is used, where the VMM establishes a key identifier for the VM when the VMCS is loaded via a VM pointer load (VMPTRLD) instruction. If the VMCS is properly decrypted (unbroken or invalid and / or the secret value inside the VMCS is correct), the hardware is using the correct key domain key (identified by the key domain identifier / address selector), and the key The domain identifier / address selector is associated with the current key domain.Using these techniques, an unlimited number of consumer key domains can be encrypted in memory. In the memory encryption engine, reprogram the keys for the available key domain identifier / address selector slots. For example, when a VM can be suspended such that it is not executing on any core, a dump can be used to clear the cache cache content for that VM. Then, the Create Key Domain (CreateKD) instruction can be called by the VMM to establish different keys for the key domain identifier / address selector of the suspended VM. In this way, suspended VMs can be scheduled.To summarize the techniques used in this article, the initial startup or restart of a guest virtual machine (or agent) in a protected memory area (key domain) causes hardware (e.g., a page miss handler) to be used for allocation to the client The unused bits of the corresponding physical address of each memory location of the virtual machine (or agent) are set to the key domain identifier / address selector. The key domain identifier / address selector identifies a protected memory area (key domain), where data for the guest virtual machine (or agent) is encrypted by the key domain key. The unused bits of the physical address are set to the key domain identifier / address selector, except for cases where the extended page table (EPT) of the guest virtual machine (or agent) specifies that encryption is to be turned off.When the guest virtual machine (or agent) is to be started or restarted initially, the key domain identifier / address selector is specified by the unused bits of the address provided in the VM Pointer Load (VMPTRLD) instruction, which is used to load Control Structure (VMCS) of a guest virtual machine (or agent) that is started or restarted. 
In response to the VMPTRLD function call, the CPU hardware reads the VMCS inside the key domain by setting the key domain identifier / address selector in the physical address. If the VMCS is invalid or corrupted, the host VMM will reject the VMPTRLD function call for the request to load the VMCS. If the VMCS is valid, the VMCS will be written to the cache for use by the agent or guest VM to be started, the client address space identifier will be cleared by the dump, and the guest status will be cleared, enabling the new VMCS to be newly started The VM or agent configures the address space and guest status.Exiting the guest VM (or agent) causes hardware (such as a page miss handler) to stop using the key domain identifier / address selector in the address, or switch back to the key domain of the host VMM.Agents are loaded and measured (e.g., by Intel®'s Trusted Execution Technology (TXT), Software Guard Extensions (SGX), or manageability engine), or simply included by consumers as images in encrypted storage for Manage consumer VMs. The agent uses the correct physical memory address as a fine-tuning to perform XTS encryption on the key domain, runs as a guest VM, and manages the control structures (VMCS and EPT) of the guest VM for agent protection. Consumers can trust the proxy to measure and prove the validity of the host's VMM, securely exchange private keys, load consumers' encrypted domain images into the key domain, and maintain consumer workloads and data privacy.In one embodiment, a guest virtual machine (such as an agent) can use a shim (such as an interrupt descriptor table hook) to intercept interrupts. The shim can be a driver in the guest operating system. The guest VM can create its own shim (driver or code that intercepts interrupts / exceptions), or the agent can create shim on behalf of the guest VM. The agent can also have its own shim (driver or code that handles interrupts / exceptions, etc.). The shim runs as an unrestricted client, handles interruptions and virtualization exceptions, uses the VMCall instruction to exit the VM, uses a virtualization exception handler to intercept extended page table violations, uses VMFunc functions to switch to other extended page tables, and ensures that Before the VMCall instruction is transmitted back to the host VMM, the general-purpose registers and XMM state, which may keep confidential data, are saved and cleaned. Depending on the context, the shim can selectively expose some client state to the host. For example, if the virtual device driver accesses device space with a register value describing the I / O memory location to be used by the virtual device for DMA, the register data can be exposed to the host VMM (instead of being saved and cleared).These features enable the "blind" host VMM / supervisor to maintain control of the platform. For example, the host VMM can refuse to launch guest virtual machines without a valid or verifiable control structure (VMCS / EPT). Furthermore, the host VMM can set the "VM preemption" timer to preempt the execution of the guest virtual machine and return control to the host VMM.Figure 36 shows the data flow of a virtual machine monitor (host VMM) accessing a virtual machine control structure for a guest virtual machine running in a protected key domain. 
Because the host VMM 3622 cannot access the protected memory (key domain KD1 35501) where the agent 36302 is located, the host VMM 3622 requests the agent VM 36302 to interact with the control structure 36401 for the untrusted VM 136301 on its behalf.In the action "VMRead" 36.1, the host VMM 3622 is allowed to read the control structure VMCS / EPT 36401. Even if the VMM 3622 does not have the decrypted key domain key, there is no need to request the proxy VM 36302 to read the control structure VMCS / EPT 36401 on its behalf. Processor microcode can allow operations to be read into the cached VMCS so that the host VMM can verify the VMCS before entering. The read operation does not expose confidential information about the guest VM (because the GPR register state is saved elsewhere, the host VMM cannot access it). The read operation (VMRead) allows the host VMM to verify that the agent correctly edits the VMCS as requested by the host VMM. An alternative to a read operation (VMRead) is to use a HashKD (HashKD) instruction to verify (by matching the hash value) that the VMCS in memory matches the expected VMM of the host. Regardless of whether the host VMM uses a proxy read control structure, the host VMM is allowed to verify but not modify the VMCS / EPT.In action "VMWrite" 36.2, VMM 3622 is not allowed to write data directly to VMCS / EPT 36401. In contrast, in the action "Request VMWrite" 36.3, the VMM 3622 sends a VMWrite request to the agent 36302 to write to the control structure VMCS / EPT 36401. In the action "Request EPT Edit" 36.4, VMM 3622 requests agent 36302 to edit the EPT in VMCS / EPT 36401. In "VMWrite" action 36.5, the agent 36302 performs the requested editing of the EPT within the VMCS / EPT 36401. In some embodiments, the host VMM may allow restricted write operations (VMWrite), for example, the host VMM write operations are restricted by the CPU to those fields of the VMCS that have no effect on the behavior of the guest VM. For example, the CPU may exclusively allow the host VMM to write to the host status area of the VMCS. Any write operation that affects the security of the client needs to be performed by the agent.FIG. 37 illustrates a process of an agent editing a virtual machine control structure for a guest virtual machine running in a protected key domain on behalf of a virtual machine monitor action. Two types of shading are shown for each box; line fill patterns are used to show actions under the control of the host VMM, while dot fill patterns show that control has been passed to the guest VMs that are executing within a protected key domain .At "VMLaunch VM" box 3710, the process begins with the host VMM launching the guest virtual machine. In one embodiment, in order to start the guest virtual machine, the VMM first issues a command to execute a VM pointer load (VMPTRLD) instruction, which provides the guest VM with a pointer to a control structure (such as VMCS) provided by the consumer, thereby setting the current VMCS and key domain identifier / address selector. As a result of executing the VM pointer load instruction, the CPU caches the VMCS. If the VMCS is invalid or corrupted, the host VMM will reject the VMPTRLD function call for the request to load the VMCS. If the control structure / VMCS is in a protected memory area (key domain), the key domain identifier / address selector is appended by hardware to each physical address belonging to the guest VM that is starting up. 
Once the current VMCS is established, the VMM issues a command to execute a VMLaunch or VMResume instruction (these are also referred to herein as VMEntry or VMEntry instructions).After entering the guest VM, the key domain can be said to be "on", similar to the SwitchKD instruction previously described. As shown in the transition from the line fill pattern to the dot fill pattern in the "VMLaunch" box 3710, when the guest VM is started, control is transferred from the host VMM to the guest VM in the key domain. Write the VMCS to the cache and set the client address space identifier, or otherwise clear the client TLB. The guest VM then executes until the guest VM completes its workload.At "(Time Lapse) VMExit" box 3720, the guest VM completes execution and control returns to the host VMM. As shown by the transition from a dot fill pattern to a line fill pattern in the "(Time Lapse) VMExit" box 3720, when the guest VM completes execution, control is transferred from the guest VM in the key domain back to the host VMM. VMExit is usually an operation performed by a guest VM (such as access to an invalid, protected, or paged-out memory area) or an asynchronous event caused by an external event (such as an unhandled interrupt by the guest VM or the preemption timer expires). Alternatively, the guest VM may issue a command to execute a VMCall instruction that results in a type of VMExit. When the guest VM exits, the key domain identifier / address selector is reset to the key domain of the host VMM, or the protected memory range register of the guest VM is re-enabled and control is returned to the host VMM.After exiting the guest VM, and before returning control to the "root" / host VMM, the microcode reprograms the hardware (such as a page miss handler) to set the host VMM or shared key domain on all physical addresses (unless Where indicators (such as the k bits described above) are set to turn off encryption). When returning control to the host VMM, the client's key domain key should no longer be used, nor should the guest VM's key domain identifier / address selector be used. Use the host's VMM key, or turn off encryption (k-bit off). In fact, the CPU switches out the key domain of the guest VM (similar to the implicit key domain switching) to the key domain of the cloud service provider (host VMM). Because the host VMM "root" is running under the guest VM that is exiting, control is returned to the host VMM, the key domain is switched back to the key domain of the host VMM, or shared unencrypted memory (for example, the shared bit indicator (kbit) is turned off ).The host VMM executes the VMClear command at the "VMClear" box 3730 to save the state of the guest VM from the cache to the memory. As part of the VMClear command, the host VMM provides a pointer to the key domain in which the guest VM is executing and checks for a valid control structure (VMCS) within the key domain, or the VMCS has been cached for the same address.The pointer provided with the VMClear instruction should be the same as the one originally used to load VMCS with the VMPTRLD instruction into the cache, where the key domain ID has been appended to the pointer, which is the physical memory address, including the key Domain ID. It is important that VMClear does not send VMCS from one memory location to another, or from one key domain to another, because these operations can be an attack. 
Thus, the VMClear instruction uses the same pointer that was given with the VMPTRLD instruction and is cached by the processor, or the processor will first need to verify that for a specified key domain identifier / address selector, a valid VMCS is being VMCleared in memory location.If VMCS is valid and the pointer matches the cached VMCS memory address and key domain identifier / address selector, the host VMM uses the key domain identifier / address selector in the physical address to clear the VMCS dump To the memory, so that the state of the guest virtual machine is saved in the memory of the key domain. VMCS may not be cached because not all processors explicitly cache VMCS; some processors will access VMCS from memory. If the VMCS is not cached, the host VMM can perform a consistency check by reading the memory using the key domain identifier / address selector in the physical address to check for invalid / broken VMCS (e.g., secret values inside VMCS Does not match one of the values established in the processor's hidden register when the key domain is created (when the CreateKD instruction is executed).The processor first checks the VMCS by reading the VMCS before writing the VMCS to memory to ensure that the VMCS is valid and not corrupted. For example, if the host VMM specifies the wrong key domain identifier / address selector and therefore the VMCS decryption is incorrect, or if the host VMM specifies the wrong memory location for the VMCS, the VMCS will be corrupted. If invalid or corrupted VMCS data is found, the host VMM will receive an error and the processor will not use VMCS.Control proceeds from the "VMClear" box 3730 to the "VMPTRLD agent VMCS" box 3740. The VMPTRLD instruction provides the physical address of the VM Control Structure (VMCS) for the agent to be loaded into memory. If the address of the proxy VM control structure is inside the key domain, the unused bits of the physical address include the key domain identifier / address selector.As described above, the agent's code and control structure (VMCS) is provided to the host VMM by the consumer as part of the consumer's encrypted domain image, and the correct storage location of the VMCS is provided to the host VMM. The host VMM then proceeds to the "VMEnter Agent" box 3750.After confirming that the control structure (VMCS) indicated by the VM Pointer Load (VMPTRLD) instruction is valid, the host VMM will issue a VMEnter command (which can be a VMLaunch or VMResume command) to execute in a protected memory area (key domain) proxy. The processor will use the key domain ID in the address (thereby allowing the agent to access properly decrypted memory). In embodiments where the protected memory area is provided by a range register, the VMEnter instruction disables the range register protection only when the agent's VMCS is inside the protected memory area and includes secret values that are known only to the CPU and agent. Run the agent in a protected memory area. As shown in the transition from the line fill pattern to the dot fill pattern in the "VMEnter Agent" box 3750, when the agent is started, control is transferred from the host VMM to the agent in the key domain. Once the agent is in control within the key domain, the host VMM is "blind" to activities that occur within the consumer's guest VM.The host VMM may launch an agent within the key domain to request the agent to act on behalf of the host VMM to control the execution of another guest VM executing within a protected memory area (key domain). 
To control the execution of another guest VM, the host VMM requests the agent to edit the control structure of the other guest VM. According to the software agreement agreed between the cloud service provider and the consumer, requests from the host VMM can be made in many forms. For example, the request can be put into a structure or command queue in memory, or the request can include the entire VMCS to be copied from the memory of the host VMM to the customer's protected key domain memory. Alternatively, the request of the host VMM may be encoded in a processor register (eg, GPR).In the "Agent read request from host" box 3760, the agent reads the request from the host VMM to edit the control structure of another guest VM. As mentioned above, depending on the software agreement agreed between the cloud service provider and the consumer, the request may have been made in multiple forms.The agent proceeds to "Agent edits VMCS inside KD" box 3770, where the agent edits the control structure (VMCS) of another guest VM in the key domain. This permission to read and write the control structure of the guest VM (using VMRead and VMWrite instructions) can be implemented using VMCS shadows. Without the VMCS shadow, the guest VM (such as an agent) cannot normally execute the VMRead and VMWrite instructions because the VMRead and VMWrite are intended for use by the host VMM running in VMXRoot mode. Because VMCS is a structure in memory, guest VMs executing within the same key domain may edit VMCS directly on behalf of the host VMM; however, for security reasons, at least one embodiment limits the ability to edit VMCS to agents.After editing the control structure of another client VM, the agent exits by executing the VMCall instruction, returning control to the host VMM, as shown by the transition from a dot fill pattern to a line fill pattern in the "VMExit" box 3780 . VMExit returns to the host VMM to re-enable the protected memory area provided by the encryption key domain, which not only prevents the host VMM from accessing the agent's code / data, VMCS and extended page tables (EPT), but also prevents it from accessing the agent-protected guest VM Code / data, VMCS and EPT.After receiving control back from the agent, in the "VMCS edited by VMPTRLD in KD" box 3790, the host VMM first uses the edited VMCS address / pointer (including key domain identifier / address selector) to execute the VMPTRLD instruction. The VMPTRLD instruction loads the edited control structure (VMCS) into the key domain. Only in this way can the host VMM execute the VMRead instruction to verify that the agent edits another client's VM control structure as requested by the host VMM. Even if the data in the key domain is encrypted and the host VMM does not have the decrypted key domain key, the host VMM is allowed to use the VMRead instruction to read parts of the VMCS control structure in the memory.After confirming that the guest VM control structure has been edited as requested, control then returns to the "VMLaunchVM" box 3710, where the host VMM launches the guest VM to execute in accordance with the edited control structure provided by the host VMM.In some embodiments, the write operation (VMWrite) can be restricted from the host VMM to the protected VMCS, requiring the host to always require the agent to perform any editing of the VMCS on behalf of the host. In such an embodiment, if the host VMM requests a write operation to a VMCS inside the key domain, the write operation will be blocked. 
In other embodiments, a restricted write operation (VMWrite) to certain VMCS fields (such as fields in the host status area that do not affect the security of the guest VM) may be allowed by the host VMM.The host VMM is allowed to perform a storage operation (VMPTRST) to store the current (cached) VMCS pointer from the VMPTRLD instruction to a specified address in the memory, which must be outside the protected or encrypted memory of the guest VM. The key domain is not specified as part of the physical address used for VMPTRST operation.FIG. 38 illustrates an interrupt handler / shim for a guest virtual machine to selectively protect a processor register state of the guest VM, such as a general-purpose register (GPR), from a modified virtual machine monitor. If VMExit is triggered, the interrupt handler / shim is called before exiting the guest VM, giving the guest VM the opportunity to conditionally save and protect the state of its processor registers before exiting the host VMM. In the example shown, each of the untrusted guest virtual machines VM1 38301 and VM2 38302 and the agent 38303 within the key domain 35501 has its own respective interrupt handlers / shims 38351, 38352, and 38353. Instead of the VMExit instruction causing the corresponding host VMM to be instantiated, the VMExit instruction is redirected to the corresponding interrupt handler / shim, which uses software to hide the processor state of the guest VM from the host VMM 3822. One example of an interrupt handler / shim is the virtualization exception (#VE) driver described in US Patent Application Publication 2015/0121366.Each of the interrupt handlers / pads 38351, 38352, and 38353 can perform general-purpose register save or restore operations. In one embodiment, each guest VM runs as an unrestricted guest, intercepts all interrupts, uses virtualized exception handlers to intercept EPT violations, uses VMFunc to switch extended page tables, and ensures that execution is transferred to the host via VMCall instructions Prior to VMM 3822, conditional saving and / or cleaning of general-purpose registers and other registers was conditional.FIG. 39 shows the interrupt handler / shim implementation of FIG. 38. The untrusted VM1 3930 caused a VMExit condition, causing the virtualization exception to be redirected back to the customer's interrupt handler / shim 3935. This redirection is considered a "customer-induced exit" rather than a VMM-induced exit. In one embodiment, the customer-induced exit causes a virtualization exception, and the interrupt handler / shim 3935 handles the virtualization exception. The interrupt handler / shim 3935 may include a virtualized exception handler (#VE) code (not shown), bypassing the host VMM. After the virtualization exception handler #VE processes the virtualization exception, control returns to the interrupt handler / shim 3935, which conditionally saves and clears the processor register values (for example, before making a VMCall to return control to the host VMM) (e.g., , GPR). The host VMM can then issue a command to execute the VMResume instruction, which will restart the untrusted VM1 3930, including the interrupt handler / shim 3935. The interrupt handler / pad 3935 causes the register value to be restored and returns control to the untrusted VM1 3930, for example, by using an IRET instruction.In one embodiment, the virtualization exception handler #VE can intercept EPT violations, as well as CPUID and other customer-induced exit conditions. 
The virtualization exception handler #VE can decide which GPR / XMM register state to save and / or clear by issuing a command to execute the VMCALL instruction before transferring control back to the host VMM. In this way, the untrusted VM1 3930 can determine aspects of the CPU state to be exposed to the host VMM.FIG. 40 shows the data flow during the interrupt handler / pad driver operation of FIGS. 38 and 39. Regarding saving the encryption state, some data in the key field 4050 will be register state information. Host VMM-specific exits (no emulation required, such as, for example, a preempt timer event) can automatically save the CPU state. For example, the CPU can save all register status information to the VE information area 4033 (such as a 4KB page), then clear the register status to zero, set VMCS to restore the register status upon reentry, and then exit the guest VM 4030. The host VMM 4022 will process the VMExit and VMResume instructions. After receiving the command to execute the VMResume instruction to restart a given VM, the CPU will check the VMCS, restore the processor state of the guest VM from the VE information area 4033, and restart the guest VM.41 is a diagram illustrating messages between components of a cloud service environment for encrypting a code image provided by a consumer and establishing a key domain according to one embodiment of the present invention. The data flow described for FIG. 41 is similar to the data flow described above for FIG. 13, although the data flow is in the context of a cloud service provider's host VMM and a consumer-provided image that will run as a guest virtual machine that provides the proxy. Note that the same data flow applies regardless of whether the image provided by the consumer is for an agent running as a guest virtual machine or for any other consumer workload running as a guest virtual machine.Consumer 4101 requests a protected service from a cloud service provider, and in response, the cloud service provider's software, such as, for example, host VMM 4122, proxy images, virtual machine control structures (VMCS), and extended page tables (EPT) for customers Provide a memory location. Given these storage locations, the consumer 4101 edits the VMCS and EPT that will be used to instantiate the customer agent image on the cloud service provider's server.In one embodiment, the client agent image including the control structures VMCS and EPT is then encrypted by the consumer 4101 using a memory location-related "fine-tuned" password (eg, XTS) and the consumer's key domain key. In the embodiment shown, VMCS and EPT are embedded within the client agent image, although VMCS and EPT can be provided separately as long as they are encrypted using the consumer's key domain key.The consumer 4101 may also use the key domain key to calculate an integrity check value (ICV, such as a key-controlled hash message authentication code (HMAC) value) used to encrypt the client agent image. The ICV can be calculated as a location-dependent value and used to verify the content and location of the associated memory location of the encrypted client agent image.The consumer 4101 requests the cloud service provider host VMM 4122 to identify a server in the cloud service provider network that provides key domain management functionality. The cloud service provider host VMM 4122 (in this example, from a server with a CPU 4111) obtains a server certificate of a server that supports the key domain, and provides the server certificate to the consumer 4101. 
According to at least one embodiment, the consumer 4101 verifies that the server certificate is signed by an authority that the server identified by the certification provides key domain management functionality.Consumer 4101 encrypts the consumer's key domain key (and any other consumer secret data, such as storage) with the public key of the key domain-supporting server corresponding to the certificate of the server supporting the key domain by the cloud service provider Secret value within VMCS). Consumer 4101 sends encrypted key domain key, encrypted client agent image (including EPT and VMCS) and (optional) integrity check value (ICV) to cloud service provider host VMM 4122, cloud service provider host The VMM 4122 issues a Create Key Domain (CreateKD) command to the CPU 4111 of a server that supports the key domain. In one embodiment, the cloud service provider host VMM 4122 identifies the key domain address selector to be used for the new key domain, and provides the key address domain selector to the CPU 4111 of the server supporting the key domain. The CPU 4111 of the server supporting the key domain creates and initializes the key domain. Initializing the key domain may include dumping the cache of any previous key domain (identified by the previous key domain address selector), and dump clearing the cache of the translation lookaside for the address mapping of the previous key domain Device. Initializing the key domain may also include programming the memory encryption engine with the decrypted key domain key, and configuring a hidden register in the CPU corresponding to the secret value of the client agent (s) VMCS that uniquely identifies the consumer.As an alternative to performing the initialization function as part of the key domain creation instruction, the CPU 4111 of the server supporting the key domain may execute the initialization key domain (InitKD) instruction to dump the clear cache and translation lookaside buffer.The host VMM 4122 executing on the CPU 4111 can also directly provide the consumer's encrypted client agent image (including EPT and VMCS) and the integrity check value to the memory 4112. The customer agent image of the consumer has been encrypted, so that the customer agent image of the consumer can be directly written into the memory as it is in plain text, bypassing the memory encryption engine TMEi4115. Alternatively, the consumer's encrypted client agent image can be passed through the memory encryption engine TMEi 4115 with the encryption bit (k bits) of the physical address turned off, so that the memory encryption engine TMEi 4115 treats the consumer's encrypted client agent image as unencrypted Encrypted plain text. When the consumer's client agent image is read from the memory 4112 later with the correct key domain key, the memory encryption engine TMEi 4115 will then decrypt the content (when the encrypted VMCS / control structure is being read and the consumer While the client agent image is being executed by CPU 4111).Figure 42 illustrates an alternative embodiment for creating a key domain. In this example, a secure enclave, such as an enclave created using Intel® Software Protection Extensions (SGX), creates a customer image locally on the cloud service provider's server. Consumer 4210 first obtains a certificate from enclave 4290 running on a cloud service provider's server. Consumer 4210 then validates the enclave certificate and sends the client image to enclave 4290 via a security mechanism such as a secure socket layer connection. 
Enclave 4290 obtains the memory locations of customer images, VCMS, and EPT from the cloud service provider's host VMM (not shown) and programs the local key domain key. Enclave 4290 then issues a command to CPU 4211 to execute the key domain creation instruction. When a new key domain is created, the local CPU 4211 dump clears the cache of the previous key domain address selector and programs the memory encryption engine TMEi 4215 with the key domain key of the address selector.Enclave 4290 then creates VMCS and EPT for the customer image and re-encrypts the image (including VMCS and EPT) using the key domain key previously determined by Enclave 4290. In the embodiment shown, the enclave 4290 also calculates the integrity check value of the customer image using the key domain key. Enclave 4290 then provides the encrypted client image, including VMCS and EPT, to the cloud service provider's host VMM (not shown, but executed on CPU 4211). Alternatively, any other Trusted Execution Environment (TEE), such as a Manageability Engine (ME) or Converged Security Engine (CSE), may perform the same functions that are given to the enclave TEE here. Similarly, trusted third-party servers or services can generate encrypted memory images on behalf of consumers for instantiation on the cloud service provider's infrastructure.As described above for FIG. 41, the host VMM (not shown) executed on the CPU 4211 can also directly provide the consumer's encrypted client image (including EPT and VMCS) and integrity check values to the memory 4212. The consumer's customer image has been encrypted, so that the consumer's customer image can be directly written to the memory as it is in plain text, bypassing the memory encryption engine TMEi 4215. Alternatively, the encrypted client image of the consumer may pass the memory encryption engine TMEi 4215 with the encryption bit (k bits) of the physical address turned off, so that the memory encryption engine TMEi 4215 treats the encrypted client image of the consumer as unencrypted Plain text. When the consumer's customer image is read from the memory 4212 with the correct key domain key later, the memory encryption engine TMEi 4215 will decrypt the content (when the encrypted VMCS / control structure is being read and the consumer's customer (Agent image is being executed by CPU 4211).FIG. 43 illustrates one embodiment of a process for a host VMM to verify a proxy VMCS provided by a consumer. The cloud service provider's host VMM 4322 issues to the CPU 4311 the VM pointer load using the control structure (VMCS) address provided by the consumer and the key domain identifier / address selector located in the consumer's encrypted memory image ( VMPTRLD) command. The memory encryption engine TMEi 4315 uses selected bits (such as the uppermost unused bit) of the physical address specified via the VMPTRLD instruction for VMCS as the key domain identifier / address selector. The memory encryption engine TMEi 4315 reads the encrypted data rows from the memory 4312 and decrypts the data rows using the key domain key determined from the key domain identifier / address selector (using a memory location-dependent password as described above, such as XTS ). If the VMCS decryption is correct, the CPU 4311 caches the VMCS and the host VMM 4322 of the cloud service provider executes VMRead of the VMCS to ensure that the VMCS is properly configured for the host VMM 4322. 
If the VMCS configuration is correct, the host VMM4322 issues a command to the CPU 4311 to execute the VMLaunch instruction to start the agent using the agent VMCS provided by the consumer. The CPU 4311 then runs the agent as a guest VM.The verification of the proxy control structure (VMCS / EPT) described above with respect to FIG. 43 may alternatively be performed using a HashKD (HashKD) instruction. In an implementation using HashKD, the CPU will execute the HashKD instruction at the memory location where VMCS / EPT was originally installed to determine the expected hash value, create another hash value based on the data generated in response to the read instruction, and verify The hash values match, thereby validating the control structure (VMCS / EPT).Once the agent control structures (VMCS and EPT) are verified by the host VMM, the host VMM trusts the agent. The agent can then be started by the host VMM and used to modify the consumer's image by adding code or data to modify the functionality of the consumer's guest VM.Although the description of FIG. 43 relates to the verification agent VMCS, the same data flow applies to any consumer-provided guest VMCS to be used to launch the guest virtual machine.Figure 44 provides an example of a host VMM requesting an agent to modify the data flow of a control structure (VMCS / EPT) for another guest VM. The described process uses EPT as an example of the control structure being modified, but the same process applies to both EPT and VMCS modification, as well as modification of any other control structure or memory contents of the guest VM.When the guest VM 4402 exits and returns control to the cloud service provider's host VMM 4422, the EPT entry may indicate that the page does not exist. As an example, the EPT entry may be set to indicate that the memory page does not exist because the page was previously paged out by the host VMM page. If the guest VM attempts to access the address of the non-existing page, the associated EPT entry will reveal that the page does not exist, causing VMExit to appear. As shown in Figure 44, the goal of the host VMM is to replace the original page back into memory, and then reset the exited EPT entry to indicate that the page now exists at the specified address, so that the host VMM can return on the accessed page In the case of memory, execution of the guest VM is restarted.After receiving control, the CPU (not shown) will restore the host state of the host VMM 4422 from the control structure (eg, VMCS, not shown) associated with it. The host VMM 4422 may then decide to restart the guest VM 4402 and send a message to the agent 4403, which acts on behalf of the host VMM 4422 in the key domain of the client VM 4402 to be restarted. The host VMM 4422 tracks which guest VMs have exited (in this example, the guest VM 4402) because the host VMM4422 originally launched the guest VMs, including the guest VM 4402. The host VMM 4422 also tracks the key domain to which each guest VM belongs, because the host VMM 4422 specifies the key domain identifier / address selector as the address used in the VM Pointer Load (VMPTRLD) instruction used to load the VMCS for the guest VM a part of. 
The host VMM 4422 also tracks the agents used for a given key domain and the associated guest VMs for that given key domain.In order to restart the guest VM 4402, the host VMM 4422 prepares a message for the agent 4403 to edit the EPT of the guest VM 4402 and performs a clear operation (issues a command to execute a VMClear instruction) to remove the VMCS cache from each processor (not shown Out) The VMCS of the guest VM 4402 is cleared. As part of the clear operation, the VMCS of the guest VM 4402 along with the key domain identifier / address selector also returns to the cache 4413 on its way back to the memory 4412. The key domain identifier / address selector will be used as a selected set of bits (such as the uppermost unused bit) for the physical address that VMCS will eventually write to indicate to the memory encryption engine which key domain key to use )a part of. The memory encryption engine TMEi 4415 writes the encrypted VMCS to the memory 4412 along with the integrity check value of the embodiment with integrity.After issuing the command to execute the VMClear instruction, the host VMM 4422 performs two operations. In the first operation performed by the host VM4422, the host VMM 4422 puts an unencrypted (k-bit off, also designated as! K) request into the shared with the proxy 4403 (which can be in the cache 4413 or the memory 4412) The memory location specifies the editing of the EPT request for VM 4402. In the example shown, the memory encryption engine TMEi 4415 writes a plaintext request to a memory location in the memory 4412. Later, the agent 4403 will retrieve the edits requested for the EPT of the VM 4402 from the memory 4412. In addition to the EPT edit request, the host VMM 4422 can provide additional information to the agent, including the contents of the encrypted page that will be paged back into memory.In the second operation performed by the host VM 4422, the host VMM 4422 issues a command to execute the VMPTRLD instruction, providing a pointer to the VMCS and key domain identifier / address selector of the agent 4403, which determines the identifier / address selector The key used to decrypt the VMCS of the agent 4403. The memory encryption engine TMEi 4415 reads the VMCS of the agent 4403 from the memory 4412, and the engine decrypts the VMCS of the agent 4403 using the key domain identifier / address selector specified in the VMPTRLD instruction. The host VMM 4422 obtains the decrypted VMCS generated by the memory encryption engine TMEi 4415. In response to the VMCS of the agent being properly loaded, the host VMM 4422 issues a command to execute the VMLaunch instruction, which uses the properly decrypted agent VMCS as the control structure to start the agent 4403. The CPU running the agent 4403 code restores the client state of the associated virtual processor from the decrypted agent VMCS.In the example shown, the agent 4403 causes the request data to be read from the memory 4412. In one embodiment, the agent 4403 checks the command queue location in the memory and finds there another request from the host VMM 4422, and then the agent 4403 responds to the command by having the request data read from the memory. The request may also be passed to the host VMM 4422 as the processor register status (eg, via the GPR register status), and the host VMM 4422 may use the processor register status to trigger the agent 4403 to read the request from memory. 
Other embodiments may use different mechanisms to pass to the agent 4403 that there is an outstanding host VMM 4422 request.The proxy 4403 reads the unencrypted request data from the cache, and the proxy 4403 processes the request to edit the host VMM 4422 of the EPT of the client VM4402. The agent 4403 writes the EPT edit for VM4402 to the memory 4412 via the memory encryption engine TMEi 4415, which causes the encrypted EPT and the associated integrity check value to be written to the memory 4412. Alternatively, the agent 4403 (which has access to the memory encryption key and the memory location of the EPT entry) may simply use the memory encryption key to encrypt the edited EPT structure, and the ciphertext (encrypted edited EPT structure) is passed back to the host VMM 4422.The proxy 4403 can also decrypt any encrypted page content provided by the host VMM 4422 that will be paged back into memory. For example, the proxy 4403 can use the different secret keys that the proxy 4403 uses for paging, using the customer physical address (GPA) address as a fine-tuning, to decrypt the page. The proxy 4403 can also verify the page content, or use ICV to verify that the page content has not been modified since they were last paged out and encrypted with the same key and fine-tuning. The agent 4403 can then write the decrypted page (s) to memory, assuming the agent can access these memory locations, or the agent can use the memory encryption key and physical address fine-tuning to re-encrypt the page and obtain the resulting via a shared memory channel The ciphertext is returned to the host VMM 4422. After completing the request to edit the EPT issued by the host VMM 4422, the agent 4403 completed its task and exited, returning control to the host VMM 4422.The CPU executing VMExit restores the host VMM 4422 state from the host status area of the VMCS of the agent 4403. The host VMM 4422 may optionally issue a command to execute a VMClear instruction to clear the VMCS of the agent 4403. As part of the clear operation, the VCMS of the agent 4403 writes to the cache 4413 with a key domain identifier / address selector. The encryption agent VMCS and the associated integrity check value are written to the memory 4412 by the memory encryption engine 4415.The host VMM 4422 can then verify the agent 4403 by issuing a command to execute a hash key domain (HashKD) instruction on the memory location where the EPT of the VM 4402 will be modified or where the host VMM installs the EPT ciphertext provided by the agent Correctly edited the EPT of the VM 4402. The memory encryption engine TMEi 4415 uses the key domain identifier / address selector as the address selector to read the encrypted data of the associated key domain. The memory encryption engine TMEi 4415 reads the encrypted data line, decrypts the data, and sends the decrypted data to the cache 4413 for the address and key domain identifier / address selector.The processor executing the HashKD instruction reads the decrypted data obtained by the memory encryption engine TMEi 4415, and the host VMM 4422 verifies that the hash value of the read memory location content matches the hash value of the expected EPT content. In this regard, the host VMM 4422 can also write any ciphertext of the page re-encrypted by the agent 4403 and returned via shared memory (for cases where the agent cannot access the memory to install the page itself). 
If the hash values match, the host VMM 4422 issues a command to execute the VMPTRLD instruction for the VMCS of the VM 4402. At this point, the host VMM can then use VMRead to revalidate the contents of the VMCS. The host VMM 4422 then issues a command to start the guest VM 4402 from the VMCS of the VM 4402. The processor executing the code of the VM 4402 restores the state of the virtual processor of the VM 4402 from the VMCS of the VM 4402 and executes the VM 4402 using the modified EPT, which now indicates that the page exists and identifies the memory address where the page is located.The memory transaction described with reference to FIG. 44 may reside in the cache 4413, thereby eliminating the need for the memory encryption engine (TMEi 4415) to read and write data from the cache 4413 to the memory 4412 (and vice versa). However, the above-mentioned memory transactions are described as if the data for each transaction was evicted from the cache 4413 into the memory 4412.Some embodiments may also enable page modification logging (PML) so that the host VMM can track the memory locations accessed by the client agent. The number of HashKD instructions can only be limited to those memory locations that are actually modified while the agent is executing. Here, the PML address and log page will be kept in the key field (or shared memory / k-bit key) of the host VMM, so that the VMM can track the actions of the guest VM.Similarly, some embodiments may use a sub-page policy table (SPPT) [as described in US Patent 9,335,943 B2] to allow the host VMM to further restrict the memory locations that the agent can access. In such an embodiment, the host VMM will control the SPPT and the agent may be given access to the memory via its EPT, where the agent's VMCS indicates that additional SPPT is enabled. The subpage protection table pointer in VMCS will use the key field of the host VMM (or shared memory / k-bit key). The proxy's EPT entry includes an SPP bit set for each entry that the host VMM can use the SPPT of the host VMM to rewrite access to.FIG. 45 is a flowchart showing execution of VM pointer load (VMPTRLD), VMEnter (VMLaunch or VMResume), and VMExit instructions. In block 4505, in response to the VMPTRLD instruction providing the VMCS address, a key domain identifier / address selector from the VMCS address is used to access the VMCS. At decision point 4510, the content of the VMCS is verified (which can include the version number and secret value shared between the agent and the processor and stored in the agent VMCS, as described above with reference to Figure 37 "VMEnter agent" box 3750) To determine if the VMCS was correctly decrypted. If the VMCS is not decrypted correctly, control passes to block 4515, where an error is returned to the host VMM that issued the command to execute the VMPTRLD instruction. If the VMCS decryption is correct, control passes to block 4520, which optionally caches the VMCS. At block 4525, an optional VMRead instruction is executed by the host VMM to verify the contents of the VMCS.From block 4520, control proceeds to block 4530, where the VMEnter instruction is executed. Instantiate (usually start or restart) the virtual machine using the key domain identifier / address selector from the address in the VMPTRLD instruction to access the VMCS for entering the virtual machine (if the VMCS has not been cached in box E20) . 
If the VMCS has been cached at block 4520, then block 4530 can use the cached VMCS to instantiate the virtual machine.At decision point 4535, it is determined whether the VMCS is corrupted or invalid, because the processor will perform other consistency checks on the VMCS when trying to restore the client's processor state. If the VMCS is corrupted or invalid, control passes to block 4540 where an error is returned to the host VMM. If the VMCS is not corrupted or invalid, control proceeds to block 4545. At block 4545, the dump clears the CPU pipeline. Given a key domain identifier / address selector, set the address space identifier tag for the transaction lookaside buffer (TLB), or dump the TLB. The CPU registers are set to the VMCS client state. In addition, if there is a Virtualization Exception (VE) information page indicated by its physical address in VMCS, and VMCS indicates that additional client status is currently stored there, the rest of the processor registers will be restored from the VE Info page. The recovery processor register includes adding the current key domain identifier / address selector to the VE Info page address. The extended page table pointer (EPTE) is the physical address used for the EPT table library. Because the EPT table is inside the key domain of the guest VM, the address should also include the key domain identifier / address selector so that the EPT structure is properly decrypted.Control proceeds from block 4545 to block 4550 where a branch is performed at the client state instruction pointer (IP). Execution continues until the exit condition of the currently executing virtual machine is exited, and control proceeds to block 4555. As with any VMExit, the guest processor state is stored into the current VMCS. If a VEInfo page exists and the exit is not due to VECall, the remainder of the client's processor register state can be saved to the VEInfo page and then these registers are cleared. From block 4555, processing proceeds to block 4560. At block 4560, the dump clears the CPU pipeline. Set the address space identifier label for VMXRoot, or dump the TLB. In the embodiment where the host VMM has its own key domain identifier / address selector, the current key domain is set as the host VMM key domain identifier / address selector. In other embodiments where the host VMM is in unencrypted memory, the k bits are set to "off" and the key domain identifier / address selector address bits are cleared. The CPU registers are set to the cached VMCS host state. Control proceeds from block 4560 to block 4565, where a branch is performed at the host state instruction pointer (IP).In FIG. 45, the EPT is kept under the control of the host VMM (as normal). To prevent memory remapping attacks, some embodiments may add a new table (called a reverse mapping table, not shown), which is accessed by the processor (page miss handler (PMH)) during page traversal. After the address mapping is determined through page traversal, the obtained physical address is first checked against the entry in the reverse mapping table (RMT) of the obtained physical address index. This entry includes the physical address expected by the consumer (and / or the consumer's agent) and its associated client physical address (and the permitted bits and / or k bits). If PMH determines that the customer physical address (GPA) or grant does not match those determined by the completed page traversal, the CPU will exit (execute the VMExit instruction indicating an error). 
The RMT entry is accessed by the processor / PMH using the key domain ID belonging to the guest VM being executed. Thus, each RMT table entry must be XTS encrypted using the consumer's key, so that only the consumer or consumer's agent can generate these RMTs when they are applied to the consumer's key domain.Referring now to FIG. 46, a process for updating a customer's VM image for a consumer is shown. This process begins after the consumer's initial code image 4631 has been established to run as a guest VM 46301 in the key domain 4650. The client VM 46301 can only access extended page tables provided by the consumer as part of the encrypted client control structure ( EPT). To enable consumer customer VM 46301 to provide additional functionality, additional code can be added to customer VM 46301 code image 4631. However, because the customer VM 46301 can only access that portion of the memory in the key domain 4650, the cloud service provider's host VMM 4622 and the customer agent 46302 participate in updating the code image 4631 of the customer VM.As mentioned above, the consumer's encrypted client code image may include code for launching a second client (client agent). Once the client agent 46302 runs as a client agent virtual machine within the consumer's encryption key domain 4650, the client agent 46302 can perform several tasks on behalf of the host VMM 4622. The client agent 46302 performs a task on behalf of the host VMM 4622 on request, as long as the task does not harm the consumer's customer VM 46301. The host VMM 4622 can verify that the tasks performed by the agent 46302 have been performed as requested. Therefore, the host VMM 4622 does not need to provide code mirroring for the agent 46302 or it does not need to trust the agent 46302.As an example of the tasks that can be performed by the agent 46302 on behalf of the host VMM 4622, the agent 46302 can create other guest virtual machines within the key domain 4650, request more storage from the host VMM 4622, and from consumers or third parties authorized by the consumer Move memory pages to consumer customer VM 46301 workload. For example, the agent 46302 can securely pass the consumer encrypted image 4604 of the customer VM code to the customer's customer VM 46301 securely. The encrypted remainder 4604 of the VM code image may be passed first via a secure connection (such as a Transport Layer Security / Secure Socket Layer (TLS / SSL) session) established between the consumer and the client agent 46302. The host VMM 4622 copies the encrypted packet and buffer including the remainder of the VM code image 4604 to the shared memory 4612U for retrieval by the client agent 46302. The client agent 46302 terminates the TLS / SSL session by decrypting the data included in the encrypted packets and buffers.Operating within the key domain 4650, the client agent 46302 has access to a storage encryption key domain key used by the consumer (because in FIG. 41 the consumer uses the key domain key to create the initial client VM image). Using the key domain key, the agent 46302 can XTS encrypt additional data / code / structure alone (fine-tuned with appropriate memory address). The client agent 46302 can then provide the resulting ciphertext to the host VMM 4622 (eg, via the shared memory channel 4612U, as shown by the "write to memory" arrow from the slave agent 46302 through the shared memory 4612U). 
The host VMM 4622 can then install the ciphertext (the host VMM 4622 cannot decrypt it because the host VMM 4622 does not have a key domain key) in the appropriate memory location (at the address they are encrypted for) because only the host VMM 4622 Access to all memory, including unencrypted shared memory 4612U.For example, in order to change the memory location that the client agent 46302 can access, the host VMM 4622 can request the client agent 46302 to generate appropriate ciphertext for the new EPT data and address fine-tuning (using the consumer's memory encryption key domain key) (and if Integrity is expected, then an integrity check value (ICV) is calculated). The host VMM 4622 will then copy the ciphertext to the memory on behalf of the client agent 46302, as only the host VMM 4622 has the ability to address all memory. Once the updated agent EPT is properly installed, the client agent 46302 will have direct access to these new storage locations.Follow a similar process to restrict the memory locations that the client agent 46302 can access because the host VMM4622 can request the client agent 46302 to create a ciphertext for the selected EPT structure to specify the selected EPT structure as non-existent (no permission or no mapping ). In all cases, once the ciphertext is installed in the correct memory location, the host VMM 4622 can use the HashKD command described above to verify the content of the ciphertext.The cloud service provider's host VMM 4622 controls the system, storage, and other resources, but the client agent 46302 controls the generation of data approved by the client agent 46302 because only the client agent 46302 has a storage encryption key domain key and can encrypt the approved encryption The data is passed to the host VMM 4622.For systems with System Management Mode (SMM) or similar privileged mode enabled, the SMM should not have access to the consumer's key or key domain identifier / address selector. Similarly, when a system management interrupt (SMI) occurs during the execution of a consumer's guest VM, the guest processor register state should be saved to a memory location that is not accessible to the SMM and cleared.Figure 47 depicts another process by which a consumer adds pages to a consumer's customer VM workload. In the example shown, packets for the remainder of the customer's client VM image are sent by the consumer directly to the host VMM 4722 (via a secure communication session between the consumer and the client agent, as described above with reference to FIG. 46 of). In response, the host VMM4722 writes the data of the rest of the customer VM image for the consumer to the memory 4712 via the memory encryption engine TMEi 4715.The host VMM 4722 issues a command to start the client agent on the CPU 4711, and the CPU 4711 starts executing the client agent code image provided by the consumer as part of providing the encrypted client VM code image of the client agent 4703. The host VMM 4722 sends the rest of the customer's guest VM image through the shared (encrypted bits (k bits) off) portion of the memory 4712. The running agent 4703 reads data from the address of the key domain identifier / address selector with the highest unused bit set to the shared (k-bit off) key domain by requesting the memory encryption engine TMEi 4715, The shared portion (unencrypted, where k bits are off) of the memory 4712 of the host VMM 4722 reads data for the rest of the customer's guest VM image. 
Since the proxy 4703 is the endpoint for a secure communication session with the consumer, the proxy 4703 decrypts (eg, using the OpenSSL software library) network packets into a shared memory area. The running agent 4703 copies the resulting decrypted data to encrypted memory (encrypted with the consumer's key domain key, where k bits are on). The top unused bit is set to the secret for the consumer. The key domain identifier / address selector address of the key domain. During the first write to a new memory address, the MOVNT instruction can be used to perform a write combination operation that writes to the new memory address in memory 4712 without first reading the contents of the new memory address to be written. The memory encryption engine TMEi 4715 then writes the encrypted data for the rest of the customer's guest VM image to the memory 4712 along with the integrity check value (ICV).The agent 4703 processes the data (eg, decrypts the data with software, performs an integrity check, etc.). The consumer forms a secure communication session (for example, using a TLS / Secure Socket Layer session from the consumer using the consumer's encryption key (key domain key) to the proxy-mirrored TLS stack), and the packet passes through the controller Send via shared memory (unencrypted, where k bits are off).The embodiments described above describe an environment in which consumers can trust their secrets and data to be as secure in a public cloud as in a private cloud environment. The consumer (or consumer's trusted intermediary) can provide an encrypted guest virtual machine (VM) image that can be installed in a protected area of the memory (called a key domain) where the memory pages are provided by the consumer Key domain key encryption. The processor of the cloud service provider server can use the key domain key provided by the consumer as the encryption key domain key to the host VMM to decrypt the consumer's encrypted client virtual machine image. Consumer customer VMs can be verified by the cloud service provider's software / host virtual machine monitor (VMM) without the host VMM being exposed to the encryption key domain key or the content of the encrypted guest virtual machine image and the secret. Consumer's encrypted guest virtual machine image can be decrypted by the processor using a control structure provided by the consumer (s) (which is also provided to the host VMM encrypted with the consumer's key domain key) Device status information.The control structure information encrypted with the consumer's key domain key may include a memory-mapped structure (Extended Page Table (EPT)).The processor executing the consumer's encrypted guest virtual machine image may encounter situations where it is necessary to exit from the guest VM to the host VMM. When the guest VM exits, the host processor automatically switches the current key domain back to the key domain of the host VMM, or back to the shared key domain shared between the host VMM and each guest virtual machine managed by the host VM . 
When the guest VM exits, the customer-controlled structure provided by the consumer specifies a protected memory location, where the host processor can automatically store and clear the processor registers when exiting to the main VMM.In one embodiment, the guest VM code image includes interrupt handler code to intercept the interrupt and convert the guest VM exit into an exception, where the guest VM can save processor register information to protected memory and clear or conditionally Ground to expose the processor registers required by the host VMM. When the guest VM has finished preparing to save the processor state of the guest VM, the guest VM can call the host VMM.In one embodiment, the encrypted customer VM image and customer control provided by the consumer can be dynamically updated or modified by sending an encrypted image update or encrypted control structure update (incremental) that is installed and verified by the host VMM in the host's memory. structure. The encrypted image update may serve as a replacement for the encrypted code image of the guest VM, or the encrypted image update may be an "incremental" image used to modify the encrypted code image of the guest VM.Dynamic updates to the encrypted guest control structure can be initiated by the host VMM or the guest VM. For example, the host VMM can determine that the client code image should be moved to a different memory location. Moving the customer code image to a different memory location affects the extended page table (EPT) of the customer control structure. The guest VM can verify changes to the client control structure and provide the host VMM with an updated encrypted client control structure. The host VMM can copy the updated encrypted customer control structure provided by the customer to the appropriate storage location.As an example of a dynamic update of an encrypted guest control structure initiated by a guest VM, the guest VM may request more storage from the host VMM. In response to the request, the host VMM can determine that the extended page table (EPT) of the customer control structure must be modified. The guest VM can verify the allocation of additional memory and the resulting changes to the client control structure, and provide the host VMM with an updated encrypted client control structure. The host VMM can copy the updated encrypted client control structure to the appropriate memory location.For example, using the physical address bits of a memory location to indicate whether the memory location is to be shared, a shared communication channel can be established between the protected guest VM and the host VMM. In one embodiment, the bit may be turned off to indicate that the memory location is to be shared, and the bit may be turned on to indicate that the memory location is to be protected (encrypted).Requests to change the customer-controlled structure or customer VM image provided by the consumer can be sent through the shared communication channel between the host VMM and the protected customer VM. A protected guest VM can verify that such a request does not compromise the security of the guest VM, and the guest VM can use a key domain key to generate a ciphertext for the requested change. The guest VM can then provide the host VMM with a ciphertext that implements the requested change. The host VMM can install the ciphertext provided by the guest VM into the memory and verify that the guest VM has correctly completed the requested change (for example, using a HashKD) instruction. 
Once the ciphertext is verified, the host VMM can then execute the modified guest VM image.In one embodiment, the consumer-provided encrypted guest VM image may include code that implements a second guest VM (agent) that has access to the memory (key domain) of the consumer-provided guest VM. This agent is provided by the consumer to enable the host VMM to request the agent to perform tasks on behalf of the host VMM. The host VMM can pass the request to the agent through the shared communication channel. The host VMM can request the agent to modify the memory content or control structure of the second guest VM (or the third guest VM, etc.) on behalf of the host VMM. The proxy can verify that the host VMM request to change the guest VM's storage does not compromise the consumer's security policy. The host VMM can verify that changes to the memory were made correctly by the agent (eg, via HashKD instructions), and then execute the modified guest VM.In addition, the agent can perform memory paging operations when requested by the host VMM. For example, the agent can "page-out" a page of protected memory (typically including 4KB of data), using a second key for offline storage to encrypt the content of the protected memory. In this page-out scheme, the agent provides the encrypted page to the host VMM via a shared memory channel. The agent can also perform a "page-in" operation on the request from the host VMM, use the second key to decrypt the memory content provided by the host VMM via the shared memory channel, verify the content, and install the decrypted content to the host Protected memory.In each of the above embodiments, the cryptographically encrypted memory protection provided to the key domain may optionally include integrity verification, where an integrity check value (ICV) can be used to verify the contents of the memory location.Building on and / or providing alternative methods to other embodiments disclosed in this specification, embodiments of the present invention may include extending existing instruction set architectures and / or reusing existing virtualization technologies such as EPT and multi-key storage Integrity Technology (MKTME, MKTME) to reduce the complexity of protecting VMs from attacks by other VMs, VMMs, system administrators, or physical means Compare other methods with other changes). Embodiments may include two new instructions (referred to as VMPageIn and VMPageOut in this specification) to allow the CPU to control paging with memory integrity (providing secure paging) and to address client-to-host physical address translation issues with memory integrity . Embodiments can provide a stateless method that eliminates CPU data structure maintenance and complexity, while providing security for tenant VMs. An embodiment may not have a new model, no measurement of the image of a cloud service provider (CSP) or its client, while providing protection against sabotage, replay, and remapping attacks.Embodiments of the present invention may include replacing the HPA with MAC in the existing VT control structure and extended page table, allowing the client (software tool or service) to create a transportable secure VM image that encodes the client's full security policy and executes Secure key exchange, and use ISA to verify these MACs against the actual page content (and the original GPA) to restore the memory address where the VM image is logged in. 
Transforming VM images in-place alleviates the need for VMM to allocate additional storage for copying VM images from one location in memory to another location.In an embodiment, such as shown in FIG. 48, a VMM (e.g., VMM 4822) may not have direct access to the encrypted and integrity protected memory of a secure VM (e.g., VM 4830). The VM's private memory encryption key also protects the VMCS and EPT structures (eg, VMCS / EPT 4840) that govern the behavior of the VM and is also not directly accessible by the VMM. However, the above embodiment specifies that the VMM uses a CPU-controlled mechanism to restrict access to these structures through hardware. In an embodiment, the VMM may use VMRead and VMWrite instructions to access portions of the VMCS. For example, the VMM can use the VMRead and VMWrite instructions to access the host area of VMCS, but not the guest area of VMCS, and it can use them only for reads and not for write access to EPTP fields, and so on. In other words, the VMM can use VMRead to query the VMCS provided by the client (and thus, the VMM does not need to measure the client image), but is limited by the VMCS field it can VMWrite. In these and / or other embodiments of the invention, two new instructions (VMPageIn and VMPageOut described below) can be used to extend CPU controlled access to provide limited access to EPT from the VMM.According to the above embodiments and / or according to the embodiments described below or further described, the initialization of the key domain may include loading at least one VMCS structure and one EPTP root into the memory. To provide initialization of the key domain, embodiments may include instructions such as the CreateKD instruction described above and / or further described below. The CreateKD instruction can take the following inputs: (1) The client / consumer (by the client / consumer encrypted with the server's public key (for example, the server's RSA public key, where the corresponding RSA private key of the server is inaccessible / unknown by the CSP or VMM) The owner of the secure VM provided by the CSP); (2) the KeyID (specified by the VMM) of the client / consumer key to be used to reference the secure VM; (3) a pointer to the client / consumer created and provided to the CSP The VMM's initial VMCS physical address pointer (host physical address or HPA); (4) a physical address pointer to the root EPT structure (host physical address or HPA); and (5) a client for a given provided client / Consumer key without the message authentication code (MAC) used to verify the integrity of the VMCS and EPT pages. Therefore, the CreateKD instruction can be said to have the following format: CreateKD ([in] RSAEncryptedKey, [in] KeyID, [in] VMCS_HPA, [in] EPTP, [in] MAC).To execute or otherwise respond to a single instruction, such as a CreateKD instruction, a processor (e.g., processor 411 of FIG. 4) or a processor core (e.g., core 416 or 418 of FIG. 4), including or in conjunction with cryptographic hardware (e.g., (MEE415 of FIG. 4), which can execute the method embodiment of the present invention, for example, as illustrated in FIG. 49A. Other such method embodiments of the present invention may include any portion or portions shown in FIG. 49A (whether or not FIG. 49A indicates that a portion may be optional) and / or FIG. 49A in various orders. The section or sections shown. Method embodiments may include (eg, in 4900) receiving, decoding, or otherwise identifying a CreateKD instruction. 
Method embodiments may include (e.g., in 4902) using the server's private key (e.g., RSA private key) to decrypt the encrypted client key. Method embodiments may include (e.g., in 4902) a decryption key domain configuration policy. Method embodiments may include (e.g., in 4904) determining whether the decrypted client key and / or configuration policy is valid. If not (e.g., in 4906), the method embodiment may return an error. If the decrypted client key and / or configuration policy is determined to be valid (e.g., in 4904), then (e.g., in 4906) may begin to initialize a key domain with a KeyID that prevents the processor from being used, and may Includes a dump to clear the cache of the old KeyID, a dump to clear the old KeyID mapping / ASID processor TLB, a KeyID dump to clear the VMCS cache, and a new secret key (e.g., using the PCONFIG instruction or with it) To the memory encryption and integrity engine for that KeyID (eg, the memory encryption engine 415 of Figure 4 or another MKTME / MKTMEi engine). Method embodiments may include (e.g., in 4910) loading the referenced VMCS and EPT pages into protected memory (using KeyID), and setting the EPTP in the VMCS as the HPA of the EPT root page. Method embodiments may include (eg, in 4912) using a secret client key and MAC to check the integrity of the VMCS page and the EPT root page. Note that in various method embodiments, parts of the method shown in FIG. 49A may be included in a different order, for example, HPA that sets EPTP in VMCS as the EPT root page may be executed after the EPT root page has been verified . If (for example, in 4912) it is determined that the VMCS page and the EPT root page are correct, the processor can proceed to writing / storing the two pages (direct write / non-temporary write or write to memory) using the new KeyID No memory read for ownership is performed) MKTME encrypted and integrity protected memory area, otherwise (eg, in 4914) aborts the instruction and rewrites the page. If the MAC is correct, the VMCS page and the EPT root page are now the only two pages loaded into the secret crypto-protected memory of the VM used for KeyID, and (for example, in 4916) can be the new KeyID Assign an ASID tag.Other embodiments may have instructions (eg, VMAddKD instruction) that individually add one or more VMCS, specifying the KeyID, VMCS HPA, and MAC of the client for the VMCS to be added. If the MAC matches the VMCS content specified by the client, VMAddKD can add VMCS to the private KeyID. The CPU can then maintain a private structure that maintains several VMCSs installed for specific keys. Note that the client's VMCS may also include MACs for certain VMCS fields that would normally include HPA, so that before HPA can be assigned to this field, the content of the page must match the MAC value.FIG. 49B illustrates a method of entering and exiting a virtual machine in a key domain according to an embodiment of the present invention. Other such method embodiments of the present invention may include any portion or portions shown in FIG. 49B (whether or not FIG. 49B indicates that a portion may be optional) and / or FIG. 49B in various orders. The section or sections shown. Method embodiments may include (e.g., in 4920) using / executing a VMCS pointer load instruction (e.g., VMPTRLD) to provide access to the VMCS, where the instruction specifies an address that includes a KeyID (e.g., a portion 5020 that is specified as a physical address 5000) , As shown in Figure 50). 
Note that only processors can select KeyIDs created by CreateKD; these KeyIDs are not accessible to VMM / software and cannot be mapped via page tables or extended page tables. The processor may switch to the KeyID based on the current VMCS position specified by the VMPTRLD (eg, VMLaunch), VMRead or VMWrite and / or VMPageIn or VMPageOut instructions on the VM entry. VMPTRLD can maintain a lock on the VMCS structure to ensure that no other threads / cores can load the same VMCS at the same time.A method embodiment may include (e.g., in 4922) determining whether the VMCS is properly decrypted (e.g., using the correct KeyID; if not (e.g., 4924), an error may be returned to the VMM; if so, then VMCS may be cached (e.g., in 4926), and / or the VMM may use VMRead instructions to verify the contents of the VMCS (e.g., in 4928). Method embodiments may include (e.g., in 4930) using / execute with The VMEnter instruction of the KeyID of the VMPTRLD address is used to access the VMCS. Method embodiments may include (e.g., in 4932) determining if the VMCS is corrupted / invalid; if it is (e.g., in 4934), return an error to the VMM; if not ( For example, in 4936), the dump clears the processor pipeline, sets the TLB ASID tag (or dump clears the TLB) for the KeyID, sets the current KeyID to the KeyID from the VMPTRLD address, and sets the processor register to the VMCS client state Method embodiments may include (e.g., in 4938) branching to execute the next instruction specified by the instruction pointer of the client state.Method embodiments may include initiating an exit from the virtual machine based on detecting a VM exit condition or event (eg, in 4940). Method embodiments may include (eg, in 4942) saving the client state and clearing it from the processor register. A method embodiment may include (e.g., in 4944) dumping a processor pipeline, setting a TLBASID tag (or dumping a TLB) for the root, setting the current KeyID to the KeyID for the VMM, and setting the processor register to VMCS host status. Method embodiments may include (e.g., in 4946) branching to execute the next instruction specified by the instruction pointer of the host state.Embodiments of the present invention may include using / executing a VMClear instruction to return a cached / loaded / current VMCS to an unlocked state and return to the VMCS that was loaded from it using a KeyID (the VMCS is loaded using the KeyID). Memory location (as specified in the VMCS address provided to VMPTRLD). VMClear can be extended to maintain the state of VMCS, whether it is current / loaded and locked or cleared and unlocked. VMClear can also dump the TLB to clear the KeyID used to load the VMCS, as well as the PXE cache and any other residual state that may have been maintained for the loaded / current / cached VMCS. In this way, the cleared VMCS can be released (via VMClear) via a new instruction (such as VMFreeKD). Executing the VMFreeKD instruction that specifies the HPA and KeyID of the VMCS can return the MAC of the VMCS in the memory and the VMCS encrypted with the client's key to the VMM, so that the VMCS itself can be safely paged out of the memory, allowing the CPU to track the VMCS count in the structure (Inverse of VMAddKD) Decreasing. 
When all (one or more) VMCSs are released for KeyID, all CPU caches on all packages may be invalidated by anything cached with KeyID, and then KeyID can be reassigned (CreateKD again) ).According to an embodiment of the present invention, the VMCS may include a field including a secret identifier that identifies the data structure as a VMCS added by the CPU (the secret value is only known to the CPU). The EPT root may include several page table entries (EPTE) marked as non-existent (or use a new bit to indicate that the entry includes a MAC value instead of HPA), and each page table entry includes a MAC related to the referenced page (instead of HPA ). These MACs can be created by the client using the client's key in a secure MAC function (eg, SHA3 KMAC). For example: MAC = SHA3 (Key, GPAStart, GPAEnd, PageContent), where Key is the secret key, GPAStart is the first client physical address associated with the GPA range covered by the EPT entry, and GPAEnd is the last one corresponding to the range GPA. Each EPT entry can cover the address range, starting from the root EPT structure to cover the entire customer physical address space. The actual data / VM mirror page can have a 4KB range (page size), or a large page (eg, 2MB or larger) can be specified, where the full page content of the GPA range is calculated using MAC. Non-existent pages can be indicated as such in EPTE with invalid MAC values.Figure 51 illustrates how a secure virtual machine image for a client can be built using the VMPageIn instruction (e.g., by VMM) and how to page out using the VMPageOut instruction (e.g., by VMM) according to an embodiment of the invention described below. .In an embodiment of the invention, the CPU allows the VMM to page in and out of EPT pages from the secure VM. First, the VMM will make CreateKD VMCS current via the VMPTRLD instruction that specifies the VM's KeyID in the input physical address. The VMM can then issue a VMRead that accesses the VMCS. In an embodiment of the present invention, the VMRead concept can be extended to read the EPT referenced by the EPTP in the VMCS. This can be done by traversing the EPT from the GPA to the HPA. Therefore, the processor will allow the VMM to read the VM's EPT VMRead page by specifying the GPA as a parameter. The processor will then traverse the VMTP's EPTP for the GPA to get the HPA. It will then use the VM's KeyID to read the actual page plaintext.Embodiments of the present invention provide a single instruction for the VMM to execute to change the EPTE from a MAC value to an HPA value, and from a memory page to a content page. For example, the VMPageIn instruction may have the following format: VMPageIn ([in] GPAStart, [in] GPAEnd, [in] HPAofPage, [in] Permissions).To execute or otherwise respond to this single instruction, a processor (e.g., processor 411 of FIG. 4) or a processor core (e.g., core 416 or 418 of FIG. 4), including or in conjunction with cryptographic hardware (e.g., FIG. 4) MEE 415), which can execute the method embodiment of the present invention, for example, as illustrated in FIG. 52A. Other such method embodiments of the present invention may include any portion or portions shown in FIG. 52A and / or portions or portions not shown in FIG. 52A in various orders. Method embodiments may include (eg, in 5200) receiving, decoding, or otherwise identifying a VMPageIn instruction.In an embodiment, the VMPageIn instruction may be used starting from EPTP in the current (VMPTRLD) VMCS. 
Even if the VMM does not have write access to the protected VMCS, the processor can use the VM's private KeyID (MKTMEi key) to access the EPTP field and access the EPT root. Given a GPA range, the processor can (e.g., in 5206) navigate the extended page table until it finds an EPTE leaf for the GPA range (or (e.g., in 5210) if any intermediate EPTE does not exist or is destroyed, Then report an error). It can (for example, in 5214) use the MAC in the EPTE to verify the contents of the HPA page being paged in. It can (e.g., in 5212) read (load) memory contents using a shared KeyID, decrypt them using the client's CreateKD key (e.g., AES-XTS using GPA as fine-tuning encryption), and then use the VM's private KeyID ( (KeyID specified in the address used for VMPTRLD) writes (stores) the content back to the memory at the same address. Thus, the page is appropriately transformed from the client key to the VM's private MKTME key. If the HPA page of the VM has been mapped to a different GPA of the VM using the MKTMEi key of the VM, the memory integrity check will fail when read (loaded) using the shared KeyID, thereby indicating to the processor a sharing error (memory remapping attack ), And the page cannot be loaded (for example, in 5216, the page content is overwritten / cleared by writing a default value using a shared key). Similarly, if the MAC mismatch in the EPTE indicates a GPA or content modification attempt (attack) and the page cannot be loaded (eg, in 5216, the shared key is used to overwrite / zero the page with default content) the content of the page-in. If the MAC matches the content of the incoming page, then (for example, in 5218), the EPTE will be updated with the HPA of the loaded page, and the permissions (and memory type, etc.) will be set as specified in the VMPageIn instruction in the EPTE . If HPA was previously mapped to the same KeyID, all concurrent processors should clear their TLBs for the same KeyID dump and wait for the VMPageIn instruction to complete successfully if they execute the VM using the same KeyID. In some embodiments, the processor may use the HPA tracker structure to determine whether HPA has been used for a specific key, and only allow page-in operations for HPA that are not currently used by the same client key. Only the released (sheet-out) HPA can then be reused.Embodiments of the invention provide the VMM with a single instruction to execute to page out a client page from memory. For example, the VMPageOut instruction may have the following format: VMPageOut ([in] GPAStart, [in] GPAEnd, [out] Permissions).To execute or otherwise respond to this single instruction, a processor (e.g., processor 411 of FIG. 4) or a processor core (e.g., core 416 or 418 of FIG. 4), including or in conjunction with cryptographic hardware (e.g., FIG. 4) MEE 415), which can execute the method embodiment of the present invention, for example, as illustrated in FIG. 52B. Other such method embodiments of the present invention may include any portion or portions shown in FIG. 52B and / or portions or portions not shown in FIG. 52B in various orders. Method embodiments may include (eg, in 5240) receiving, decoding, or otherwise identifying a VMPageOut instruction.In an embodiment, the VMPageOut instruction can be used starting from EPTP in the current (VMPTRLD) VMCS. Even if the VMM does not have direct access to the MKTME-encrypted VMCS, the processor can use the VM's private KeyID (MKTMEi key) to access the EPTP field and access the EPT root. 
Given a GPA range, the processor can (e.g., in 5246) navigate the extended page table until it finds an EPTE leaf for the GPA range (or (e.g., in 5250) if any intermediate EPTE does not exist or is destroyed, Then report an error). It can (for example, in 5252) read (load) pages using the VM's private KeyID, calculate the associated MAC for a given content and GPA, and re-encrypt the content using the client's CreateKD key, and write them back using the shared KeyID The same memory page (for example, using AES-XTS with a GPA range for fine-tuning). The MAC for the page-out page and its GPA calculation can be stored in the associated EPTE. When the instruction completes successfully (e.g., in 5254) the original permission is provided in the output register (or memory location), the EPTE can be set Is not present. In embodiments where the CPU maintains the HPA tracker, the HPA may be recorded as being released for the client key, allowing it to be reused for subsequent VMPageIn operations.To restore such a page to a new HPA later, the VMM can VMPageIn a page, specifying the HPA of the page and the expected HPA (EPT leaf node), as well as permission for the encrypted page being restored for the associated MAC. The processor can use the VM's secret paging key (the client's key specified in CreateKD) to decrypt the page and check if the MAC matches the GPA and the content of the page. The processor can update the EPA's HPA and permissions for a given GPA, and use the VM's private KeyID (direct write, non-temporary write, or write for ownership) to write the decrypted page content to the HPA location. This will allow the updated memory integrity information to be restored. In some embodiments, the EPTE has a MAC that references pages that have been paged in. If VMPageIn specifies the HPA that has been loaded and the MAC matches the page content of the page in, the EPTE can be updated with the HPA of the page content that matches the reference MAC. In this way, the EPT structure can refer to already paged pages of other EPT structures, or form other directed graph structures that maintain the client's specified security policy. The HPA tracker may have a reference count to track how many references will exist to the same HPA. Similarly, when a reference is paged out via VMPageOut, the reference count in the HPA tracker will be decremented for the paged out HPA.For proper versioning, it can be assumed that once loaded, the EPT table is fixed in memory. Whenever a page is paged out by VMRead, the EPTE leaf is marked as non-existent, and the MAC (note that the MAC can be truncated to fit) value for the VMPageOut page is stored in EPTE, replacing HPA because HPA is no longer effective. Then, when the VMM uses VMPageIn to page back into memory, the processor can check the EPTE for the specified GPA and verify that the stored MAC matches the recovered page content and GPA. Thus, playback is impossible. Once all the leaf EPTE entries of the EPT page are paged out and set to non-existent, the EPT page itself can be paged out, and its MAC is stored up to the parent EPTE, replacing the HPA of the EPT page with the calculated MAC value. In this way, all EPT structures can be summarized into the root EPT, and finally, the root EPT itself can be represented as the MAC value stored in the VMCS EPTP, and the HPA for the EPT root is replaced with its MAC value. 
Thus, the entire EPT hierarchy representing the correct full state of the VM at any particular point in time can be paged in and out.To ensure that the HPA page has not been mapped, the processor can read it from memory using the shared KeyID. If the page has been mapped as non-shared (for example, stored with the VM's private key), the shared key with integrity will report an error. Therefore, before reassigning a page, the page needs to first be written with a shared KeyID or special key. When paging in memory, the entire page should be checked against the MAC with the VM private key. Read the shared cache line, and if there are no errors, write the cache line with VM data to the VM private key. When calculating the MAC, repeat for the entire page. Other embodiments can simply prevent HPA reuse by requiring only incremental or decrementing HPA, where the processor tracks the highest and lowest HPA assigned to the key and ensures that no HPA can be paged between the current lowest and highest HPA Into. Other embodiments may have an HPA tracker structure that maintains whether HPA is currently assigned to the key domain, and may include a reference count of how many structures reference the HPA from within the key domain.The above method will also work for any level of the paged EPT tree, allowing the EPT tree to be paged in and out. Instead of the leaf GPA of the content page, the path (GPA range) obtained through the tree can be MAC-encoded, or each intermediate level can be treated as a GPA range and used to calculate the MAC for the EPT page referenced by the parent EPTE. Then, the content of the paged in EPT page can be compared with the MAC of the parent EPTE to verify that the child EPT page is correct. Then, the MAC of the parent EPTE can be replaced with the HPA of the child EPT page, and the permission of the parent EPTE can be set.The EPT root page can also be paged out (for example, by specifying the entire GPA range and saving the MAC into the GPR), but it should be re-created with a new CreateKD. In some embodiments, the root EPT cannot be paged in because there is no parent EPT with a MAC to verify it against. Other embodiments may allow the RootEPT MAC to be paged out and stored in the VMCS structure, replace EPTP with the MAC of the root EPT, and specify the entire GPA range as part of the MAC calculation (because the entire GPA range is covered by the EPT root).The client can configure the initial EPT image with the correct MAC, so the VMM can page it into all EPT pages all the way to the EPTP root and VMCS. In some embodiments, if the EPTE is marked as non-existent, the client may allow the VM's memory to be extended; special bits or MAC values (such as zero MAC) may instruct the client to allow the GPA to be filled with zero pages. In this case, the processor can allow page-in operations to the private GPA space by filling the private KeyID page with zeros (or other default values) and then setting the EPTE HPA for this zeroed page. After the page is used by the VM, the page out operation (VMPageOut) can then calculate the correct MAC for the page and store it in the associated EPTE, thus safely expanding the guest VM with a page that is not part of the original encrypted VM image Of memory.Shared pages using shared KeyIDs can also be specified through GPA to HPA mapping. For example, the high memory area of a GPA address can be implicitly used for shared memory. 
This GPA area may always have the physical address appended with a shared KeyID (and not the private KeyID of the VM). Thus, for I / O, virtual devices, etc., the VM can use this area of the GPA space to communicate with the VMM. The VMM can also use the VMPageIn instruction to set HPA for these shared GPAs. Only the MAC will not check against higher GPA address spaces, and the page will not use the client's key for decryption because they are plaintext. Similarly, VMM can use VMPageOut to set the EPT mapping to not exist for the shared KeyID GPA area, but otherwise it will not generate a MAC or encrypt the associated page with the client's secret key, because the shared page is plaintext for both VM and VMM . Other embodiments may allow the VMM to expand the shared memory portion of the EPT tree, reading and writing EPT subentries and leaf entries of the EPT tree directly to the shared memory. That is, the shared part of the EPT tree (GPA range) under the EPT root can be accessed using the shared KeyID, allowing the processor to use the shared KeyID to traverse the EPT subentries.Figure 53 illustrates end-to-end provisioning of a secure VM according to an embodiment of the invention.As explained above in the description of FIG. 49B, when VMExit is performed on the VMM, the processor GPR state will be saved inside the VM's private key domain, and the register state will be cleared. This state may be stored on a VEInfo page as referenced by VMCS, or otherwise accessible to the VM for access on #VE (Virtualization Exception). The VM can always run the client's #VE handler to access the secure state on behalf of the VMM, where the VM can control what processor or memory state is made available to the VMM in order to protect the VM's secrets. For example, a customer's #VE handler can share information with the VMM via a shared memory area or by selecting a shared KeyID from a page table map. The CreateKD instruction may also include the HPA for referencing the VEInfo page of the VMCS and set the VMCS HPA field for the VE Info page (allowing the VMM to determine the HPA for the VE Info page). In other embodiments, the VE Info page can be paged into the customer's GPA space via the VMPageIn instruction, and the modified VMWrite can check the MAC in the VMCS (for example, located in the VE Info page address field) to verify that VMCS references are correct VE Info page, and then if it calculates the correct MAC value, then set the HPA field for the VE Info page. Similarly, the VMRead of the VEInfo page field can set the field to the MAC value representing the current VE Info page, and replace the HPA with the MAC value, return the HPA to the register, and VMRead will do the same. Multiple VMCSs can be added, and different VE Info pages can be specified, allowing interrupts, failures, exceptions, or SMI re-entry processing within the customer.In an embodiment, CreateKD can be extended to specify a list of multiple initial VMCS structures and VMCS shadow structures. If the MAC matches all structures loaded by CreateKD, the processor can link the shadow VMCS to its corresponding VMCS (link pointer). Providing shadows allows VMs to access shadow VMCS using VMRead and VMWrite instructions, and supports nested virtualization. In other embodiments, the client may specify the MAC of the page content for the shadow VMCS link pointer field, and the VMM may specify the HPA of the shadow VMCS in the modified VMWrite, where the processor may verify the content and use of the specified HPA page. 
MAC match in the VMCS field of the shaded VMCS link pointer. If the MAC in the field specified by the client matches the page content, the processor will write the HPA of the shaded page to the VMCS shadow link pointer field. Similarly, VMRead for the VMCS shadow link pointer field can calculate the MAC value for the shadow VMCS, store the MAC in the VMCS shadow link pointer field, and return the HPA of the shadow VMCS.The following paragraphs relate to further embodiments, each of which may be modified to include elements related to the VMPageIn instruction and / or VMPageOut instruction as described above.In Example 1, a device that securely executes consumer workloads in a public cloud environment without exposing consumer data or secrets, including a processor; and memory coupled to the processor; where the processor will execute untrusted A host virtual machine monitor to manage the execution of at least one guest virtual machine by the processor; an untrusted host virtual machine monitor will receive an encrypted key domain key, an encrypted client code image encrypted by the key domain key, and Encrypted client control structure encrypted by a key domain key, the key domain key is not accessible to the untrusted host virtual machine monitor; the untrusted host virtual machine monitor will issue a create command to the processor to create the first key Domain, the first key domain includes an area of memory to be encrypted by the key domain key, and the untrusted host virtual machine monitor further validates the encrypted client control structure; in response to receiving the creation command, the processor will create the first Key domain and decrypt the encrypted key domain key to generate a key domain key; the untrusted host virtual machine monitor will issue a start command to the processor, To start the first guest virtual machine in the first key domain; and in response to receiving the start command, the processor will switch to the first key domain and decrypt the encrypted client control structure to generate a client control including client processor state information Structure, decrypt the encrypted client code image to generate a client code image, and use the client processor state information to perform the client code image in the first key domain.Example 2 includes the device of Example 1, wherein the untrusted host virtual machine monitor will verify the encrypted client control structure by issuing a command to the processor to execute at least one of a hash key domain instruction and a VMRead instruction.Example 3 includes the device of Example 1, wherein in response to an event that triggers an exit condition of the first guest virtual machine, the processor switches from the first key domain to a second key domain.Example 4 includes the device of Example 3, wherein the second key domain is not encrypted; and the second key domain is shared by the untrusted host virtual machine monitor and each guest virtual machine managed by the untrusted host virtual machine monitor Shared area of memory.Example 5 includes the device of Example 3, wherein the second key domain is encrypted by the second key domain key for the untrusted host virtual machine monitor; the second key domain key is for the untrusted host virtual machine monitor and Accessible by each guest virtual machine managed by the untrusted host virtual machine monitor; and the second key domain is shared by the untrusted host virtual machine monitor and each guest virtual machine managed by the untrusted 
host virtual machine monitor Shared area of memory.Example 6 includes the device of Example 3, wherein the customer control structure specifies a protected location of the memory, and the processor can store customer processor state information.Example 7 includes the device of Example 6, wherein the processor is to save the guest processor state information for the first guest virtual machine in a protected location in memory in response to an event that triggers the exit condition of the first guest virtual machine; The host virtual machine monitor will issue a restart command to the processor to restart the first guest virtual machine; and in response to receiving the restart command, the processor will switch to the first key domain from the protected location of the memory Retrieve client processor state information for the first guest virtual machine, and use the client processor state information to perform client code mirroring in the first key domain.Example 8 includes the device of Example 3, wherein the client code image includes interrupt handler code to intercept the interrupt; the processor converts the exit condition of the first guest virtual machine to an exception; the client code image will respond to at least one of an interrupt and an exception Save processor register information to a protected location in memory; if the untrusted host virtual machine monitor does not require the first processor register, the client code image will clear the first processor register; if the untrusted host virtual machine monitors The client code image will conditionally expose the second processor register; the client code image will call the untrusted host virtual machine monitor; and the first guest virtual machine will be in the untrusted host virtual machine Exit when the monitor is called.Example 9 includes the device of Example 3, wherein the untrusted host virtual machine monitor will receive the encrypted updated client control structure, install the encrypted updated client control structure in memory, and verify the encrypted updated client control structure; processing The server will decrypt the encrypted updated client control structure to generate an updated client control structure; in response to verifying the encrypted updated client control structure, the untrusted host virtual machine monitor will issue a processor to use the updated client control structure to enter the first An entry command for a guest virtual machine; and in response to receiving the entry command, the processor will enter the first guest virtual machine using the updated guest control structure.Example 10 includes the device of Example 9, wherein the untrusted host virtual machine monitor will further receive the encrypted updated client code image and install the encrypted updated client code image in memory; the processor will decrypt the encrypted updated client code The image is updated to generate an updated client code image; and the processor will enter the first client virtual machine by executing the updated client code image using the updated client control structure.Example 11 includes the device of Example 3, wherein the untrusted host virtual machine monitor will determine whether a change to the customer control structure is required; the first guest virtual machine verifies that the change to the customer control structure does not compromise the security of the first customer virtual machine; The first guest virtual machine will use the key domain key to generate an updated client 
control structure that incorporates the changed encryption; and the first guest virtual machine will pass the storage of the shared memory by the untrusted host virtual machine monitor and the first guest virtual machine. The shared zone sends an encrypted updated client control structure to the untrusted host virtual machine monitor.Example 12 includes the device of Example 11, wherein the untrusted host virtual machine monitor installs the encrypted updated client control structure in storage; the untrusted host virtual machine monitor verifies the encrypted updated client control structure; the processor will Decrypt the encrypted updated client control structure to generate an updated client control structure; in response to verifying the encrypted updated client control structure, the untrusted host virtual machine monitor will issue a processor to the first client using the updated client control structure The virtual machine's entry command; and in response to receiving the entry command, the processor will enter the first guest virtual machine using the updated guest control structure.Example 13 includes the device of Example 12, wherein the untrusted host virtual machine monitor determines whether a change to the guest control structure is needed in response to a request received from the first guest virtual machine.Example 14 includes the device of Example 13, wherein the request further includes a second change to the client code image; the first guest virtual machine will verify that the second change to the client code image does not compromise the security of the first guest virtual machine; the first The guest virtual machine will use the key domain key to generate an encrypted updated client code image incorporating the second change; and the first guest virtual machine will send the encrypted updated client control image to the untrusted host virtual via the shared area of the memory Machine monitor.Example 15 includes the device of Example 14, wherein the untrusted host virtual machine monitor will receive the encrypted updated client code image; the processor will decrypt the encrypted updated client code image to generate an updated client code image; and the processor will pass Perform an updated customer code image to perform a customer code image.Example 16 includes the device of Example 15, wherein the encrypted client code image includes an agent code image; the encrypted client control structure includes an agent control structure; the untrusted host virtual machine monitor will verify the agent control structure; the untrusted host virtual machine monitor will Issue a second startup command to the processor to start the second guest virtual machine in the first key domain, the second guest virtual machine to provide a proxy to act on behalf of the untrusted host virtual machine monitor in the first key domain; and respond to Receiving the second start command, the processor switches to the first key domain, decrypts the encrypted client code image to generate an agent code image, decrypts the encrypted client control structure to generate an agent control structure including agent processor state information, and Proxy code mirroring is performed in the first key domain using the proxy processor state information.Example 17 includes the device of Example 16, wherein the untrusted host virtual machine monitor will pass a request to modify the client control structure of the first client virtual machine to the agent via a shared area of storage shared by the agent 
and the untrusted host virtual machine monitor. In response to reading the request from the shared area of the memory, the agent will modify the client control structure of the first client virtual machine in the first key domain to generate a modified client control structure of the first client virtual machine; untrusted host virtualization Machine monitor will verify the modified client control structure of the first guest virtual machine; when verifying the modified client control structure, the untrusted host virtual machine monitor will send the processor the first guest virtual machine that entered the first key domain And in response to receiving the entry command, the processor will perform client code mirroring in the first key domain using the second client processor state information from the modified client control structure.Example 18 includes the device of Example 16, wherein the untrusted host virtual machine monitor will pass a request to the agent to retrieve a page from an encrypted storage device, where each page of the encrypted storage device is encrypted by the storage key; the agent will use the storage secret The key decrypts the page to produce a decrypted page; the agent will verify the decrypted page; and if the decrypted page is verified, the agent will install the decrypted page into memory.Example 19 includes the device of Example 18, wherein if the agent can access the location in the memory where the decrypted page is to be installed, the agent will copy the decrypted page to the location of the memory; and if the agent cannot access the memory where the decrypted page is to be installed In the location, then: the agent further uses the key domain key and the location's physical address as fine-tuning to re-encrypt the page to produce a re-encrypted page; and the untrusted host virtual machine monitor will install the re-encrypted page to In memory.Example 20 includes the device of Example 16, wherein the untrusted host virtual machine monitor will pass a request to the agent to move a page from encrypted storage to a storage device, where each page of encrypted storage is encrypted by a key domain key; the agent will The key domain key is used to decrypt the page to produce a decrypted page; the agent will verify the decrypted page; and if the decrypted page is verified, the agent will re-encrypt the decrypted page with the storage device's storage device key to generate storage Device encrypted pages, move the storage device encrypted pages to the storage device, and provide the encrypted pages of the storage device to the untrusted host virtual machine monitor.Example 21 includes the device of Example 1, wherein the untrusted virtual machine monitor will issue a load command to the processor to load the encrypted client control structure into memory, the load command including a pointer to the client control structure from which the encryption is loaded and the first secret A pointer to a physical address in the memory of the key domain identifier of the key domain; and, in response to receiving the load command, the processor will determine the key domain key corresponding to the key domain identifier, where the processor will use the secret Key Domain Key to decrypt the encrypted client control structure.Example 22 includes the device of Example 1, wherein the processor further confirms the integrity of the encrypted client control structure.Example 23 includes a processor for safely executing consumer workloads in a public cloud environment without 
exposing consumer data or secrets, the processor for executing an untrusted host virtual machine monitor to manage Execution of at least one guest virtual machine; creating a first key domain in response to a create command issued by an untrusted host virtual machine monitor, the first key domain including a region of memory to be encrypted by the key domain key , The key domain key is not accessible to the untrusted host virtual machine monitor; decrypts the encrypted key domain key received from the untrusted host virtual machine monitor to generate a key domain key; The startup command issued by the host virtual machine monitor starts the first guest virtual machine in the first key domain, wherein starting the first guest virtual machine includes: switching to the first key domain, and decrypting the received from the untrusted host virtual machine monitor Encrypted client control structure to generate a client control structure that includes processor state information, decrypting the encrypted client generation received from an untrusted host virtual machine monitor To produce a mirror image customer code, and execute the client state information using a processor in a first key code image region.Example 24 includes the processor of Example 23, wherein the processor is further switched from the first key domain to the second key domain in response to an event that triggers an exit condition of the first guest virtual machine.Example 25 includes the processor of Example 24, wherein: the second key domain is not encrypted; and the second key domain is each guest virtual machine managed by the untrusted host virtual machine monitor and by the untrusted host virtual machine monitor Shared area of shared memory.Example 26 includes the processor of Example 24, wherein: the second key domain is encrypted by the second key domain key for the untrusted host virtual machine monitor; and the second key domain key is monitored for the untrusted host virtual machine And each guest virtual machine managed by the untrusted host virtual machine monitor; and the second key domain is each guest virtual machine managed by the untrusted host virtual machine monitor and the untrusted host virtual machine monitor Shared area of machine shared memory.Example 27 includes the processor of example 23, wherein the customer control structure specifies a protected location of the memory, wherein the processor can store customer processor state information.Example 28 includes the processor of Example 27, wherein the processor is further configured to: store the client processor state information for the first guest virtual machine in the memory in response to an event that triggers an exit condition of the first guest virtual machine And in response to receiving a restart command from the untrusted host virtual machine monitor, the processor is further configured to switch to the first key domain and retrieve from the protected location of the memory for the first client virtual Machine's client processor state information, and use the processor state information to perform client code mirroring in the first key domain.Example 29 includes the processor of Example 23, wherein the processor further: converts the exit condition of the first virtual machine to an exception.Example 30 includes the processor of Example 23, wherein the processor is further configured to: decrypt the encrypted updated client control structure to generate an updated client control structure; and in response to receiving an entry command 
to the first client virtual machine, The updated guest control structure enters the first guest virtual machine.Example 31 includes the processor of Example 23, wherein the processor is further configured to: decrypt the encrypted updated client code image update to generate an updated client code image; and execute the updated client code image by using the updated client control structure Come into the first guest virtual machine.Example 32 includes the processor of Example 23, wherein the processor is further configured to: in response to receiving a second startup command to start the second guest virtual machine in the first key domain, the second guest virtual machine is to provide an agent to represent The untrusted host virtual machine monitor action in the first key domain, the processor is used to: switch to the first key domain, decrypt the encrypted client code image to generate a proxy code image, and decrypt the encrypted client control structure to generate a proxy including An agent control structure for processor state information, and performing agent code mirroring in the first key domain using the agent processor state information.Example 33 includes the processor of Example 23, wherein the processor is further for: in response to receiving an entry command to enter the first guest virtual machine using the modified client control structure, using the second client process from the modified client control structure The server status information performs client code mirroring in the first key domain.Example 34 includes the processor of example 23, wherein the processor is further configured to determine a corresponding key domain key for a key domain identifier for the first key domain, wherein the processor is responsive to receiving the encrypted key domain key The client control structure is loaded into the memory with a load command, and the corresponding key domain key is used to decrypt the encrypted client control structure. 
The load command includes a pointer to the client control structure from which the encryption is loaded and a A pointer to a physical address in the key domain identifier's memory.Embodiment 35 includes the processor of Examples 23-34, and further includes a system on chip (SoC) incorporated in the user equipment touch-enabled device.Example 36 is a system including a display, a memory, and a processor of one or more of the examples 23-34 described above.Example 37 includes at least one computer-readable medium including instructions that, when executed by a processor, cause a computer to safely execute a consumer workload in a public cloud environment without exposing the consumer's data or secrets, the computer : Receiving the encrypted key domain key, the encrypted client code image encrypted by the key domain key, and the encrypted client control structure encrypted by the key domain key; issuing a create command to the processor to create the first key domain, The first key domain includes an area of memory to be encrypted by the key domain key; a client control structure that verifies the encryption; and a start command to the processor to start the first guest virtual machine in the first key domain.Example 38 includes the computer-readable medium of example 37, wherein the instructions further cause the computer to verify the encrypted client control structure by issuing a command to the processor to execute at least one of a hash key domain instruction and a VMRead instruction.Example 39 includes the computer-readable medium of example 37, wherein the instructions further cause the computer to issue a restart command to the processor to restart the first guest virtual machine.Example 40 includes the computer-readable medium of Example 37, wherein the instructions further cause the computer to: intercept an interrupt; and save the processor register information in response to at least one of an interrupt and an exception that is caused when the first guest virtual machine causes an exit condition To the protected location of the memory; if the untrusted host virtual machine monitor that manages the execution of the first guest virtual machine does not require the first processor register, clear the first processor register; if the untrusted host virtual machine monitor If the second processor register is needed, the second processor register is conditionally exposed; the untrusted host virtual machine monitor is called; and the first guest virtual machine is exited when the untrusted host virtual machine monitor is called.Example 41 includes the computer-readable medium of Example 37, wherein the instructions further cause the computer to: receive the encrypted updated client control structure, install the encrypted updated client control structure in memory, and verify the encrypted updated client control structure And in response to verifying the encrypted updated client control structure, issuing an entry command to the processor to use the updated client control structure to enter the first guest virtual machine, and the updated client control structure generated by the processor decrypts the encrypted updated client control structure.Example 42 includes the computer-readable medium of claim 41, wherein the instructions further cause the computer to: receive an encrypted updated client code image; and install the encrypted updated client code image in a memory.Example 43 includes the computer-readable medium of claim 37, wherein the instructions further cause the computer to: 
determine whether a change to the client control structure is required; verifying by the first virtual machine that the change to the client control structure did not harm the first client virtual machine Security; the key domain key is used by the first virtual machine to generate a merged, encrypted, and updated updated client control structure; and by the first virtual machine via storage shared by the untrusted host virtual machine monitor and the first guest virtual machine The shared area sends the encrypted updated customer control structure to the untrusted host virtual machine monitor.Example 44 includes the computer-readable medium of claim 43, wherein the instructions further cause the computer to: install the encrypted updated client control structure in memory; verify the encrypted updated client control structure; and respond to verifying the encrypted updated client control structure. The client control structure sends an entry command to the processor to use the updated client control structure to enter the first guest virtual machine, and the updated client control structure generated by the processor decrypts the encrypted updated client control structure.Example 45 includes the computer-readable medium of claim 44, wherein the instructions further cause the computer to determine whether a change to the customer control structure is required in response to a request received from the first customer virtual machine.Example 46 includes the computer-readable medium of claim 45, wherein the instructions further cause the computer to verify, by the first virtual machine, that the second change to the client code image included in the request does not compromise the security of the first client virtual machine; The key domain key is used by the first guest virtual machine to generate an encrypted updated client code image incorporating the second change; and the encrypted updated client control image is sent by the first guest virtual machine to the untrusted via the shared area of the memory Host virtual machine monitor.Example 47 includes the computer-readable medium of claim 46, wherein the instructions further cause the computer to: receive the encrypted updated client code image, wherein executing the client code image includes performing an update generated by the processor to decrypt the encrypted updated client code image Customer code mirroring.Example 48 includes the computer-readable medium of claim 37, wherein the instructions further cause the computer to: verify the agent control structure included in the encrypted client control structure; and issue to the processor to use the agent control structure to initiate the first key domain. A second startup command for the second guest virtual machine. 
The second guest virtual machine provides a proxy to act on behalf of the untrusted host virtual machine monitor in the first key domain.Example 49 includes the computer-readable medium of claim 48, wherein the instructions further cause the computer to pass to the agent, via a shared area of memory shared with the agent, a request to modify a client control structure of the first guest virtual machine; Read request from the shared area, the agent modifies the client control structure of the first client virtual machine in the first key domain to generate a modified client control structure of the first client virtual machine; verifies the modified client of the first client virtual machine A control structure; and when verifying the modified client control structure, issuing an entry command to the processor to use the modified client control structure to enter the first guest virtual machine in the first key domain.Example 50 includes the computer-readable medium of claim 48, wherein the instructions further cause the computer to: pass to the agent a request to retrieve a page from an encrypted storage device, wherein each page of the encrypted storage device is encrypted by a storage device key; The storage device key is used by the agent to decrypt the page to generate a decrypted page; the agent verifies the decrypted page; and if the decrypted page is verified, the decrypted page is installed into the memory.Example 51 includes the computer-readable medium of claim 50, wherein the instructions further cause the computer to: if the agent has access to a location in the memory where the decrypted page is to be installed, the agent copies the decrypted page into the location of the memory; and if the agent The location in the memory where the decrypted page is to be installed cannot be accessed, then: the agent uses the key domain key and the physical address of the location as fine-tuning to re-encrypt the page to produce a re-encrypted page, and by an untrusted host virtual machine The monitor installs the re-encrypted page into memory.Example 52 includes the computer-readable medium of claim 48, wherein the instructions further cause the computer to pass a request to the agent to retrieve a page from an encrypted storage device, wherein each page of the encrypted storage device is encrypted by a storage device key; The agent uses the storage device key to decrypt the page to generate a decrypted page; the agent verifies the decrypted page; and if the decrypted page is verified: the agent uses the storage device key of the storage device to re-encrypt the decrypted page to generate storage Device encrypted pages; the storage device encrypted pages are moved to the storage device by the agent; and the agent encrypted pages are provided to the untrusted host virtual machine monitor by the agent.Example 53 is a method for securely executing consumer workloads in a public cloud environment without exposing consumers' data or secrets, the method comprising: receiving an encrypted key domain key, a key domain key Encrypted encrypted client code image and encrypted client control structure encrypted by the key domain key; issuing a creation command to the processor to create a first key domain, the first key domain including the key domain key to be encrypted An area of memory; verifying the encrypted client control structure; and issuing a startup command to the processor to start the first guest virtual machine in the first key domain, wherein the startup command includes a pointer to an 
address of the encrypted client control structure.Example 54 includes the method of example 53, wherein verifying the encrypted client control structure includes verifying the encrypted client control structure by issuing a command to the processor to execute at least one of a hash key domain instruction and a VMRead instruction.Example 55 includes the method of Example 53, further comprising: issuing a restart command to the processor to restart the first guest virtual machine.Example 56 includes the method of Example 53, further comprising: intercepting an interrupt; and in response to at least one of an interrupt and an exception raised when the first guest virtual machine causes an exit condition, saving processor register information to a protected location in the memory; if The untrusted host virtual machine monitor that manages the execution of the first guest virtual machine does not require the first processor register, so the first processor register is cleared; if the untrusted host virtual machine monitor requires the second processor register, then Conditionally expose the second processor register; call the untrusted host virtual machine monitor; and exit the first guest virtual machine when the untrusted host virtual machine monitor is called.Example 57 includes the method of Example 53, further comprising: receiving the encrypted updated client control structure, installing the encrypted updated client control structure in a memory, and verifying the encrypted updated client control structure; and responding to verifying the encrypted update The client control structure sends an entry command to the processor to enter the first guest virtual machine using the updated client control structure, and the updated client control structure generated by the processor decrypts the encrypted updated client control structure.Example 58 includes the method of Example 57, further comprising: receiving an encrypted updated client code image; and installing the encrypted updated client code image in a memory.Example 59 includes the method of Example 53, further comprising: determining whether a change to the customer control structure is required; verifying by the first virtual machine that the change to the customer control structure does not harm the security of the first guest virtual machine; used by the first virtual machine The key domain key generates an encrypted updated updated client control structure; and the updated updated client is encrypted by the first virtual machine via a shared area of memory shared by the untrusted host virtual machine monitor and the first guest virtual machine The control structure is sent to the untrusted host virtual machine monitor.Example 60 includes the method of Example 59, further comprising: installing the encrypted updated client control structure in memory; verifying the encrypted updated client control structure; and issuing the use to the processor in response to verifying the encrypted updated client control structure. 
An entry command for the updated client control structure to enter the first guest virtual machine, and the updated client control structure generated by the processor decrypts the encrypted updated client control structure.Example 61 includes the method of Example 60, further comprising determining whether a change to the customer control structure is required in response to a request received from the first customer virtual machine.Example 62 includes the method of Example 61, further comprising: verifying, by the first virtual machine, that the second change to the client code image included in the request does not compromise the security of the first client virtual machine; and using the key by the first client virtual machine The domain key generates an encrypted updated client code image incorporating the second change; and the encrypted updated client control image is sent by the first client virtual machine to the untrusted host virtual machine monitor via the shared area of the memory.Example 63 includes the method of example 62, further comprising: receiving the encrypted updated client code image, wherein executing the client code image includes executing the updated client code image generated by the processor decrypting the encrypted updated client code image.Example 64 includes the method of Example 53, further comprising: verifying the proxy control structure included in the encrypted client control structure; and issuing a second boot to the processor to use the proxy control structure to boot the second client virtual machine in the first key domain Command, the second guest virtual machine provides a proxy to act on behalf of the untrusted host virtual machine monitor in the first key domain.Example 65 includes the method of Example 64, further comprising: passing a request to modify the client control structure of the first client virtual machine to the agent via a shared area of the memory shared with the agent; and in response to the read request from the shared area of the memory, the agent Modify the client control structure of the first client virtual machine in the first key domain to generate a modified client control structure of the first client virtual machine; verify the modified client control structure of the first client virtual machine; and verify the modified client When controlling the structure, the processor is issued an entry command for the first client virtual machine in the first key domain using the modified client control structure.Example 66 includes the method of Example 64, further comprising: passing a request to the agent to retrieve a page from the encrypted storage device, wherein each page of the encrypted storage device is encrypted by the storage device key; and the agent uses the storage device key to decrypt the page To generate a decrypted page; the decrypted page is verified by a proxy; and if the decrypted page is verified, the decrypted page is installed into memory.Example 67 includes the method of Example 66, further comprising: if the agent can access the location in the memory in which the decrypted page is to be installed, the agent copies the decrypted page into the location in the memory; and if the agent cannot access the memory in which the decrypted page is to be installed In the location, then: the agent uses the key domain key and the location's physical address as fine-tuning to re-encrypt the page to produce a re-encrypted page, and the un-encrypted host virtual machine monitor installs the re-encrypted page to In 
memory.Example 68 includes the method of Example 64, further comprising: passing a request to the agent to retrieve a page from the encrypted storage device, wherein each page of the encrypted storage device is encrypted by the storage device key; and the agent uses the storage device key to decrypt the page To generate the decrypted page; the decrypted page is verified by the agent; and if the decrypted page is verified: the decrypted page is re-encrypted by the agent with the storage device key of the storage device to generate the encrypted page of the storage device; The device-encrypted page is moved to a storage device; and the agent-encrypted page is provided by an agent to an untrusted host virtual machine monitor.In Example 69, a computer-readable medium including instructions will perform the method of any of the above examples.In Example 70, a computer-readable medium including data will be used by at least one machine to fabricate at least one integrated circuit to perform the method of any of the examples above.In Example 72, the device includes means for performing the method of any of the above examples.In Example 73, a device for securely executing consumer workloads in a public cloud environment without exposing consumer data or secrets includes: a key domain key for receiving encryption, a key domain secret for receiving Key-encrypted encrypted client code image and components of an encrypted client control structure encrypted by a key domain key; means for issuing to the processor a creation command to create a first key domain, the first key domain comprising An area of memory encrypted by the key domain key; verifying the encrypted client control structure; and means for issuing to the processor a startup command to start the first guest virtual machine in the first key domain, wherein the startup command includes a pointer to the encrypted client A pointer to the address of the control structure.Example 74 includes the device of example 73, wherein the means for verifying the encrypted client control structure includes means for issuing a command to the processor to execute at least one of a hash key domain instruction and a VMRead instruction.Example 75 includes the device of example 73, further including means for issuing a restart command to the processor to restart the first guest virtual machine.Example 76 includes the device of Example 73, further comprising: means for intercepting an interrupt; and for saving processor register information to memory in response to at least one of an interrupt and an exception that is raised when the first guest virtual machine causes an exit condition A protected location of the component; a component for clearing the first processor register if the untrusted host virtual machine monitor does not need the first processor register; for a second process if the untrusted host virtual machine monitor requires The device register conditionally exposes the components of the second processor register; the component for invoking the untrusted host virtual machine monitor; and the component for exiting the first guest virtual machine when invoking the untrusted host virtual machine monitor .Example 77 includes the device of Example 73, further comprising: means for receiving the encrypted updated client control structure, installing the encrypted updated client control structure in memory and verifying the encrypted updated client control structure; and responding A component that issues an entry command to the processor using the 
updated client control structure to enter the first guest virtual machine to verify the encrypted updated client control structure, and the updated client control structure generated by the processor decrypts the encrypted updated client control structure .Example 78 includes the device of Example 77, further comprising: means for receiving an encrypted updated client code image; and means for installing the encrypted updated client code image in memory.Example 79 includes the device of Example 73, further comprising: means for determining whether a change to the customer control structure is required; and means for verifying by the first virtual machine that the change to the customer control structure has not compromised the security of the first client virtual machine Means; means for generating, by the first virtual machine using the key domain key, a merged, encrypted, updated, updated client control structure; and means for the first virtual machine via the untrusted host virtual machine monitor and the first client The shared area of the virtual machine's shared memory sends the encrypted updated client control structure to a component of the untrusted host virtual machine monitor.Example 80 includes the device of Example 79, further comprising: means for installing the encrypted updated client control structure in memory; means for verifying the encrypted updated client control structure; and means for responding to verifying the encrypted update The client control structure sends an entry command to the processor using the updated client control structure to enter the first guest virtual machine, and the updated client control structure generated by the processor decrypts the encrypted updated client control structure.Example 81 includes the apparatus of example 80, further including means for determining whether a change to the customer control structure is required in response to a request received from the first customer virtual machine.Example 82 includes the device of Example 81, further comprising: means for verifying by the first virtual machine that the second change to the client code image included in the request does not compromise the security of the first client virtual machine; Means for the guest virtual machine to use the key domain key to generate an encrypted updated updated client code image incorporating the second change; and to send the encrypted updated client controlled image to the unavailable by the first guest virtual machine via the shared area of the memory A part of the host virtual machine monitor.Example 83 includes the device of example 82, further comprising: means for receiving an encrypted updated client code image, wherein executing the client code image includes executing the updated client code image generated by the processor decrypting the encrypted updated client code image.Example 84 includes the device of Example 73, further comprising: means for verifying the proxy control structure included in the encrypted client control structure; and issuing to the processor the use of the proxy control structure to launch a second client in the first key domain As part of the second startup command of the virtual machine, the second guest virtual machine provides an agent to act on behalf of the untrusted host virtual machine monitor in the first key domain.Example 85 includes the device of Example 84, further comprising: means for communicating a request to modify the client control structure of the first guest virtual machine to the agent via a 
shared area of memory shared with the agent; and in response to reading from the shared area of memory A component that takes the request and the agent modifies the client control structure of the first client virtual machine in the first key domain to generate a modified client control structure of the first client virtual machine; a client control for verifying the modification of the first client virtual machine A component of the structure; and a component for issuing an entry command to the processor to enter the first guest virtual machine in the first key domain when the modified guest control structure is verified.Example 86 includes the device of Example 84, further comprising: means for passing to the agent a request to retrieve a page from the encrypted storage device, wherein each page of the encrypted storage device is encrypted by the storage device key; for use by the agent A means for storing the device key to decrypt the page to generate a decrypted page; a means for verifying the decrypted page by the agent; and a means for installing the decrypted page into the memory if the decrypted page is verified.Example 87 includes the device of Example 86, further comprising: means for copying the decrypted page into the location of the memory if the agent is able to access the location in which the decrypted page is to be installed; and means for if the agent cannot access the location where the decrypted page is to be installed; and The location in the memory where the decrypted page is installed is then used by the agent to re-encrypt the page using the key domain key and the physical address of the location as fine-tuning to produce a re-encrypted page; and to install the decrypted page if the agent cannot access it The location in the memory is where the untrusted host virtual machine monitor installs the re-encrypted page into the memory.Example 88 includes the device of example 84, further comprising: means for passing to the agent a request to retrieve a page from the encrypted storage device, wherein each page of the encrypted storage device is encrypted by the storage device key; and used by the agent Means for decrypting the page by the storage device key to generate a decrypted page; means for verifying the decrypted page by the agent; for re-encrypting the decrypted page by the agent with the storage device key of the storage device if the decrypted page is verified Means for generating a storage device encrypted page; means for moving the storage device encrypted page to the storage device by the agent; and means for providing the storage device encrypted page to the untrusted host virtual machine monitor by the agent.The invention also provides the following technical solutions:Technical Solution 1. A processor, including:A core for executing a first instruction for paging a first virtual machine (VM) client page into a key domain, the execution of the first instruction includes using a A message authentication code (MAC) in an extended page table entry (EPTE) of a page to verify the first VM client page and replace the MAC in the EPTE with a host physical address of the VM client page; andAn encryption engine configured to decrypt the first VM client page in response to the first instruction.Technical Solution 2. 
The processor according to Technical Solution 1, wherein:The core is further configured to execute a second instruction to page the first VM client page out of the key domain; andThe encryption engine is further configured to encrypt the first VM client page in response to the second instruction.Technical Solution 3. The processor according to Technical Solution 1, wherein the core is further configured to execute a third instruction for creating the key domain, the key domain including multiple protected memory locations to store multiple The VM client page includes the first VM client page.Technical Solution 4. The processor of Technical Solution 3, wherein execution of the third instruction includes decrypting an encrypted key domain key to provide to an encryption engine to decrypt the plurality of VM client pages.Technical Solution 5. The processor according to Technical Solution 1, wherein the first instruction is used to specify a first client physical address to indicate a start of a guest physical address range of the first VM.Technical Solution 6. The processor according to Technical Solution 5, wherein the first instruction is used to specify a second client physical address to indicate an end of the client physical address range of the first VM.Technical Solution 7. The processor according to Technical Solution 3, wherein the first instruction is used to specify a host physical address of a first protected memory location to store the first VM guest page.Technical Solution 8. The processor according to Technical Solution 7, wherein the first instruction is used to specify permission for accessing the first protected memory location.Technical Solution 9. The processor according to Technical Solution 1, wherein the second instruction is used to specify a first client physical address to indicate a start of a guest physical address range of the first VM, and to specify a second client physical address To indicate the end of the guest physical address range of the first VM.Technical Solution 10. The processor according to Technical Solution 9, wherein the second instruction is used to specify permission for accessing the first VM client page.Technical solution 11. A system comprising:Processor; andA memory coupled to the processor; whereinThe processor will execute an untrusted host virtual machine monitor to manage execution by a processor of at least one guest virtual machine;The untrusted host virtual machine monitor is to receive an encrypted key domain key, an encrypted client code image encrypted by the key domain key, and an encrypted client control structure encrypted by the key domain key. The key domain key is inaccessible to the untrusted host virtual machine monitor;The untrusted host virtual machine monitor is to issue a creation instruction to the processor to create a first key domain, the first key domain including an area of the memory to be encrypted by the key domain key , The untrusted host virtual machine monitor is to additionally verify the encrypted client control structure;In response to receiving the creation instruction, the processor is to create the first key domain and decrypt the encrypted key domain key to generate the key domain key; andThe untrusted host virtual machine monitor is to issue a page-in instruction to the processor to build a first guest virtual machine in the first key domain.Technical solution 12. 
The system according to technical solution 11, wherein:The untrusted host virtual machine monitor is to issue a startup instruction to the processor to start the first guest virtual machine in the first key domain; andIn response to receiving the startup instruction, the processor is to switch to the first key domain, decrypt the encrypted client control structure to generate a client control structure including client processor state information, and decrypt the encrypted client control structure. Client code mirroring to generate a client code mirroring, and performing the client code mirroring in the first key domain using the client processor state information.Technical solution 13. The system according to technical solution 12, whereinIn response to an event that triggers an exit condition of the first guest virtual machine, the processor is to switch from the first key domain to a second key domain.Technical solution 14. The system according to technical solution 13, wherein the client control structure specifies a protected location of the memory, and the processor is to store the client processor at the protected location of the memory status information.Embodiment 15. The system according to embodiment 14, whereinIn response to the event that triggers the exit condition of the first guest virtual machine, the processor further saves the client processor state information of the first guest virtual machine in the memory of the memory. In a protected positionThe untrusted host virtual machine monitor is to issue a restart instruction to the processor to restart the first guest virtual machine; andIn response to receiving the restart instruction, the processor is to switch to the first key domain, and retrieve the client processor state information of the first guest virtual machine from the protected location of the memory And using the client processor state information to perform the client code mirroring in the first key domain.Technical Solution 16. A method comprising:The encrypted key domain key received by the untrusted host virtual machine monitor, the encrypted client code image encrypted by the key domain key, and the encrypted client control structure encrypted by the key domain key. The key domain key is inaccessible to the untrusted host virtual machine monitor;The untrusted host virtual machine monitor sends a creation instruction to a processor to create a first key domain, where the first key domain includes an area of memory to be encrypted by the key domain key, and the unavailable The host virtual machine monitor is to additionally verify the encrypted client control structure;Creating the first key domain by the processor in response to receiving the creation instruction, and decrypting the encrypted key domain key to generate the key domain key; andA page-in instruction for constructing a first guest virtual machine in the first key domain is issued by the untrusted host virtual machine monitor to the processor.Technical solution 17. The method according to technical solution 16, further comprising:Issuing, by the untrusted host virtual machine monitor, a start instruction to the processor to start the first guest virtual machine in the first key domain; andThe processor switches to the first key domain in response to receiving the startup instruction, decrypts the encrypted client control structure to generate a client control structure including client processor state information, and decrypts the encrypted client control structure. 
Client code mirroring to generate a client code mirroring, and performing the client code mirroring in the first key domain using the client processor state information.Technical solution 18. The method according to technical solution 17, further comprising:The processor switches from the first key domain to the second key domain in response to an event that triggers an exit condition of the first guest virtual machine.Technical solution 19. The method of technical solution 18, wherein the client control structure specifies a protected location of the memory, and the processor is to store the client processor state in the protected location of the memory information.Technical solution 20. The method according to technical solution 19, further comprising:In response to the event triggering the exit condition of the first guest virtual machine, storing, by the processor, the guest processor state information of the first guest virtual machine in the memory of the memory In a protected positionIssuing a restart instruction to the processor by the untrusted host virtual machine monitor to restart the first guest virtual machine; andIn response to receiving the restart instruction, the processor switches to the first key domain, and retrieves the client processor state information of the first guest virtual machine from the protected location of the memory And using the client processor state information to perform the client code mirroring in the first key domain.It is understood that various combinations of the above examples are possible.Note that the terms "circuit and circuitry" are used interchangeably herein. As used herein, these terms and the term "logic" are used alone or in any combination to refer to analog circuits, digital circuits, hard-wired circuits , Programmable circuits, processor circuits, microcontroller circuits, hardware logic circuits, state machine circuits, and / or any other type of physical hardware component. Embodiments can be used in many different types of systems. For example, in one implementation In the example, the communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the invention is not limited to the communication device, and on the contrary, other embodiments can be directed to other types of devices for processing instructions, or include instructions One or more machine-readable media, the instructions, in response to being executed on a computing device, cause the device to perform one or more of the methods and techniques described herein.Embodiments may be implemented in code and may be stored on a non-transitory storage medium having instructions stored thereon, which can be used to program a system to execute instructions. Embodiments may also be implemented with data and may be stored on a non-transitory storage medium that, if used by at least one machine, causes the at least one machine to make at least one integrated circuit to perform one or more operations. Still further embodiments may be implemented in a computer-readable storage medium including information that, when manufactured into a SoC or other processor, configures the SoC or other processor to perform one or more operations. 
The storage medium may include, but is not limited to, any type of disk, including a floppy disk, an optical disk, a solid state drive (SSD), a compact disc read-only memory (CD-ROM), a compact disc rewritable (CD-RW), and a magneto-optical disc, a semiconductor device such as Read memory (ROM), random access memory (RAM) (such as dynamic random access memory (DRAM), static random access memory (SRAM)), erasable programmable read-only memory (EPROM), flash memory, electrical memory Erase a programmable read-only memory (EEPROM), magnetic or optical card, or any other type of medium suitable for storing electronic instructions.Although the invention has been described with respect to a limited number of embodiments, those skilled in the art will recognize numerous modifications and alterations thereto. It is intended that the appended claims cover all such modifications and alterations as fall within the true spirit and scope of the invention. |
A method for communicating between a controller and a device with double-buffered inputs comprises the steps of providing one or more communication paths for exchanging data between the controller and the device, providing a data transfer control signal from the controller to the device for transferring input data from one or more input registers into one or more latchable data registers, and providing a data transfer delay signal from the device to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from the input registers into the latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal. An apparatus for communicating between a controller and a device is also described. |
1. A method for communicating between a controller and a Digital-to-Analog Converter (DAC) with double-buffered inputs, the method comprising the steps of:(a) providing one or more communication paths for exchanging data between the controller and the DAC; (b) providing a data transfer control signal from the controller to the DAC for transferring input data from one or more input registers into one or more latchable data registers; and (c) providing a data transfer delay signal from the DAC to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from said one or more input registers into said one or more latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal. 2. The method in accordance with claim 1, wherein the step (a) of providing one or more communication paths further comprises providing a serial data communication line and a serial clock signal communication line.3. The method in accordance with claim 2, wherein the serial data communication line is a bi-directional data communication line.4. The method in accordance with claim 1, wherein the step (a) of providing one or more communication paths further comprises providing a parallel data bus and parallel data transfer control signals.5. The method in accordance with claim 4, wherein the parallel data bus is a bi-directional parallel data bus.6. The method in accordance with claim 1, wherein the step (b) of providing a data transfer control signal further comprises providing a data transfer control signal that latches input data from the input registers into the latchable data registers on a high-to-low logic level transition.7. The method in accordance with claim 1, wherein the step (b) of providing a data transfer control signal further comprises providing a data transfer control signal that is held at a first logic level such that completion of a write operation to an input register controls latching of input data into the latchable data registers, subject to delay introduced by the data transfer delay signal.8. The method in accordance with claim 1, wherein the step (c) of providing a data transfer delay signal from the DAC to the controller further comprises the step of providing an open-drain data transfer delay signal between the DAC and the controller.9. The method in accordance with claim 8, wherein the open-drain data transfer delay signal is coupled to an internal buffer that generates a BUSY input signal on the DAC that prevents transfer of input data from said one or more input registers.10. The method in accordance with claim 9, wherein the DAC comprises multiple DACs and the open-drain data transfer delay signal is coupled to other data transfer delay signals from other similar DACs to realize a system-wide data transfer delay signal.11. 
Apparatus for communicating between a controller and a Digital-to-Analog Converter (DAC) with double-buffered inputs comprising:means for providing one or more communication paths for exchanging data between the controller and the DAC; means for providing a data transfer control signal from the controller to the DAC for transferring input data from one or more input registers into one or more latchable data registers; and means for providing a data transfer delay signal from the DAC to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from said one or more input registers into said one or more latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal. 12. The apparatus of claim 11, wherein the means for providing one or more communication paths further comprises a serial data communication line and a serial clock signal communication line.13. The apparatus of claim 12, wherein the serial data communication line is a bi-directional data communication line.14. The apparatus of claim 11, wherein the means for providing one or more communication paths further comprises a parallel data bus and parallel data transfer control signals.15. The apparatus of claim 14, wherein the parallel data bus is a bi-directional parallel data bus.16. The apparatus of claim 11, wherein the means for providing a data transfer control signal further comprises means for providing a data transfer control signal that latches input data from the input registers into the latchable data registers on a high-to-low logic level transition.17. The apparatus of claim 11, wherein the means for providing a data transfer control signal further comprises means for providing a data transfer control signal that is held at a first logic level such that completion of a write operation to an input register controls latching of input data into the latchable data registers, subject to delay introduced by the data transfer delay signal.18. The apparatus of claim 11, wherein the means for providing a data transfer delay signal from the DAC to the controller further comprises means for providing an open-drain data transfer delay signal between the DAC and the controller.19. The apparatus of claim 18, wherein the open-drain data transfer delay signal is coupled to an internal buffer that generates a BUSY input signal on the DAC that prevents transfer of input data from said one or more input registers.20. The apparatus of claim 19, wherein the DAC comprises multiple DACs and the open-drain data transfer delay signal is coupled to other data transfer delay signals from other similar DACs to realize a system-wide data transfer delay signal.21. A communications interface for enabling communication between a controller and a Digital-to-Analog Converter (DAC) with double-buffered inputs, the communications interface comprising:one or more communication paths for exchanging data between the controller and the DAC; a data transfer control signal from the controller to the DAC for transferring input data from one or more input registers into one or more latchable data registers; and a data transfer delay signal from the DAC to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from said one or more input registers into said one or more latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal. 22. 
The communications interface of claim 21, wherein said one or more communication paths comprise a serial data communication line and a serial clock signal communication line.23. The communications interface of claim 22, wherein the serial data communication line is a bi-directional data communication line.24. The communications interface of claim 21, wherein the data transfer delay signal from the DAC to the controller comprises an open-drain data transfer delay signal coupled to an internal buffer that generates a BUSY input signal on the DAC that prevents transfer of input data from said one or more input registers.25. The communications interface of claim 24, wherein the DAC comprises multiple DACs and the open-drain data transfer delay signal is coupled to other data transfer delay signals from other similar DACs to realize a system-wide data transfer delay signal.26. A method for communicating between a controller and multiple data conversion devices, each of said data conversion devices including multiple DACs with double-buffered inputs, the method comprising the steps of:(a) providing a bi-directional serial data communication line and a serial clock signal communication line for exchanging data between the controller and the data conversion devices; (b) providing a data transfer control signal from the controller to the data conversion devices that latches input data from input registers into interconnected latchable data registers of associated DACs on an active transition; (c) providing open-drain, bi-directional data transfer delay signals in a wired-OR configuration from the data conversion devices to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from said input registers into said latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal; such that, when any of the data conversion devices drives the data transfer delay signal to said first logic state, transfer of input data from said input registers into said latchable data registers is inhibited in every data conversion device that is part of the wired-OR configuration. |
FIELD OF THE INVENTIONThis invention relates generally to an interface protocol for a device and in particular to an interface between an external controller and multiple devices arranged in a bus configuration, and is more particularly directed toward a method and apparatus for communicating between a microcontroller and a plurality of bus-compatible data conversion devices.BACKGROUND OF THE INVENTIONData conversion products provide the necessary bridge between analog and digital worlds. Analog-to-digital converter (ADC) products allow digital system elements, such as microprocessors and digital signal processors (DSPs) to sample analog signals, while digital-to-analog converters (DACs) permit these digital system element to generate smooth, time-varying voltages and currents. ADCs find many specific applications in modern systems, including the sampling of speech signals for telecommunications uses, while DACs are often employed to generate speech or music waveforms, to function as programmable voltage or current sources, or to precisely control analog signal levels.For complex signal generation, it may be necessary for a single microprocessor or DSP to control multiple DACs. FIG. 1 illustrates, in block diagram form, a data conversion device 100 of the prior art that includes multiple DACs. Although there are a number of examples of both parallel and serial interface DACs, the device 100 is designed to communicate with an external controller or processor over a serial interface.The external controller (not shown) transmits data to the device 100 over serial data line DIN 104, in conjunction with a serial clock signal SCLK 105. The upper portion of the timing diagram of FIG. 2 illustrates a typical data transmission, in which data bits transmitted from the controller on the data line DIN 104 are shifted in on low to high transitions of the serial clock SCLK 105. It is customary in devices such as the device 100 to provide some means for addressing particular data to a specific one of the input registers 102 provided in the device 100.The device 100 is an example of a double-buffered device. Each of the DACs within the device 100 has an associated input register 102 and an interconnected DAC input data register 103. If the LDAC signal 106 is held in a high logic state by the external controller, the internal DAC data registers 103 are maintained in a latched condition. That is, the data in the input registers 102 may be changed at will without affecting the DAC register 103 contents. In one mode of operation, when all DAC input registers 102 have been programmed with the desired data using the serial interface, the LDAC signal 106 is brought to a logic low level, which latches the data in the input registers 102 into the DAC data registers 103, resulting in a simultaneous update (and corresponding output voltage changes) for all DACs in the device 100. This is referred to as asynchronous operation, since DAC update is not tied to the operation of loading data into the input registers 102.It is worth noting that synchronous operation, in which data is transferred from an individual input register 102 into its associated DAC register 103 immediately upon completion of input data loading over the serial interface, is also supported. 
For the device 100, this mode of operation can be selected by tying the LDAC signal 106 to a low logic state.As will be appreciated, rapid loading of input registers 102 may be accomplished over the serial interface, followed by a simultaneous transfer of all input data into the DAC registers 103. However, the microcontroller or DSP that is controlling the device 100 has no way of knowing how fast it may update the input register 102 data. Even if conversion of the digital input data into an analog output voltage has not been completed, the input registers 102 can still be loaded with new data, and this new data can be readily transferred into the DAC registers 103.At least for analog-to-digital converters, this uncertainty as to completion of data conversion has been minimized through the use of a BUSY signal. FIG. 3 depicts, in block diagram form, an ADC 300 of the prior art that incorporates a BUSY signal.The ADC 300 is a parallel interface device that presents eight data bits in a data bus 302 for interconnection with an associated controller (not shown), such as a microcomputer or DSP. In order to initiate a conversion of an analog input voltage 305, the controller asserts control signal CONVST 304, an input to the device 300. Upon detecting the active transition of CONVST 304, as shown in the timing diagram of FIG. 4, the control logic 301 of the ADC device 300 begins the data conversion process, and also asserts device output BUSY 303 by bringing the BUSY signal 303 to a logic high state.When the BUSY signal 303 is in its logic high state, it signals to the external controller that a conversion is in progress. After the BUSY signal returns to its logic low level, the external controller may read the conversion result over the data bus 302. Of course, the return of the BUSY signal 303 to its low logic level merely signals that data conversion has been completed. The external controller is not prevented from reading the contents of the ADC data register over the data bus 302 while BUSY is high. Of course, even though BUSY has been described as an active HIGH signal, it may just as readily be implemented as an active LOW signal. The polarity of the active transition is not a key issue; it is overall functionality that is important.As noted, double-buffered DACs enable rapid updating of input registers combined with simultaneous data transfer (and output voltage update) for all DACs within a device. Unfortunately, in devices of the prior art, there is no way of determining precisely how rapidly the input registers of multiple DACs can be updated, since there is no indication as to whether the internal conversion operation of a particular DAC has been completed. This is particularly disadvantageous for complex systems in which multiple DAC devices (such as device 100 of FIG. 1) are employed. Of course, it may be possible to create empirical timing routines so that associated controllers will wait long enough for conversions to be completed before attempting DAC updates, but, in high-speed systems, there may not be code space or system time to waste on such a solution. 
Additional hardware resources may be required, in some cases, to perform this type of function.Accordingly, a need arises for a device interface that permits register updates to progress as rapidly as possible without interfering with ongoing data conversions, and without the need for additional system hardware to monitor conversion status.SUMMARY OF THE INVENTIONThese needs and others are satisfied by the present invention, in which an interface is disclosed that includes a built-in indication that signal processing has been completed and that data registers in data conversion devices are ready to be re-loaded.In short, a new system design is proposed that may use a wired-OR BUSY signal to provide maximum control and flexibility. The BUSY signal remains high while a conversion is in progress anywhere in the system. While the BUSY signal is in its high logic state, BUSY prevents any DAC data register updates from occurring. In other words, even in asynchronous modes of operation, pulsing an LDAC line low will not cause a DAC data register update until BUSY once again becomes high. This characteristic can be viewed as "stalling" (delaying) the LDAC function temporarily, or, in an alternative view, "storing" the LDAC pulse so that it becomes operative on the rising edge of the BUSY signal.In accordance with one aspect of the invention, a method for communicating between a controller and a device with double-buffered inputs comprises the steps of providing one or more communication paths for exchanging data between the controller and the device, providing a data transfer control signal from the controller to the device for transferring input data from one or more input registers into one or more latchable data registers, and providing a data transfer delay signal from the device to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from the input registers into the latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal.In one form of the invention, the step of providing one or more communication paths further comprises providing a serial data communication line and a serial clock signal communication line. The serial data communication line may be a bi-directional data communication line. The step of providing one or more communication paths could comprise, in the alternative, providing a parallel data bus and parallel data transfer control signals, and the parallel data bus may be a bi-directional parallel data bus.In another form of the invention, the step of providing a data transfer control signal further comprises providing a data transfer control signal that latches input data from the input registers into the latchable data registers on a high-to-low logic level transition. The step of providing a data transfer control signal may further comprise providing a data transfer control signal that is held at a first logic level such that completion of a write operation to an input register controls latching of input data into the latchable data registers, subject to delay introduced by the data transfer delay signal.In accordance with yet another form of the invention, the step of providing a data transfer delay signal from the device to the controller further comprises the step of providing an open-drain data transfer delay signal from the device to the controller. 
The open-drain data transfer delay signal is coupled to an internal buffer that generates a BUSY input signal on the device that prevents transfer of input data from the input registers. The device may also comprise multiple devices, where the open-drain data transfer delay signal is coupled to other data transfer delay signals from other similar devices to realize a system-wide data transfer delay signal.In accordance with another aspect of the invention, apparatus for communicating between a controller and a device with double-buffered inputs comprises means for providing one or more communication paths for exchanging data between the controller and the device, means for providing a data transfer control signal from the controller to the device for transferring input data from one or more input registers into one or more latchable data registers, and means for providing a data transfer delay signal from the device to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from the input registers into the latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal.In one form, the means for providing one or more communication paths further comprises a serial data communication line and a serial clock signal communication line. The serial data communication line may be a bi-directional data communication line. The means for providing one or more communication paths could also comprise a parallel data bus and parallel data transfer control signals, in which the parallel data bus is a bi-directional parallel data bus.In another form of the invention, the means for providing a data transfer control signal further comprises means for providing a data transfer control signal that latches input data from the input registers into the latchable data registers on a high-to-low logic level transition. The means for providing a data transfer control signal may comprise means for providing a data transfer control signal that is held at a first logic level, such that completion of a write operation to an input register controls latching of input data into the latchable data registers, subject to delay introduced by the data transfer delay signal.In yet another form of the invention, the means for providing a data transfer delay signal from the device to the controller further comprises means for providing an open-drain data transfer delay signal from the device to the controller. The open-drain data transfer delay signal is coupled to an internal buffer that generates a BUSY input signal on the device that prevents transfer of input data from the input registers. The device may also comprise multiple devices, and the open-drain data transfer delay signal may be coupled to other data transfer delay signals from other similar devices to realize a system-wide data transfer delay signal.In accordance with yet a further aspect of the invention, a communications interface for enabling communication between a controller and a device with double-buffered inputs comprises one or more communication paths for exchanging data between the controller and the device, a data transfer control signal from the controller to the device for transferring input data from one or more input registers into one or more latchable data registers, and a data transfer delay signal from the device to the controller. 
In a first logic state, the data transfer delay signal prevents transfer of input data from the input registers into the latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal.In one form of the invention, the communication paths comprise a serial data communication line and a serial clock signal communication line. The serial data communication line may be a bi-directional data communication line. The data transfer delay signal from the device to the controller may comprise an open-drain data transfer delay signal coupled to an internal buffer that generates a BUSY input signal on the device, that prevents transfer of input data from the input registers. The device could also comprise multiple devices, and the open-drain data transfer delay signal may be coupled to other data transfer delay signals from other similar devices to realize a system-wide data transfer delay signal.In accordance with still a further aspect of the invention, a method for communicating between a controller and multiple data conversion devices, each of the data conversion devices including multiple DACs with double-buffered inputs, comprises the steps of providing a bi-directional serial data communication line and a serial clock signal communication line for exchanging data between the controller and the data conversion devices, providing a data transfer control signal from the controller to the data conversion devices that latches input data from input registers into interconnected latchable data registers of associated DACs on an active transition, providing open-drain, bi-directional data transfer delay signals in a wired-OR configuration from the data conversion devices to the controller, wherein, in a first logic state, the data transfer delay signal prevents transfer of input data from the input registers into the latchable data registers until after a transition to a second logic state occurs on the data transfer delay signal. In this way, when any of the data conversion devices drives the data transfer delay signal to the first logic state, transfer of input data from the input registers into the latchable data registers is inhibited in every DAC in every data conversion device that is part of the wired-OR configuration.Further objects, features, and advantages of the present invention will become apparent from the following description and drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 depicts, in block diagram form, a data conversion device of the prior art;FIG. 2 is a timing diagram that illustrates operation of the device of FIG. 1;FIG. 3 is a block diagram of another data conversion device of the prior art;FIG. 4 is a timing diagram that illustrates operation of the device of FIG. 3;FIG. 5 is a block diagram of a device in accordance with the present invention;FIG. 6 is a timing diagram that illustrates operation of the device of FIG. 6; andFIG. 7 is a schematic illustration of a device pin configuration.DETAILED DESCRIPTION OF THE INVENTIONThere is described herein a device interface that offers distinct advantages when compared to the prior art. FIG. 5 is a block diagram depiction of a device 500 employing an interface in accordance with the present invention.The device 500 is a data conversion device that includes multiple DACs. 
Double-buffering is used, so that input registers 502 can be loaded with DAC input data, while the DAC registers 503 remain latched and unaffected by information presented to the input registers 502 until certain conditions have been satisfied.The device 500 incorporates several advantageous features. A serial communications interface is supported via a serial data communications path DIN 504 and a serial clock SCLK 505. Input data may be directed to the desired input register 502 by providing address information as part of the serial data transmission over the serial data line DIN 504. Data bits are shifted in on the low-to-high transitions of the serial clock SCLK 505. An LDAC signal 506 controls the transfer of data from the input registers 502 into the DAC registers 503, with a high-to-low transition of the LDAC signal 506 initiating the data transfer. In this way, all of the DACs on the device may be updated at the same time. This mode of operation, in which the LDAC signal 506 is normally maintained in a logic high state, then is pulsed low to initiate a data transfer, is called asynchronous device operation.There is also a synchronous mode of operation, in which the LDAC signal 506 is simply held in a low logic state. In this synchronous mode, input data is transferred from an input register 502 to the interconnected DAC register 503 upon completion of a write operation to the input register 502. However, operation of the device 500 differs from operation of prior art devices in that the device 500 incorporates a BUSY signal 507.The BUSY signal 507 remains in a logic high state so long as all data conversions that can take place on the device have been completed, and the DAC data registers 503 are ready to be updated with new information from the input registers 502. The BUSY signal 507 transitions to a low logic state immediately after the data interface write cycle has been completed (provided BUSY has not already been asserted, of course), and remains low until data conversion has been completed and the input registers 502 have been updated. The BUSY signal 507 then returns to its high logic level.Of course, as noted previously, the specific logic levels involved in the active transition of BUSY could easily be reversed without adverse effect on functionality. In fact, the active transition of any of the control or status signals described herein could easily be reversed without affecting functionality. It should also be noted that the BUSY signal could be shared with other functions on one IC or multiple ICs. For example, BUSY could be shared with the power-on state machine function on a DAC integrated circuit, with a conversion time A-to-D converter BUSY signal, or with a system level reset and/or hold signal, among other possibilities. And, since multiple write cycles may be necessary under some conditions, BUSY signal timing may vary.During the interval when the BUSY signal is low, no data transfers from the input registers 502 to the DAC registers 503 are permitted. This prohibition on updates of the DAC registers 503 even affects the synchronous mode described above. Consequently, even with the LDAC signal 506 tied to a low logic level, no update of a DAC register can occur at the completion of an input register 502 write operation unless the BUSY signal 507 is in its high logic state.The timing diagram of FIG. 6 illustrates the effect of the BUSY signal 507. 
When the signal LDAC<1 > makes a high-to-low transition while the BUSY signal is in a logic low state, there is no immediate effect. It is not until after the rising edge of BUSY that VOUT<1 > actually begins to change its value. This is because the transfer of data from the input register 502 to the DAC register 503 (the effect of LDAC, in other words) is "delayed" or "stalled" until after BUSY returns to a high logic level.However, when LDAC<2 > is asserted in the timing diagram of FIG. 6, BUSY has already returned to a high logic level, and VOUT<2 > begins to change value in direct response to LDAC<2> . As noted above, this is because the contents of the input registers 502 are immediately transferred to DAC registers 503 when LDAC is asserted, unless BUSY is in its low logic state, indicating that a conversion is still in progress. Thus, when BUSY goes HIGH, LDAC becomes active, to yield maximum update rate.FIG. 7 represents the BUSY pin 507 of the device 500 in more detailed, schematic form. As can be appreciated from an examination of FIG. 7, internal signal busy_out 705 is provided to an inverter/buffer 703, which in turn drives the gate of open-drain MOSFET 702. When the internal busy_out signal 705 is in its low logic state, indicating that a conversion is still in progress in the device 500, transistor 702 will turn ON, and the external BUSY pin 507 will go low. The internal busy_in signal 706, which is driven by the drain of transistor 702 through buffer 704, will also be low under these conditions, and it is this internal busy_in signal 706 that actually inhibits (stalls or delays) data transfers from input registers 502 (FIG. 5) to DAC registers 503. Of course, if another device is also connected to the BUSY pin, BUSY may already be low. That is, another device may already have pulled the BUSY signal to a low logic level.The pin configuration depicted in FIG. 7 is readily adaptable to a wired-OR "system BUSY" connection. Since the BUSY signal 507 is open-drain, a plurality of BUSY signals from similar devices may be connected together. If a BUSY condition occurs anywhere in the system, the resulting low logic level at the BUSY pin 507 will pull down the input to buffer 704, placing the internal busy_in signal 706 in a low logic state, and inhibiting DAC data transfers and consequent DAC updates. Of course, the internal busy_in signals for all of the wired-OR devices will similarly be low, thus inhibiting DAC updates throughout the system while any conversion activity is still in progress.It should be noted that the term "open drain," as it is used herein, does not exclude the introduction of a relatively small series impedance. Nor is the interconnection of open drain signal lines inconsistent with the insertion of clamp circuits intended to stop the open drain signal from "hanging" near the mid-threshold region for prolonged periods. For example, a back-to-back configuration of weak inverters might be used to accomplish this clamp function.Furthermore, the controller described herein may, for example, be a microcontroller, a digital signal processor (DSP), or other master control device. There may even be more than one controller involved in the system, with each controller having the capability to monitor and/or manipulate the system BUSY control signal. Such a system may be characterized as a multi-controller or multi-master system. One of the master devices may assert the BUSY signal, thus forcing the remaining devices to wait for its release. 
This permits an added degree of freedom in system design.There has been described herein a device interface that offers distinct advantages when compared with the prior art. It will be apparent to those skilled in the art that modifications may be made without departing from the spirit and scope of the invention. Accordingly, it is not intended that the invention be limited except as may be necessary in view of the appended claims. |
A memory system having a number of partitions each operative to independently service memory requests from a plurality of memory clients while maintaining the appearance to the memory client of a single partition memory subsystem. The memory request specifies a location in the memory system and a transfer size. A partition receives input from an arbiter circuit which, in turn, receives input from a number of client queues for the partition. The arbiter circuit selects a client queue based on a priority policy such as round robin or least recently used or a static or dynamic policy. A router receives a memory request, determines the one or more partitions needed to service the request and stores the request in the client queues for the servicing partitions. In one embodiment, an additional arbiter circuit selects memory requests from one of a subset of the memory clients and forwards the requests to a routing circuit, thereby providing a way for the subset of memory clients to share the client queues and routing circuit. Alternatively, a memory client can make requests directed to a particular partition in which case no routing circuit is required. For a read request that requires more than one partition to service, the memory system must collect the read data from read queues for the various partitions and deliver the collected data back to the proper client. Read queues can provide data in non-fifo order to satisfy an memory client that can receive data out-of-order. |
1. A graphics subsystem, comprising:a graphics memory; a graphics memory access bus connected to said graphics memory; a plurality of graphics processing units, each graphics processing unit issuing memory requests, each individual memory request having an associated data transfer size; and a memory controller connected between said graphics memory access bus and said plurality of graphics processing units, said memory controller providing a non-partitioned view of said graphics memory to said plurality of graphics processing units, while dividing said graphics memory access bus into individual bus partitions, each of which is a fraction of the graphics memory access bus size, said memory controller partitioning information within said graphics memory into independently accessible memory partitions, said memory controller routing data between said independently accessible memory partitions and said plurality of graphics processing units via said individual bus partitions, said memory controller determining which memory requests are to be serviced in particular clock cycles via one or more of said independently accessible memory partitions and said memory bus partitions; wherein said memory controller identifies memory requests requiring a subset of said independently accessible memory partitions and determines if another memory request can be serviced in parallel via another subset of said independently accessible memory partitions to improve throughput. 2. The graphics subsystem of claim 1 wherein said memory controller includes control logic to select one or more of said individual bus partitions to route data in response to a data request from a graphics processing unit of said plurality of graphics processing units.3. The graphics subsystem of claim 1 wherein said memory controller maps data to said independently accessible memory partitions in an interleaved fashion to balance memory load across said independently accessible memory partitions.4. The graphics subsystem of claims 1 wherein said individual bus partitions have corresponding individual queues.5. The graphics subsystem of claim 4 further comprising a multiplexer to combine data from said individual queues.6. The graphics subsystem of claim 4 wherein said individual queues have corresponding arbiter circuits.7. The graphics subsystem of claim 6 wherein said arbiter circuits include an arbiter circuit to prioritize requests from a sub-set of said plurality of graphics processing units.8. The graphics subsystem of claim 7 wherein said sub-set of said plurality of graphics processing units share a command and write data path.9. The graphics subsystem of claim 7 wherein each graphics processing unit of said sub-set of said plurality of graphics processing units has a sub-request ID.10. The graphics subsystem of claim 7 wherein said arbiter circuit pre-arbitrates requests from a sub-set of low-bandwidth graphics processing units.11. The graphics subsystem of claim 10 wherein said arbiter circuit treats said sub-set of low-bandwidth graphics processing units as a single client.12. The graphics subsystem of claim 6 wherein each arbiter circuit implements an independent priority policy to route information to said plurality of graphics processing units.13. The graphics subsystem of claim 12 wherein said priority policy is a static policy.14. The graphics subsystem of claim 12 wherein said priority policy is a least recently used policy.15. The graphics subsystem of claim 12 wherein said priority policy is a round-robin policy.16. 
The graphics subsystem of claim 12 wherein said priority policy is a fixed priority policy.17. The graphics subsystem of claim 12 wherein said priority policy is a dynamic priority policy.18. The graphics subsystem of claim 4 wherein said individual queues facilitate ordered read data delivery and thereby prevent deadlocking across said individual bus partitions.19. The graphics subsystem of claim 18 wherein said individual queues process read data requests that span a plurality of individual bus partitions.20. The graphics subsystem of claim 4 wherein said individual queues include request queues and read data return queues to balance data locality requirements to facilitate memory access efficiency.21. The graphics subsystem of claim 1 wherein individual memory partitions of said independently accessible memory partitions are assigned to solely service individual graphics processing units of said plurality of graphics processing units.22. The graphics subsystem of claim 1 wherein a selected graphics processing unit of said plurality of graphics processing units accepts data in an out-of-order fashion.23. The graphics subsystem of claim 22 wherein said selected graphics processing unit accepts data as soon as said data is available from a partition.24. A method of servicing data requests from graphics processing units, comprising:receiving data requests from a plurality of graphics processing units accessing a unitary graphics memory subsystem, each individual data request having an associated data transfer size; assigning said data requests to a plurality of independently accessible memory partitions imposed upon said unitary graphics memory subsystem; and delivering data from said independently accessible memory partitions to said graphics processing units via individual bus partitions of a unitary graphics memory access bus; wherein for a particular request requiring a subset of said independently accessible memory partitions said assigning includes determining if another memory request can be serviced in parallel via a different subset of said independently accessible memory partitions to improve throughput. 25. The method of claim 24 further comprising storing data from said independently accessible memory partitions prior to said delivering.26. The method of claim 25 further comprising combining stored data prior to said delivering.27. The method of claim 26 further comprising facilitating ordered read data delivery to prevent deadlocking across said individual bus partitions.28. The method of claim 27 further comprising prioritizing requests from a sub-set of said plurality of graphics processing units.29. The method of claim 28 further comprising sharing a command and write data path between a sub-set of said plurality of graphics processing units.30. The method of claim 29 further comprising assigning a sub-set request ID to individual graphics processing units of said sub-set of said plurality of graphics processing units.31. The method of claim 24 further comprising pre-arbitrating requests from a sub-set of low-bandwidth graphics processing units of said graphics processing units.32. The method of claim 31 further comprising treating said sub-set of low-bandwidth graphics processing units as a single client.33. The method of claim 24 further comprising assigning individual memory partitions of said independently accessible memory partitions to service individual graphics processing units of said graphics processing units.34. 
The method of claim 24 further comprising accepting data at a selected graphics processing unit of said graphics processing units in an out-of-order fashion.35. The method of claim 34 further comprising accepting data as soon as said data is available from a memory partition.36. The method of claim 24 further comprising balancing data locality requirements to facilitate memory access efficiency. |
CROSS-REFERENCE TO RELATED APPLICATIONS1. Field of the InventionThe present invention relates generally to memory controllers in a computing system and more particularly to a controller for a memory system in which the storage array is partitioned into a number of independent sections.2. Description of the Related ArtIn current graphics subsystems, the speed and number of graphical processing elements has increased enough to make the graphics memory subsystem a barrier to achieving the higher performance that these elements may be capable of achieving. Typical graphics processing elements which are the memory clients include the host processor, a texture engine, a z-buffer engine, a color engine, a 2D-graphics engine, a 3D-graphics engine and a display engine.FIG. 1 shows a graphics memory controller of the prior art. The memory controller 10 acts as a switch to determine which of several graphics processing clients 12, 14, 16, 18, 20 can access the memory storage array 22, which is organized as a single, i.e., monolithic, partition. Each client requests one or more cycles of the memory storage array 22 which transfers, in each cycle, a data quantity equal to the size of the data bus of the array.Monolithic memory subsystems for the various graphics clients have evolved in an attempt to overcome this barrier chiefly by using a wider memory data bus for increased throughput. Memory busses in current architectures are now 128 bits physically, and 256 bits (32 bytes), effectively, when the minimum data transfer requires both phases of single clock cycle. (Hereinafter, when referring to the size of the data bus, the effective size, rather than physical size is meant.) The size of the memory data bus sets the size of the minimumn access that may be made to the graphics memory subsystem and for some devices that make use of the graphics memory, 32 bytes is an acceptable minimum.However, for many devices in a graphics subsystem, a minimum size of 32 bytes is inefficient. As geometry objects become smaller and finer, a minimum access of 32 bytes transfers more data than is needed by the various graphics engines for the geometry object. One measure of fetching efficiency of the access is the ratio of pixels used for the geometry object to pixels fetched during an access. As the size of the memory bus increases or the minimum access increases, this ratio becomes smaller. A small ratio implies a large amount of wasted memory throughput in the graphics memory subsystem.FIG. 2 shows a plot of the fetching efficiency of the prior art system of FIG. 1, in which the fetching efficiency is plotted on the vertical scale and the relative number of pixels per triangle is plotted on the horizontal scale. The plot shows that, as the number of pixels per triangle decreases, the fetching efficiency also decreases to a point that becomes unacceptably low for small triangles, i.e., triangles having approximately 10 to 100 pixels. At that point more than half the data fetched is not used for the triangle.Thus, it is desirable to improve the fetching efficiency to the local memory storage but without altering the view to the memory clients of memory as a single unit to maintain compatibility with existing memory clients.Monolithic memory subsystems are also inefficient in another way. Because the topology of the array is such that the entire or substantial part of the address path and the control path must reach each device that makes up the memory array, the electrical load on these paths is quite high. 
This leads to a slower cycle time for the array which translates into a loss of throughput than the array can supply. A partial answer to this problem is the replication and buffering of the address and control paths to portions of the array, which has the effect of increasing the cost of the array to gain a portion of the lost performance.Therefore, there is a need to improve the efficiency of the memory access and the topology of the array without changing the existing memory clients. Improved efficiency and topology would lead to higher throughputs from the memory array and would lead to better use of the available memory throughput by the graphics subsystem thereby improving the performance of the graphics subsystem without a major impact on the memory clients.BRIEF SUMMARY OF THE INVENTIONThe present invention meets the above need. A system in accordance with the present invention includes a plurality of memory partitions each operable to service a memory request independently of other memory partitions. The memory request includes information specifying a location of requested data in the memory subsystem and a data transfer size for the request. A plurality of client queues, one for each memory client and each partition is also included along with a plurality of routing circuits, wherein one routing circuit is connected to each memory client and to the client queues. Each routing circuit is operative to determine for each request, independently of the other requests, one or more partitions needed to service the request based on the specified location and transfer size, and each routing circuit is operative to route and store, independently of the other requests, each of the requests in the client queues of the one or more servicing partitions. Further included are a plurality of arbiter circuits, one arbiter circuit connected to each partition and to the client queues for the partition, where each arbiter circuit is operative to select, independently of other partitions, based on a priority policy, a request for servicing from one of the client queues for the partition, and to transfer each of the selected requests, independently of the other selected requests, to the servicing partition.A method of servicing memory requests from at least one memory client in accordance with the present invention includes the following steps. First, at least one memory request from the at least one client is received, where the memory request is directed to a memory subsystem having a plurality of independently operable partitions and includes information specifying a location of requested data in the memory subsystem and a data transfer size for the request. Next, one or more partitions needed to service the request based on the specified location and transfer size are determined and the request is routed to each of the one or more servicing partitions. Following this, the routed request is serviced at each servicing partition independently of the other servicing partitions.One advantage of the present invention is that, for a given memory size, each partition comprises a smaller number of devices. This reduces or eliminates the need for buffered and replicated address and control paths to the devices thereby improving the topology and cycle time of the partition and therefore the memory system.Another advantage of the present invention is that, if a client requires only a small transfer, then only a few partitions, perhaps a single partition, of the total memory array is involved in the transfer. 
This leaves the other partitions of the memory array free to perform other small transfers, thus improving the deliverable throughput of the array. Larger transfers are still possible by splitting the request into a number of smaller requests and submitting the multiple requests to the individual partitions for service.BRIEF DESCRIPTION OF THE DRAWINGSThese and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:FIG. 1 shows a graphics memory controller of the prior art;FIG. 2 shows the fetch efficiency of the prior art;FIG. 3 shows a high-level block diagram of a system in accordance with the present invention;FIG. 4 shows the partitioning of the memory in which the memory as linear array of bytes is mapped to the partitions in an interleaved fashion;FIG. 5A shows a block diagram for the address, command and write data path for the memory system of the present invention;FIG. 5B shows one implementation of the router of FIG. 5A;FIG. 5C shows another implementation of the router of FIG. 5A;FIG. 6A shows a block diagram for the read data path in accordance with one version of the present invention;FIG. 6B shows a block diagram for the read data path in accordance with one version of the present invention with partition-size interface hardware;FIG. 6C shows an implementation of the interface hardware included in the multiplexer circuitry of FIG. 6B;FIG. 6D shows another implementation of the interface hardware included in the multiplexer circuitry of FIG. 6B;FIG. 6E shows another block diagram for the read data path in accordance with another version of the present invention with partition-size interface hardware;FIG. 6F shows an implementation of the interface hardware included in the multiplexer circuitry of FIG. 6E;FIG. 6G shows another implementation of the interface hardware included in the multiplexer circuitry of FIG. 6E;FIG. 6H shows an implementation of the read data path using sequence counts;FIG. 7 shows an alternative embodiment of the command and write data path having an arbiter that permits multiple clients to share a router and client queues;FIG. 8 shows another alternative embodiment of the command and write path in which the arbitration for access to a partition is in two parts;FIG. 9 shows yet another alternative embodiment of the command and write path in which the translation block for requester is removed; andFIG. 10 shows the degree of improvement of the present invention over the prior art.DETAILED DESCRIPTION OF THE INVENTIONFIG. 3 shows a high-level block diagram of a system in accordance with the present invention. The figure shows that the memory array 24 has an number of independently operable partitions 26a, 26b, 26c 26d, each with a bus 28a-28d and a bus 27a-27d having a width w that is preferably a smaller transfer size than the single prior art bus 15 in FIG. 1. In one embodiment, there are four independent partitions P0-P326a, 26b, 26c 26d each with a bus one quarter the size of the non-partitioned bus, i.e., each with a 64 bit bus. Each of the memory system clients 12-20 is connected as before to the memory controller 30 which presents a non-partitioned view of the memory to the clients. The memory controller 30 includes a number of queues 32, 34, 36, 38, 40, 42, 44, 46 that connect to the memory array of each partition and control logic (not shown in FIG. 
2) that determines the one or more partitions to which a request should be routed and the one or more partitions from which a response (read data) to a request should be obtained to maintain the appearance of a non-partitioned memory for the clients. Additionally, the control logic in the memory controller arbitrates among the various clients according to a priority assigned to each of the clients.Partitioning of the memory is shown in FIG. 4 and involves mapping a linear array of bytes to the partitions in an interleaved fashion. In particular, ranges of byte addresses in the linear array are mapped to the various partitions 26a-26d. Preferably, the mapping is such that the lower address bits are mapped to a particular partition so that the memory load is approximately evenly balanced over the partitions. In one embodiment, each partition is mapped to an 8 byte range of memory locations. In a different embodiment, each partition is mapped to a 16 byte range of memory locations but any size mapping can be used without departing from the spirit and scope of the invention. A memory access request can start on any partition and end on any partition as long as it does not wrap around and touch the starting partition. If the transfer size of a request is so large that it would violate this rule, then the request, itself, is broken up to multiple requests each satisfying the rule.If the bus size of a partition is smaller than the range of locations mapped to that partition then multiple accesses are made of the partition until the range of locations in the linear map for that partition is satisfied. For example, if a partition is mapped to a range of locations spanning 16 bytes, but the data bus for the partition is only 8 bytes, then two accesses are made of the partition.In operation there are two cases to consider. In the first case, a client makes a request that maps to exactly one partition in size and alignment. This case involves only one partition and some clients are dedicated to only one partition to simplify accesses for that client. In the second case, a client makes a request that spans two or more partitions. This requires that the request be split up and routed to two or more partitions for servicing of the request. The request can start at any partition. In the subcase of a write request, the write data are split and the write data and write request are routed to the various partitions. Once the write is retired in the arrays for the affected partitions, the write is completed. In the subcase of a read request, the read request is routed to the proper partitions and as the read request is completed by each of the affected partitions the read data is collected and returned to the client.The partition data bus width and client data bus width may be different. A client with a wider data bus than that of the partition may consume several clocks of partition transfer time. Alternatively, a client with a data bus width that is smaller than that of a partition can request a burst having a size that requires only one clock cycle of the partition to access the data from that partition but requires many clock cycles to multiplex the partition's data onto the smaller client data bus. The read circuitry for the narrow client is configured to use several clocks to multiplex the data from the partition onto the narrow client bus. This ensures that unused data is not fetched, and isolates the low bandwidth client from the high bandwidth memory system.FIG. 
5A shows a block diagram for the address, command and write data path for the memory system of the present invention. For convenience, four memory partitions 26a, 26b, 26c, 26d are shown, but any number of partitions is contemplated for the present invention. Each partition has an arbiter, 56a, 56b, 56c, 56d, having an output that supplies requests to the partitions and a number of inputs, each of which connects to a client queue 62-66, 70-74, 78-82, 86-90. Each arbiter 56a-56d prioritizes the servicing of requests at its inputs according to a priority policy and each client queue 62-66; 70-74; 78-82, 86-90 that is connected to an arbiter input receives a memory partition request derived from the one or more memory clients 100, 102, 104, such as the various graphics engines mentioned above. While the figure shows three clients, any number of memory clients is possible as long as the necessary hardware is provided for each client.Priority policies for the arbiters 56a-56d include a least-recently used policy, a fixed priority policy or a round-robin policy or a static or dynamic set of policies depending on the type and nature of the client requests in the client queues. Requests for time critical processes such as display updates or other isochronous processes are given higher priority over requests that have less strict timing requirements.Any number of clients can request memory operations of the partitions; three clients are shown for convenience. For each client, there is a router 110, 112, 114 that determines, based on an address and reference size contained in the client's request, the one or more partitions 26a=, 26d, that are needed to service the request. If, in the example shown, each partition 26a-26d, has an effective address map width of W*8 bits (W bytes) then the router 110-114 decodes a portion of the address in the request to determine which of the W-byte partitions should service the request. In addition to the decoding function of the router, the router, in some versions of the invention, also performs address mapping or address translation functions if they are required for a particular application. The router can also perform the function of breaking up a request that is so large is violates the no-wrap around rule. The router then issues requests to the client queues each of which meets the no-wrap rule.Memory clients 100-104 can place a partition request into one of the client queues 62-66, 70-74, 78-82, 86-90 independently of any other client placing a partition request into a different queue. This prevents memory clients from blocking or interfering with each other at least insofar as memory operations are involved. This means that no partition waits on another partition before servicing a memory request. This arrangement improves memory system throughput for memory reads and writes that require fewer than an integer number of all of the partitions. For example, if there are four partitions, then requests requiring 1, 2, or 3 or 5, 6, or 7 partitions receive improved service. Requests that require an integer number of all four partitions incur no loss in performance in the partitioned memory system.FIG. 5B shows one implementation of logic present in the router of FIG. 5A. The router passes through the request and its address and transfer size information and derives the starting partition information by means of a starting partition logic block 116. 
The starting partition and size are used by the router to determine which partition client queue to load with a request.FIG. 5C shows another implementation of logic present in the router of FIG. 5A. In this case, the router passes through the request and its address and transfer size information and derives not only the starting partition information but also a sequence count by means of sequence counter 118. Sequence counter 118 is incremented for each new request as long as there is a not busy condition according to gate 119. The starting partition and size are used by the router to determine which partition client queue to load with a request.FIG. 6A shows a block diagram for the read data path in accordance with one version of the present invention. In this case, the partition data bus width is the same as the client's data bus width. Read requests that touch only a single partition and burst read requests that span a number of partitions must be serviced by those partitions and the read data must be presented in the order requested by the client back to the client. To handle this case and to prevent deadlocks, the hardware of FIG. 6A is required for each client.For a particular client, each partition in FIG. 6A has a read queue 120a-120d that captures the read data from the partition array 26a-26d. This read queue 120a-120d is controlled by a queue controller 122, 124, 126, 128 that is connected to the read queue 120a-120d, the partition 26a-26d, and to a control block 130. Each of the read queue controllers 122-128 receives a data_valid signal from the partition 26a-26d and provides a queue control signal, q_empty, to the control block 130. Each of the read queues 120a-120d connects to the input of a multiplexer 132 whose output provides the read data to the client to which this circuitry is dedicated. Another queue, called a side queue 134, receives information regarding the starting partition for the read request, sub request ID, and the burst size, if any, for the read request. Sub-requestors 153, 155, 157, in FIG. 7 require sub request IDs because such sub clients share a single request path. In FIG. 6A, control block 130 is connected to the multiplexer 132 to selectively couple one of the inputs to the output of the multiplexer as the data becomes available at the output of the read queue 120a-120d, respectively connected to each multiplexer input. The control block also provides signals to the queue controllers for use in controlling the queues and provides the data valid signals, data_valid_req0a, datavalidreq0b, data_validgreq0c, for each of the clients.The hardware according to the block diagram of FIG. 6A, operates to collect the various parts of read data that are supplied by the partitions 26a-26d in response to a read request. The partition loads the read data into read queues corresponding to the client that issued the read by using the data_valid signal as a load signal for the client read queue. For example, if the read request starts at partition 1, 26b as determined by the data in the side queue 134 that is received by the control block 130, then when the queue controller 122-128 for a partition 26a-26d discovers that data is ready in the read queue for that partition by means of the q_empty signal becoming false, the control block 130 sets the multiplexer 132 to output the read queue for partition 1, 26b to the particular client. Next, the queue controller for partition 2, 26c is checked to see if the data is ready from partition 2. 
If so, then the multiplexer 132 is set to output the read queue for partition 2 to the client. This sequence continues until the full amount of data requested by the client is sent to the client. If the data bus size of a partition is smaller than the range of to locations mapped to that partition then multiple read data of the partition may be returned until the range of locations in the linear map for that partition is satisfied.In some cases a memory client can accept data in an out-of-order fashion. This means that instead of accepting data, lowest byte first for example, the client can accept data in a number of other pre-specified orders. In these cases, the queue controllers 122-128 operate to cause the queues 120a-d to output data in a non-first-in-first-out order in order to provide read data in the particular order in which the clients is accepting the data. This increases throughput of the memory system because a client that can receive data out-of-order can accept data as soon as it is available from a partition.The embodiment of FIG. 6H does not use a side queue, but instead provides each client with a sequence number counter or equivalent mechanism, which increments for each reference that is issued by the client. The sequence count is conveyed to all servicing partitions along with start partition, transfer size and sub request ID if there are multiple clients sharing the same router as shown in FIG. 7. The sequence count is passed through the partition and loaded into the read queues 120a-d with the read data when that partition's read return data returns from memory. The outputs of read queues 120ad transfer this information to controller 130. Controller 130 examines the partition read queue outputs and selects read data tagged with the next sequence number only after exhausting read data tagged with the current sequence count.FIG. 6B shows a block diagram for the read data path in accordance with one version of the present invention with partition-size interface hardware, which is needed when the data bus width of a client is different than that of the partition. The data bus width of a partition may be smaller than that of a client, such that the partition must be cycled more than once to obtain enough data to match the bus width of the client. In this case the control logic operates to cause a burst of data (several data words) to be produced from the partition. Interface hardware blocks 140, 142, 144, 146 for holding the burst data are additionally included in the multiplexer circuitry 133. The multiplexer circuitry selects one of the interface blocks to provide some or all of the burst data.FIG. 6C shows an implementation of the interface hardware included in the multiplexer circuitry of FIG. 6B. This circuitry adapts a W wide partition to a client bus having a width of 2W. A register 150 and multiplexer 152 capture and hold a portion of the burst data and the remaining portion of the burst data is provided by the associated read queue. In other embodiments, both W-wide paths are provided by registers.FIG. 6D shows another implementation of the interface hardware included in the multiplexer circuitry of FIG. 6B. In this implementation, the partition bus is W bytes wide and the client bus is [1/2] W wide. A multiplexer 152 adapts the wider bus to the smaller bus.FIG. 6E shows another block diagram for the read data path in accordance with another version of the present invention with partition-size interface hardware. 
In this case, the client bus width is larger than the partition address map size. In this example, there are two interfacing blocks included within the multiplexer circuitry 133. The first interface block 141 is connected to receive data from read queues 120a and 120b and the second interface block 143 is connected to receive data from read queues 120c and 120d. FIG. 6F shows an implementation of the interface hardware included in block 143 P of the multiplexer circuitry of FIG. 6E. The interface hardware simply passes through the two W-byte sized busses to provide a bus that is 2W, because the outputs of a pair of read queues are available to the multiplexer circuitry.FIG. 6G shows another implementation of the interface hardware included in the multiplexer circuitry of FIG. 6E. The interface block shown adapts a partition bus having a width W and address map width 2W to a client bus that has a width 4 W. A pair of registers, 154a, 154b, and a pair of multiplexers, 156a, 156b, are used to capture multiple data items from the pair of read queues to which each of the interface block, 141, 143 is connected.FIG. 7 shows an alternative embodiment of the command and write data path. In this embodiment, an arbiter 151 is added to prioritize a number of sub-requestors 153, 155, 157 that share a command and write data path in order to reduce the hardware that would otherwise be needed for each requester. Each sub-requestor has a sub-request ID. The data_valid signals of FIGS. 6A and 6B and the sub-request ID indicate which of sub-requestors 153, 155, 157 should collect the read data.FIG. 8 shows another alternative embodiment of the command and write path in which the arbitration for access to a partition is in two parts, arbiter-0a 56a-56d, and secondary arbiter-0b 0b 160-166, and there is a preferential path for a high priority requester 106, which, in the figure, is Req3. Low priority requesters arbitrate to gain access to the partitions using first the arbiter-0a and then secondary arbiter-0b circuitry but the high priority request via client queues 60, 68, 76, 84 arbitrates to gain access using only the arbiter-0b circuitry 160-166. It is also advantageous to introduce a refresh requestor 170 as shown and memory device initialization requests into the arbiter-0b 160-166 blocks.The above describes the cases in which a (i) processing request is sent to a client, (ii) the client performs the requested processing and makes memory requests and (iii) the memory system routes the memory requests to the proper client queues. However, if a memory client is itself partitioned into an number of processing stages, an alternative is (i) to determine the one or more processing stages required for a processing request, (ii) route the processing requests to the proper stage for processing and then (iii) send the request directly to the appropriate client queue which is connected to a specific partition. A single memory client that is not partitioned into stages may also be restricted to only access data in a single partition and connect directly only to that partition. FIG. 9 shows this alternative embodiment of the command and write path in which there is no router for client 3106 because the client directs its requests via client queue 84 only to a particular partition, partition 3, 26d. For these partitioned clients, such as a raster-operation pipeline, this simplifies and speeds up the access to the memory subsystem. 
In addition, the read path is simplified because the read data is directly returned to the requester from the dedicated partition. The multiplexer and control circuitry of FIG. 5 is not required for a requester that is dedicated to a particular partition.One or more of the above-described embodiments can be combined according to the embodiments that are needed for the types and number of memory clients and all such combinations are within the spirit and scope of the present invention.FIG. 10 shows a plot illustrating the improved fetching efficiency of the present invention. For the same small triangle size, between approximately 10 and 100 pixels there is a marked improvement in the efficiency.Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein. |
Monolithic three dimensional (3D) flip-flops with minimal clock skew and related systems and methods are disclosed. The present disclosure provides a 3D integrated circuit (IC) (3DIC) that has a flop spread across at least two tiers of the 3DIC. The flop is split across tiers with transistor partitioning in such a way that keeps all the clock related devices at the same tier, thus potentially giving better setup, hold and clock-to-q margin. In particular, a first tier of the 3DIC has the master latch, slave latch, and clock circuit. A second tier has the input circuit and the output circuit. |
What is claimed is: 1. A three dimensional (3D) flip-flop, comprising: a master latch disposed in a first tier of a 3D integrated circuit (IC) (3DIC), the master latch configured to receive an input and a clock input, the master latch configured to provide a master latch output; a slave latch disposed in the first tier of the 3DIC, the slave latch configured to provide a 3DIC flip-flop output; a clock circuit configured to provide the clock input, the clock circuit disposed in the first tier of the 3DIC; and a data input circuit configured to provide data input to the master latch, the data input circuit disposed in a second tier of the 3DIC different from the first tier. 2. The 3D flip-flop of claim 1, further comprising an output circuit configured to receive the master latch output and generate a buffered output of the master latch output, the output circuitry disposed in a tier different from the first tier. 3. The 3D flip-flop of claim 2, wherein the tier different from the first tier is comprised of the second tier. 4. The 3D flip-flop of claim 1, wherein the 3DIC is comprised of a monolithic 3DIC. 5. The 3D flip-flop of claim 1, wherein the slave latch comprises a plurality of slave latches and the master latch comprises only a single master latch. 6. The 3D flip-flop of claim 1, wherein the master latch comprises a plurality of master latches and the slave latch comprises only a single slave latch. 7. The 3D flip-flop of claim 1, wherein the clock circuit comprises two inverters to provide a buffered clock signal and a complementary clock signal. 8. The 3D flip-flop of claim 1, further comprising an input multiplexer configured to select between the data input circuit and a scan input provided as the input to the master latch, the input multiplexer disposed in the second tier. 9. The 3D flip-flop of claim 1, wherein the first tier comprises lower threshold voltage transistors relative to transistors in the second tier. 10. The 3D flip-flop of claim 1, wherein the first tier comprises high-K metal gate transistors, and the second tier comprises polysilicon transistors. 11. The 3D flip-flop of claim 1 integrated into an IC. 12. The 3D flip-flop of claim 1 integrated into a device selected from the group consisting of a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player. 13. A three dimensional (3D) flip-flop, comprising: a master means for receiving an input and a clock input, the master means configured to provide a master latch output, the master means disposed in a first tier of a 3D integrated circuit (IC) (3DIC); a slave means for providing a 3DIC flip-flop output, the slave means disposed in the first tier of the 3DIC; a clock means for providing the clock input, the clock means disposed in the first tier of the 3DIC; and a data input circuit configured to provide data input to the master means, the data input circuit disposed in a second tier of the 3DIC different from the first tier. 14. The 3D flip-flop of claim 13, wherein the clock means comprises a clock circuit. 15. 
The 3D flip-flop of claim 13, wherein the master means comprises a master latch. 16. A method of designing a flip-flop, comprising: disposing a master latch, a slave latch, and a clock circuit in a first tier of a three dimensional (3D) integrated circuit (IC) (3DIC); and disposing a data input circuit in a second tier of the 3DIC different from the first tier. 17. The method of claim 16, wherein disposing the data input circuit in the second tier comprises configuring the data input circuit to provide data input to the master latch. 18. The method of claim 16, wherein disposing the clock circuit in the first tier comprises configuring the clock circuit to provide a clock input to the master latch. 19. The method of claim 16, wherein disposing the slave latch in the first tier comprises configuring the slave latch to provide a 3DIC flip-flop output. 20. The method of claim 16, further comprising disposing an output circuit in the second tier. |
MONOLITHIC THREE DIMENSIONAL (3D) FLIP-FLOPS WITH MINIMAL CLOCK SKEW AND RELATED SYSTEMS AND METHODS PRIORITY APPLICATIONS [0001] The present application claims priority to U.S. Provisional Patent Application Serial No. 61/846,652 filed on July 16, 2013 and entitled "MONOLITHIC THREE DIMENSIONAL (3D) SCAN D-FLOP DESIGN WITH MINIMAL CLOCK SKEW," which is incorporated herein by reference in its entirety. [0002] The present application also claims priority to U.S. Patent Application Serial No. 14/012,445 filed on August 28, 2013 and entitled "MONOLITHIC THREE DIMENSIONAL (3D) FLIP-FLOPS WITH MINIMAL CLOCK SKEW AND RELATED SYSTEMS AND METHODS," which is incorporated herein by reference in its entirety. BACKGROUND I. Field of the Disclosure [0003] The technology of the disclosure relates generally to monolithic three dimensional (3D) integrated circuits (IC) (3DIC). II. Background [0004] Mobile communication devices have become common in current society. The prevalence of these mobile devices is driven in part by the many functions that are now enabled on such devices. Demand for such functions increases processing capability requirements and generates a need for more powerful batteries. Within the limited space of the housing of the mobile communication device, batteries compete with the processing circuitry. The limited space contributes pressure to a continued miniaturization of components and power consumption within the circuitry. While miniaturization has been of particular concern in the integrated circuits (ICs) of mobile communication devices, efforts at miniaturization of ICs in other devices have also proceeded. [0005] Historically, elements within an IC have all been placed in a single two dimensional active layer with elements interconnected through one or more metal layers that are also within the IC. Efforts to miniaturize are reaching their limits in a two dimensional space and thus, design thoughts have moved to three dimensions. While there have been efforts to connect two or more ICs through a separate set of metal layers outside the IC proper, that solution is not properly a three dimensional (3D) approach. Likewise, two IC chips have been stacked one atop another with connections made between the two IC chips through solder bumps (i.e., the so called "flip chip" format). Likewise, there are system in package (SIP) solutions that stack IC chips atop one another with connections made between the chips with through silicon vias (TSVs). While arguably the flip chip and TSV embodiments represent 3D solutions, the amount of space required to effectuate a flip chip remains large. Likewise, the space required to implement a TSV relative to the overall size of the chip becomes space prohibitive. [0006] In response to the difficulties in effectuating small ICs that meet miniaturization goals, the industry has introduced monolithic three dimensional ICs (3DICs). The advent of monolithic 3DIC has provided a number of interesting possibilities in circuit design, but creates its own design issues. In particular, process variations between layers or tiers of the 3DIC may result in unacceptable clock skew with very large 3-sigma spread. When such skewed clock signals are applied to flip- flops, this clock skew may result in unacceptable setup times, hold times, or clock-to-q margins. The skew introduced by the process variations may further be aggravated by the software that automatically performs chip layout design. 
SUMMARY OF THE DISCLOSURE [0007] Embodiments disclosed in the detailed description include monolithic three dimensional (3D) flip-flops with minimal clock skew and related systems and methods. The present disclosure provides a 3D integrated circuit (IC) (3DIC) that has a flop spread across at least two tiers of the 3DIC. The flop is split across tiers with transistor partitioning in such a way that keeps all the clock related devices at the same tier, thus potentially giving better setup, hold and clock-to-q margin. In particular, a first tier of the 3DIC has the master latch, slave latch, and clock circuit. A second tier has the input circuit and the output circuit. By placing the elements of the flop requiring a minimal sampling window in a single tier, each of these elements are subject to the same manufacturing process, and thus, process variations between elements in the same tier are minimized. While process variations between tiers may still exist, the process variations for each of the clock related devices are reduced. By reducing or eliminating the process variations between the clock related elements, the clock skew to each element is consistent and able to be addressed readily. [0008] In this regard in one embodiment, a 3D flip-flop is provided. The 3D flip-flop comprises a master latch disposed in a first tier of a 3DIC, the master latch configured to receive an input and a clock input, the master latch configured to provide a master latch output. The 3D flip-flop also comprises a slave latch disposed in the first tier of the 3DIC, the slave latch configured to provide a 3DIC flip-flop output. The 3D flip-flop further comprises a clock circuit configured to provide the clock input, the clock circuit disposed in the first tier of the 3DIC. The 3D flip-flop also comprises a data input circuit configured to provide the data input to the master latch, the data input circuit disposed in a second tier of the 3DIC different from the first tier. [0009] In this regard in one embodiment, a 3D flip-flop is provided. The 3D flip-flop includes a master means for receiving an input and a clock input, the master means configured to provide a master latch output, the master means disposed in a first tier of a 3DIC. The 3D flip-flop also includes a slave means for providing a 3DIC flip-flop output, the slave means disposed in the first tier of the 3DIC. The 3D flip flop also includes a clock means for providing the clock input, the clock means disposed in the first tier of the 3DIC. The 3D flip-flop also includes a data input circuit configured to provide data input to the master means, the data input circuit disposed in a second tier of the 3DIC different from the first tier. [0010] In this regard, in a further embodiment, a method of designing a flip-flop is disclosed. The method includes disposing a master latch, a slave latch, and a clock circuit in a first tier of a 3DIC. The method also includes disposing a data input circuit in a second tier of the 3DIC different from the first tier. 
BRIEF DESCRIPTION OF THE FIGURES [0011] Figure 1 is a perspective view of an exemplary three dimensional (3D) integrated circuit (IC) (3DIC); [0012] Figure 2 is a block diagram of an exemplary conventional scan D-flop circuit; [0013] Figure 3 is a block diagram highlighting exemplary concepts of the present disclosure within a scan D-flop circuit; [0014] Figure 4 is a simplified exploded perspective view of a 3DIC incorporating the exemplary D-flop of Figure 3; [0015] Figure 5 is an exemplary 3DIC incorporating a scan D-flop according to an exemplary embodiment of the present disclosure; [0016] Figure 6 is a flow chart illustrating a design process that may be used in designing flops according to exemplary embodiments of the present disclosure; and [0017] Figure 7 is a block diagram of an exemplary processor-based system that can include the scan D-flop of Figures 3 through 5. DETAILED DESCRIPTION [0018] With reference now to the drawing figures, several exemplary embodiments of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. [0019] Embodiments disclosed in the detailed description include monolithic three dimensional (3D) flip-flops with minimal clock skew and related systems and methods. The present disclosure provides a 3D integrated circuit (IC) (3DIC) that has a flop spread across at least two tiers of the 3DIC. The flop is split across tiers with transistor partitioning in such a way that keeps all the clock related devices at the same tier, thus potentially giving better setup, hold and clock-to-q margin. In particular, a first tier of the 3DIC has the master latch, slave latch, and clock circuit. A second tier has the input circuit and the output circuit. By placing the elements of the flop requiring a minimal sampling window in a single tier, each of these elements are subject to the same manufacturing process, and thus, process variations between elements in the same tier are minimized. While process variations between tiers may still exist, the process variations for each of the clock related devices are reduced. By reducing or eliminating the process variations between the clock related elements, the clock skew to each element is consistent and able to be addressed readily. [0020] In this regard, Figure 1 is a perspective view of an exemplary 3DIC 10 that may incorporate flops according to the present disclosure. The 3DIC 10 has a first tier 12 with a first active layer 14 in which elements are disposed. The 3DIC 10 has a second tier 16 different than the first tier 12 with a second active layer 18 in which elements are disposed. The elements within the first active layer 14 and the second active layer 18 are interconnected by monolithic intertier vias (MIV) 20. For more information about MIV, the interested reader is referred to "High-Density Integration of Functional Modules Using Monolithic 3D-IC Technology" by Shreedpad Panth et al. in the proceedings of the IEEE/ ACM Asia South Pacific Design Automation Conference, 2013; pp. 681-686 which is hereby incorporated by reference in its entirety. The 3DIC 10 may be formed through hydrogen cutting or similar technique. For more information on an exemplary hydrogen cutting process, the interested reader is referred to U.S. 
Patent Application Serial Number 13/765,080, filed February 12, 2013, which is herein incorporated by reference in its entirety. The tiers 12, 16 may be electrically isolated (other than the MIV 20) by an electromagnetic shield (not shown) such as a graphene shield. For more information about graphene shields in 3DIC, the interested reader is referred to U.S. Patent Application Serial Number 13/765,061, filed February 12, 2013, the disclosure of which is herein incorporated by reference in its entirety. [0021] With reference to Figure 2, an exemplary conventional scan D-flop 22 is illustrated. For clarification of terminology, a D-flop is a form of a flip-flop. Likewise, a scan flop is a type of flip-flop that allows testing of the flip-flop through some additional circuitry. Because such testing is ubiquitous, many conventional flip-flops are in reality scan flip-flops. In conventional deployments, each element of the D-flop 22 is positioned within a single active layer of an IC (not shown) with interconnections between elements of the D-flop 22 achieved in the metal layers (not shown) of the IC as is well understood. The D-flop 22 includes a master latch 24 and a slave latch 26. The D-flop 22 also includes a clock circuit 28 and a data input circuit 30. While the master latch 24, slave latch 26, clock circuit 28, and data input circuit 30 each include one or more transistors or other elements, these are not explicitly labeled since such elements are conventional and well known in the industry. For more information about flip-flops and D-flops in general, the interested reader is directed to U.S. Patent No. 2,850,566, filed September 8, 1953, which is hereby incorporated by reference in its entirety. As noted above, in the conventional D-flop 22, each of the master latch 24, slave latch 26, clock circuit 28 and data input circuit 30 are all within one plane of the IC. [0022] Problems arise with conventional flip-flops as the number of devices in a particular IC grows. As the number of devices grows, the delay between elements may result in unacceptable clock-to-q skew. The sources of clock skew are a result of local device to device mismatches, which can be due to random or systematic variation (or both). A random variation may be the result of a difference in dopant concentration within the channel of the device which results in the device being slightly slower or faster compared to the target. Similarly, due to shrinking geometries, the local context within the die, or smaller section therein, that a particular device sits in also leads to differences in dopant concentration (due to non-regular absorption of activation energy) as well as differences in the lattice stress that eh channel undergoes, again resulting in a device that is slower or faster than the target. Another source of variation is the non- singular interconnect delays between different devices as not all interconnects (or the metals to connect device terminals) are the same. One technique that has been proposed by the assignee of the present disclosure is to use monolithic 3DIC to shorten the length of connective conductors. While shortening connective conductors does reduce delay, process variations between tiers of a monolithic 3DIC may result in unintentional skew and a large 3-sigma spread. [0023] The present disclosure addresses the process variations across tiers by implementing a flip-flop across multiple tiers of a 3DIC. 
However, the flip-flop is arranged so that the master and slave latches are on the same tier with the clock circuitry. The input circuitry is on a second, different tier. By placing the master and slave latches on the same tier with the clock circuitry, the process variations are uniform within that tier which reduces the skew and the 3-sigma spread. [0024] In this regard, Figures 3 and 4 illustrate a schematic of a flip-flop 32 having a master latch 34, a slave latch 36 and clock circuitry 38 disposed in a first tier 40 (Figure 4) of a 3DIC 42 (Figure 4). Data input circuit 44 is disposed in a second tier 46 (Figure 4) of the 3DIC 42. Note that the flip-flop 32 may be a scan flop and have a scan input (Sin) 48, although the concepts of the present disclosure work well for both scan flops and normal flip-flops. If the flip-flop 32 is a scan flop, then in addition to the scan input 48, an input multiplexer (not shown) may be used to select between the data input circuit 44 and the scan input 48. The multiplexer is positioned in the second tier of the 3DIC. Additionally an output 50 may be positioned on the second tier. In an exemplary embodiment, the first tier 40 is positioned beneath the second tier 46. It should be appreciated that while not illustrated in Figures 3 or 4, MIV, such as MIV 20 intercouple the first tier 40 with the second tier 46 allowing electrical connections between the elements in the first tier 40 (e.g., the master latch 34, slave latch 36, and clock circuitry 38) and the elements in the second tier 46 (e.g., data input 44, scan input 48, and output 50). [0025] In exemplary embodiments, the materials or characteristics of the tiers may be varied to further improve or optimize performance. For example, the first tier may have transistors having a lower threshold voltage than the transistors of the second tier. Alternatively, the transistors of the first tier may be made from high-K metal gate transistors and the transistors of the second tier may be made from polysilicon transistors. [0026] Figure 5 illustrates an exemplary die layout for the flip-flop 32 in the 3DIC 42. In particular, Figure 5 shows the various conductive and semiconductive elements in a top plan view format as laid out by circuit design software and tested through a program such as Simulation Program with Integrated Circuit Emphasis (SPICE). As with the circuit shown in Figures 3 and 4, the first tier 40 includes the master latch 34, the slave latch 36, and the clock circuitry 38. The second tier 46 includes the data input 44, the scan input 48, and the output 50. [0027] Using the monolithic 3DIC with the folded flip-flop 32 of the present disclosure provides improved power/performance/area (PPA) trade-off for most Application Specific Integrated Circuit (ASIC) designs and eliminates or at least reduces mismatches or unintended skew due to random process variations between different tiers of the 3DIC. This arrangement should result in minimal clock skew and give a flop with a good setup, hold and clock-to-q margins. An additional benefit is that by moving the latches to a different tier, congestion on the input tier is reduced giving enhanced pin accessibility and porosity to the router. [0028] While not illustrated in Figures 3 through 5, it should be appreciated that the monolithic 3DIC 42 may include other circuitry such as memory bitcells, digital signal processors, baseband processors, or the like as needed or desired. Such additional elements may complicate circuit layout. 
Accordingly, many circuits are designed through the use of a software program that automates the placement and interconnection of elements within a circuit. Such software may allow circuit designers to determine where certain elements may be positioned before running the algorithms that assign placements to the remaining elements. Alternatively, the software may accommodate hard macro commands that allow certain sub-elements to have a particular relative arrangement within the circuit regardless of position. One such hard macro command could be the requirement that the master latch 34, slave latch 36 and clock circuit 38 are all in one tier and the inputs 44, 48, and output 50 are in a second tier. [0029] In this regard, Figure 6 illustrates an exemplary process 60 of circuit design for the flip-flop 32. The process 60 starts when the circuit designer realizes that a flip-flop is needed in the circuit (block 62). The circuit designer, either directly or through the software, disposes the master latch 34, the slave latch 36, and the clock circuit 38 on the first tier 40 (block 64). The circuit designer, either directly or through the software, disposes the data input 44 on the second tier 46 (block 66). [0030] With continued reference to Figure 6, the circuit designer, either directly or through the software, disposes the output 50 in the second tier 46 (block 68). The circuit designer then arranges the interconnections using MIV 20 or other conductive elements to couple the elements (block 70). The rest of the circuit may then be populated. [0031] The monolithic 3D scan D-flop design with minimal clock skew according to embodiments disclosed herein may be provided in or integrated into any processor- based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player. [0032] In this regard, Figure 7 illustrates an example of a processor-based system 80 that can employ the flip-flop 32 illustrated in Figures 3 through 5. In this example, the processor-based system 80 includes one or more central processing units (CPUs) 82, each including one or more processors 84. The CPU(s) 82 may have cache memory 86 coupled to the processor(s) 84 for rapid access to temporarily stored data. The CPU(s) 82 is coupled to a system bus 88 and can intercouple devices included in the processor- based system 80. As is well known, the CPU(s) 82 communicates with these other devices by exchanging address, control, and data information over the system bus 88. [0033] Other devices can be connected to the system bus 88. As illustrated in Figure 7, these devices can include a memory system 90, one or more input devices 92, one or more output devices 94, one or more network interface devices 96 and one or more display controllers 98, as examples. The input device(s) 92 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 94 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. 
The network interface device(s) 96 can be any devices configured to allow exchange of data to and from a network 100. The network 100 can be any type of network, including but not limited to a wired or wireless network, private or public network, a local area network (LAN), a wide local area network (WLAN), and the Internet. The network interface device(s) 96 can be configured to support any type of communication protocol desired. [0034] The CPU(s) 82 may also be configured to access the display controller(s) 98 over the system bus 88 to control information sent to one or more displays 102. The display controller(s) 98 sends information to the display(s) 102 to be displayed via one or more video processors 104, which process the information to be displayed into a format suitable for the display(s) 102. The display(s) 102 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc. [0035] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the embodiments disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The arbiters, master devices, and slave devices described herein may be employed in any circuit, hardware component, IC, or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. [0036] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an ASIC, a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0037] The embodiments disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. 
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server. [0038] It is also noted that the operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0039] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Method and apparatus for providing logic emulation. Specifically, the present invention provides logic emulation by using waferscale integration. |
1. A logic emulation apparatus comprising:a semiconductor wafer having a plurality of programmable cells, wherein a portion of said plurality of programmable cells are input/output transceiver cells, wherein a portion of said plurality of programmable cells are function cells, wherein a portion of said plurality of programmable cells are routing cells and wherein a portion of said plurality of programmable cells are clock generating cells. 2. The logic emulation apparatus of claim 1, wherein at least one of said function cells is a programmable cell having three inputs and one output.3. The logic emulation apparatus of claim 1, wherein at least two of said programmable cells are cascaded to form a flip-flop.4. The logic emulation apparatus of claim 2, wherein said output of said function cell has a scanable path.5. The logic emulation apparatus of claim 1, wherein at least one of said routing cells is a programmable cell having eight inputs and one output.6. The logic emulation apparatus of claim 5, wherein said output of said routing cell has a scanable path.7. The logic emulation apparatus of claim 4, wherein at least one of said routing cells is a programmable cell having eight inputs and one output, wherein said output of said routing cell has a scanable path, wherein said scanable paths of said at least one function cell and said at least one routing cell form a scan-chain.8. The logic emulation apparatus of claim 1, wherein at least one of said clock generating cell produces two clock signals of different phases from a single input signal.9. The logic emulation apparatus of claim 8, wherein said phase difference of said two clock signals are programmable.10. The logic emulation apparatus of claim 1, wherein at least two of said programmable cells are cascaded to form a compound gate.11. A method for implementing wafer scale emulation, said method comprising the steps of:a) identifying defects on a wafer; and b) mapping a plurality of programmable cells onto non-defective portions of said wafer, wherein:a portion of said plurality of programmable cells are input/output transceiver cells, wherein a portion of said plurality of programmable cells are function cells, wherein a portion of said plurality of programmable cells are routing cells and wherein a portion of said plurality of programmable cells are clock generating cells. 12. The method of claim 11, wherein at least two of said programmable cells are cascaded to form a flip-flop.13. The method of claim 11, further comprising the step of:c) providing a scannable path to an output of at least one of said function cells. 14. The method of claim 11, further comprising the step of:c) providing a scannable path to an output of at least one of said routing cells. 15. A logic emulation system comprising:a semiconductor wafer having a plurality of programmable cells, wherein a portion of said plurality of programmable cells are input/output transceiver cells, wherein a portion of said plurality of programmable cells are function cells, wherein a portion of said plurality of programmable cells are routing cells and wherein a portion of said plurality of programmable cells are clock generating cells; a target board; and communicating means disposed between said semiconductor wafer and said target board. 16. The logic emulation system of claim 15, wherein said communicating means is an electrical cable.17. The logic emulation system of claim 15, wherein said communicating means is a bed of nails board.18. 
A logic emulation apparatus comprising:a semiconductor wafer having a plurality of identical substructures, wherein each of said identical substructures comprises a plurality of programmable cells, wherein a portion of said plurality of programmable cells are input/output transceiver cells, wherein a portion of said plurality of programmable cells are function cells, wherein a portion of said plurality of programmable cells are routing cells and wherein a portion of said plurality of programmable cells are clock generating cells. 19. The logic emulation apparatus of claim 18, wherein at least two of said programmable cells are cascaded to form a flip-flop.20. The logic emulation apparatus of claim 18, wherein said output of said function cell has a scanable path.21. The logic emulation apparatus of claim 18, wherein said output of said routing cell has a scanable path.22. The logic emulation apparatus of claim 18, wherein said substructures cover substantially all of said wafer. |
The present invention relates to a novel method and apparatus for performing logic emulation. More specifically, the present invention provides logic emulation by using waferscale integration.BACKGROUND OF THE DISCLOSURELogic emulation has become an important aspect of ASIC verification. Logic emulation allows users to create a hardware model of a chip design by using emulation software that maps the design onto reprogrammable circuitry or emulation systems. Specifically, emulation systems often use arrays of discrete programmable logic devices (PLDs), e.g., hundreds of logic processors, such as Field Programmable Gate Arrays (FPGAs), which can mimic the operation of an ASIC design prior to fabrication. This "virtual silicon" is a functional equivalent of the actual chip, operating at close to real time, thereby assuring correct timing relationships.For example, FIG. 1 illustrates a conventional emulation system having a plurality of printed circuit boards (PCBs) that are in communication with each other with each board containing a plurality of FPGAs. When a chip design or design under test (DUT) is emulated using such a system, the DUT is often expressed as a netlist. The netlist represents the interconnection of circuit elements and the descriptions of the connections among those circuit elements. In other words, the netlist describes a list of elements such as gates, rams, flip-flops, and the wires that go between these elements. In operation, the netlist is mapped onto the FPGAs of the emulation system by using mapping software typically provided by the vendor of the emulation system. The mapping process typically requires a partitioning step where the netlist is partitioned into portions or partitions, where each partition is mapped onto a FPGA. In turn, each FPGA is then complied to map the logical functionality encapsulated by that partition onto the programmable resources of that FPGA. Additionally, the designer would have to ensure that the interconnect resources available on each PCB are compatible with that particular partition.Thus, logic emulation has transformed electronic design by enabling early integration and hardware-software coverification. Namely, logic emulation allows designers to test their designs before committing to the costly action of fabricating chip prototypes.However, as chip designs continually increase in complexity, designers are facing various logic emulation pitfalls. First, as chip designs get larger and more complex, the cost and size of the logic emulation system must also increase accordingly. Specifically, referring to FIG. 1, as the size and complexity of the DUT is increased, the emulation system allows for expansion by connecting additional PCBs to the system. Unfortunately, the connections between the PCBs and the connections between the FPGAs are themselves limiting factors, because these interconnections are physically limited. These physical limitations at the boundaries of the FPGAs and at the boundaries of the PCBs create tremendous pressure on the partitioning software to properly identify a partitioning point in the DUT to cut the design so that the limited number of connections available is enough to accommodate a particular partition. 
In sum, traditional emulation systems have limited routability and mapability due to physical constraints.Second, conventional emulation systems limited by their interconnect and internal architecture of the PLDs will force a designer to implement "time slice" operations, where multiple cycles of time in the emulation system are required to use the limited interconnections to transfer one equivalent clock worth of data as designed in the DUT. For example, if there are a thousand interconnections between two PCBs and the DUT dictates a transfer of three thousand signals for a given clock cycle, then it is necessary in practice to take three clocks to accomplish the transfer. The use of the time slice approach adds another layer of complexity in the emulation process because the software must now track the completion of multiple transfers before a signal can be declared. Thus, as the complexity of the DUT increases, the size and cost of the emulation system also increase rapidly while performance drops.Additionally, the level of confidence that the emulation system is performing its functions is also jeopardized. Namely, the emulation software is severely pressured to address complicated demands arising from limited interconnections and the use of multiple time slices to transfer data. Commonly, debugging operations often uncover software bugs directly caused by the FPGA complier or the partitioning software. Namely, the emulation tool itself creates errors that must be addressed by the designer.Therefore, a need exists for a novel method and apparatus that is capable of providing logic emulation to address increasing complexity of chip designs while providing a lower cost structure, a smaller system size, and natural flexibility in defining and distributing logic and routing functions.SUMMARY OF THE INVENTIONIn one embodiment of the present invention, a novel method and apparatus for providing logic emulation is disclosed. Specifically, the present invention provides logic emulation by using waferscale integration.The present invention uses a semiconductor wafer to build a sea of soft-programmable logic cells and interconnections. One unique aspect of the present invention is that the cells are homogeneous to such an extent that even flip-flops are constructed from these programmable logic cells. In other words, the fine granularity of the programmable logic cells provides routability and mapability that are not achievable by traditional emulation systems.The cells implemented on the wafer include, but are not limited to input/output transceiver cells, function or logic cells, routing cells, and clock generating cells. The input/output transceiver cells provide the physical attachments for receiving and sending signals to and from an external device, e.g., a target board.The function or logic cells are programmable such that the output of the logic cell is an arbitrary function of the inputs. For example, a three-bit input (a, b, c) logic cell will produce an output y(a,b,c). In one embodiment, the three-bit input logic cell will have an eight-bit storage set of values that amount to a truth table.The routing cells are selectable so that one of its inputs is passed or routed to its output. In one embodiment, the routing cell is implemented as an 8-to-1 routing cell.Finally, the clock generating cells produce clock signals for other cells on the wafer. 
In one embodiment, two different clock signals of differing phases are generated from a single waveform, e.g., a square waveform.In operation, a netlist for a DUT is mapped onto the cells of the wafer, i.e., mapping software is used to map a candidate design onto the programmable resources on the wafer. Since wafers usually have some defective regions, software tools are used to identify those regions so that the design mapping software will avoid mapping portions of the netlist onto the defective regions.Compared to the traditional PLD systems, the present wafer-scale emulation system has several advantages such as lower cost, smaller size, and higher ratio of interconnect (wiring) to logic elements. Programmable elements can be targeted to the logic emulation function, and as a result non-logic-gate functions like RAMs and clock distribution can be implemented with more efficiency and greater flexibility. The logic and routing functions can be defined in tandem with the software which maps a design to those functions, whereas a conventional system must adapt partitioning and routing software to the limited interconnect and internal architecture of PLDs. Finally, software quality and reliability (e.g., number of bugs) for the present wafer-scale system should be better when compared to a traditional emulation system, because the present software is not pressured to address the constraints of FPGA and PCB boundaries.BRIEF DESCRIPTION OF THE DRAWINGSThe teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:FIG. 1 illustrates a block diagram of a traditional emulation system;FIG. 2 illustrates a block diagram of a wafer-scale emulation system of the present invention;FIG. 3 illustrates an alternate block diagram of a wafer-scale emulation system of the present invention;FIG. 4 illustrates a block diagram of an Input/Output transceiver cell of the present invention;FIG. 5 illustrates a block diagram of a function or logic cell of the present invention;FIG. 6 illustrates a block diagram of a routing cell of the present invention;FIG. 7 illustrates a block diagram of a clock generating cell of the present invention;FIG. 8 illustrates a block diagram of a flip-flop formed using the routing and logic cells of the present invention;FIG. 9 illustrates a flowchart of a method for mapping a netlist of a DUT onto a wafer-scale emulation system of the present invention;FIG. 10A illustrates a timing diagram for a clock generating cell;FIG. 10B illustrates an alternate timing diagram for a clock generating cell;FIG. 10C illustrates a second alternate timing diagram for a clock generating cell; andFIG. 11 illustrates a block diagram of a compound gate formed using two logic cells of the present invention.To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.DETAILED DESCRIPTIONFIG. 2 illustrates a block diagram of a wafer-scale emulation system 200 of the present invention. Specifically, the wafer-scale emulation system 200 comprises a semiconductor wafer 210 having a sea of programmable cells 212, and a communication channel 215, e.g., an electrical cable. It should be noted that multiple communication channels 215 can be deployed to provide a more comprehensive and flexible interface between the wafer 210 and the target board 220. 
The wafer-scale emulation system 200 is implemented to mimic the functions of a DUT, e.g., a chip that is intended to be deployed within a target board 220.In practice, a netlist for the DUT is generated and mapped onto the sea of programmable cells 212. Once the routed design is implemented on the wafer, the DUT can be tested and debugged in conjunction with the components 222 deployed on the target board 220. This emulation approach allows the DUT to be simulated within its intended operating environment without having to fabricate the chip. Although FIG. 2 only illustrates the wafer 210 and cable 215, it should be noted that the overall emulation system may incorporate other modules or devices (not shown) such as a controller or processor, a power supply, a wafer holder and various input/output devices such as a keyboard, a mouse, a display, a storage device and so on. The functions and structures of the sea of programmable cells 212 will be further described below. It should be noted that the fabrication or etching processes necessary to produce these homogeneous cells on a semiconductor wafer are well known in the art and, thus, will not be described herein.FIG. 3 illustrates an alternate block diagram of a wafer-scale emulation system 300 of the present invention. The wafer-scale emulation system 300 is very similar to the wafer-scale emulation system 200 with the exception that the communication channel 315 is now coupled to a bed of nails board or interface 317. This interface allows a more flexible method in providing the necessary interface between the wafer 210 and the target board 220. For example, the I/O transceiver cells can now be deployed arbitrarily on the wafer without being aggregated to a particular area, e.g., grouped together to be coupled to a physical connector. This bed of nail approach furthers the goal of providing an emulation system capable of offering superior routability and mapability over conventional emulation systems.The wafer 210 can be composed of chip-sized substructures, each substructure comprising logic circuits. In one embodiment, all the chip-sized substructures are identical. Chip-sized substructures are used to allow standard semiconductor manufacturing techniques to be used, including wafer-stepper-based photolithography techniques. With a wafer stepper, the alignment between die locations on the wafer are not as precise as the layers within a die location. But, this is overcome with different design rules for interconnect between die locations; for example, a metal layer for interconnecting two die locations can have 10* line width and 10* spacing design rules as compared to the rules for the same metal layer within a die location.FIG. 4 illustrates a block diagram of an Input/Output (I/O) transceiver cell 400 of the present invention. Specifically, the I/O transceiver cell 400 comprises a physical attachment 410, e.g., a pad, for passage of signals between the wafer and the outside world, e.g., a target board. The I/O transceiver cell 400 also comprises an output driver 420 and a receiver 430. Specifically, the output driver 420 and receiver 430 are driven from inside the sea of logic of the wafer. For a typical implementation (e.g., a 200 mm wafer), there may be between 1000 to 2000 I/O transceiver cells 400. However, those skilled in the art will realize that the present invention is not limited to a particular number of I/O transceiver cells 400 or to a particular wafer size.FIG. 
5 illustrates a block diagram of a function or logic cell 500 of the present invention. Specifically, the function or logic cell 500 is programmable such that the output of the logic cell is an arbitrary function of the inputs. In one embodiment, a three-bit input (a, b, c) 510 logic cell will produce an output y(a,b,c) 520. The three-bit input logic cell will have an eight-bit storage set of values 530 that express the logic function's truth table. However, although the present function or logic cell 500 is disclosed as a three-input, one output logic cell, those skilled in the art will realize that the present invention is not so limited. The use of a three-to-one logic cell provides a reasonable balance between the degree of desired functionalities while maintaining fine granularity of the programmable cells. Nevertheless, those skilled in the art will realize that other sizes of logic cells can be adapted to the present invention.Additionally, the output of the function or logic cell 500 may optionally employ a scanable or observable pad 540. This allows the state of each logic cell 500 to be sampled after each clock pulse, thereby enabling detailed observation of the behavior of the device under test. Alternatively, a scanable circuit can be put in series with an output 520 or 620, operable in modes such as: (i) pass through, used for normal operation; (i) capture, wherein the value of the output 520 or 640 is stored into a shift register that comprises a plurality of scanable circuits; (iii) shifting, where the data in the shift register is shifted along the shift register bits, providing the ability to shift data in from an external source and to shift data out to an external destination; and (iv) driving, wherein the data in the shift register is driven to the input of blocks, effectively replacing the value that would otherwise be received from the outputs 520 or 640. Scan path techniques are known in the art, and generally employ their own clock signal(s) and control signals. The can paths can be used to test the wafer scale emulation circuits to determine where faults are located.FIG. 6 illustrates a block diagram of a routing cell 600 of the present invention. The routing cells are selectable so that one of its inputs 610 is passed or routed to its output 620, via select lines 630. In one embodiment, the routing cell is implemented as an 8-to-1 routing cell. However, although the present routing cell 600 is disclosed as an eight-input, one output routing cell, those skilled in the art will realize that the present invention is not so limited. Namely, other sizes of routing cells can be adapted to the present invention.FIG. 7 illustrates a block diagram of a clock generating cell 700 of the present invention. The clock generating cell is designed to provide clock signals lo for other cells on the wafer. In one embodiment, two different clock signals of differing phases 720 (e.g., an open phase and a close phase) are generated from a single clock-type waveform 710, e.g., a free-running square waveform. The clock generating cell is capable of providing programmable 2-phase non-overlapping high and low periods, e.g., for up to 32 different clocks. Clocks will be able to run asynchronously and can be stopped via the interface, to allow state readout.In fact, clock generating cell can be implemented to receive a feedback on path 712 to inform the clock generating cell as to the timing to move on to the next phase. 
However, although the present clock generating cell 700 is disclosed as a two-phase clock generating cell, those skilled in the art will realize that the present invention is not so limited. Namely, other number of phases of clock generating cell can be adapted to the present invention.FIG. 10A illustrates a timing diagram for a clock generating cell producing two-phase non-overlapping clocks, ph1 and ph2. In one embodiment, to avoid having to control clock skew within the netlist, the present invention uses 2-phase non-overlapping clocks. The clock pulse widths can be fixed or programmable, and in FIG. 10A, pulse widths A and C can be set by analog methods, while the period is set by the clk signal. In the example of FIG. 10B, the pulse width are determined digitally from the clk signal. In the example of FIG. 10C, a programmable number of cycles of the clk signal are used to set the pulse widths A and C, and to also set the spacing between the pulses, B and C.The minimum Width of interval A and C should be long enough to ensure that the slave/master latches will capture their data, even after the pulse has traveled through a bounded number of logic or routing cells. The minimum width of interval B and D should be long enough to ensure that for the worst-case clock skewed pair of cells, there is no situation where the beginning of C arrives before the end of A, nor may the beginning of A arrive before the end of C. Finally, the time from the beginning of A to the end of C should be long enough to allow logic propagation.FIG. 8 illustrates a block diagram of an edge-triggered flip-flop 800 formed using the routing and logic cells of the present invention. As shown above, once a plurality of homogeneous programmable cells are defined, it is now possible to form more complex devices such as a flip-flop 800 and so on.Flip-flops can be built as master-slave devices. A flip-flop connected to a pure clock in the design can be built as follows:original: D, Q, CLKmapped: master_q=!(master_qn &!(ph2 & d))master_qn=!(master_q &!(ph2 &!d))slave_q=!(slave_qn &!(ph1 & master_q))slave_qn=!(slave_q &!(ph1 & master_qn))CLK transforms into a signal pair, [ph1,ph2].One goal is to support gated clock designs. The method is to remap structures as follows. For a clock-type signal gated with a normal signal, there are four cases:<tb><sep>original<sep>translation<tb><sep>clk AND signal<sep>[!ph1 AND signal , ph2]<tb><sep>!clk OR signal<sep>[ph1 OR signal , ph2]<tb><sep>clk OR signal<sep>[ph1 , !ph2 OR signal]<tb><sep>!clk AND signal<sep>[ph1 , ph2 AND signal]FIG. 11 illustrates a block diagram of a compound gate 1100 formed using two logic cells of the present invention. Namely, a compound gate, like a 3-input AND with a 2-input AND can be wired to a 2-input NOR to produce an output Y=≈((a & b & c)(d & e)).Specifically, FIG. 11 illustrates an 3-input AND implemented by a first logic cell 1105 and an 2-input AND implemented by a second logic cell 1107 of the present invention. The output of the first logic cell 1105 is fed as an input along with inputs "d" and "e" to the second logic cell 1107, thereby producing the output Y of a compound gate.FIG. 9 illustrates a flowchart of a method 900 for mapping a netlist of a DUT onto a wafer-scale emulation system of the present invention. Method 900 starts in step 905 and proceeds to step 910.In step 910, a register transfer level (RTL) design is obtained for the DUT. The RTL design concentrates on design at the register and logic level and the blocks which join them. 
The RTL is a means of exploiting the separation of data and control in order to simplify the design process. Thus, RTL is a hierarchical level of abstraction higher than a gate level design and is well known in the art.In step 920, logic synthesis is applied to the RTL design to obtain gate-level designs. Logic synthesis processing tools are readily available from commercial or academic sources. The gate-level designs effectively comprise a list of elemental logic cells such as flip-flops, multiplexers and the like.In step 930, the gate-level designs are converted into the homogeneous logic cells of the present invention, i.e., a wafer cell netlist, which is then fed in step 940 to a placing and routing processing step to produce a routed design on the physical wafer. It should be noted that the placing and routing processing step applies information extracted from a faulty cell map that allows the placing and routing processing step to avoid faulty cells detected on the wafer. Since it is anticipated that the yield on the wafer will be less than 100%, the faulty cell map will guide the software tools to avoid defective programmable cells. Method 900 then ends in step 950.Thus, the present invention discloses the use of whole-wafer or wafers to build a sea of soft-programmable cells, which is large enough to emulate any chip. A 300 millimeter diameter is approximately 300 times the area of a 15*15 mm chip. Thus, compared to the traditional PLD systems, the present wafer-scale emulation system has several major advantages such as lower cost, smaller size, and higher ratio of interconnect (wiring) to logic elements. Additionally, software quality and reliability (e.g., number of bugs) for the present wafer-scale system should be better when compared to the traditional emulation system, because the present software is not pressured to address the constraints of FPGA and PCB boundaries.Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. In the claims, elements of method claims are listed in a particular order, but no order for practicing of the invention is implied, even if elements of the claims are numerically or alphabetically enumerated. |
The present disclosure relates to memory array access control. An apparatus includes partition control circuitry to control at least one partition of a memory array, the at least one partition control circuitry also to receive a controlled clock signal to enable execution of a legitimate memory access command and to generate an active/idle signal having an active state when executing the legitimate memory access command and an idle state when executing the legitimate memory access command is complete; wherein the clock signal is disabled when the active/idle signal is in an idle state. |
CLAIMSWhat is claimed is:1. An apparatus comprising:partition control circuitry to control at least one partition of a memory array, the partition control circuitry also to receive a controlled clock signal to enable execution of a legitimate memory access command and to generate an active/idle signal having an active state during execution of the legitimate memory access command or having an idle state in response to completion of execution of the legitimate memory access command.2. The apparatus of claim 1, wherein the legitimate memory access command includes at least one of a read command, a write command, a force write command and a reset only write command.3. The apparatus of claim 1, wherein the partition control circuitry also to receive a wake-up command to transition the partition control circuitry from an idle state to an active state.4. The apparatus of any of claims 1-3, wherein the partition control circuitry includes a plurality of partition control circuits each to control a respective partition of the memory array, and wherein each partition control circuit to generate a respective active/idle signal.5. The apparatus of claim 4, wherein each partition control circuit to propagate a respective active/idle signal from a previous partition control circuit to a subsequent partition control circuit, and wherein a last partition control circuit is configured to transmit the active/idle signal indicative of an active or idle state of at least one partition control circuit.6. A method comprising:receiving, by a memory controller, an active/idle signal from partition controlcircuitry; wherein the partition control circuitry to control at least one partition of a memory array and wherein the active/idle signal has a state indicative of one of an active state or an idle state of the partition control circuitry; receiving, by the memory controller, a memory access command;determining, by the memory controller, if the memory access command is legitimate; andenabling, by the memory controller for the partition control circuitry, a clock signal if the memory access command is legitimate and if the active/idle signal is in an idle state.memory access command is legitimate and if the active/idle signal is in an idle state.7. The method of claim 6, wherein the determining, by the memory controller, if the memory access command is legitimate includes parsing the memory access command to discover if the memory access command includes at least one of a read command, a write command, a force write command and a reset only write command.8. The method of claim 6, further comprising determining, by the memory controller, if the memory access command is illegitimate by parsing the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.9. The method of claim 6, further comprising:determining, by the memory controller, if the clock signal is enabled from a previous memory access command; andqueuing the memory access command if the clock signal is enabled.10. The method of claim 6, further comprising transmitting a wake-up signal to the partition control circuitry to enable the partition control circuitry to transition from an idle state to an active state.11. A computer readable storage device having stored thereon instructions that when executed by one or more processors result in the following operations including the method according to any one of claims 6 to 10.12. 
A system including at least one device arranged to perform the method of any one of claims 6 to 10.13. A device that includes means to perform the method of any one of claims 6 to10.14. A system comprising:memory controller circuitry to receive a memory access command and determine if the memory access command is legitimate or illegitimate, and to enable a clock signal if the memory access command is legitimate; and partition control circuitry to control at least one partition of a memory array, the partition control circuitry also to receive a controlled clock signal to enable execution of a legitimate memory access command and to generate an active/idle signal having an active state during execution of the legitimate memory access command or having an idle state in response to completion of execution of the legitimate memory access command; wherein the memory controller circuitry to disable the clock signal to the at least one partition control circuitry when the active/idle signal is in an idle state.15. The system of claim 14, wherein the memory controller circuitry also to parse the memory access command to discover if the memory access command includes at least one of a read command, a write command, a force write command and a reset only write command.16. The system of claim 14, wherein the memory controller circuitry also to parse the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.17. The system of claim 14, wherein the memory controller also to determine if the clock signal is enabled from a previous memory access command; and to queue the memory access command if the clock signal is enabled.18. The system of claim 14, wherein the memory controller also to transmit a wake-up signal to the partition control circuitry to enable the partition control circuitry to transition from an idle state to an active state.19. The system of claim 14, wherein the partition control circuitry includes a plurality of partition control circuits each to control a respective partition of the memory array, and wherein each partition control circuit to generate a respective active/idle signal.20. The system of claim 19, wherein each partition control circuit to propagate a respective active/idle signal from a previous partition control circuit to a subsequent partition control circuit, and wherein a last partition control circuit is configured to transmit the active/idle signal to the memory controller circuitry.21. The system of claim 19, further comprising:clock multiplexor circuitry to route the clock signal to a partition control circuit to execute the legitimate memory access command.22. A computer-readable storage device having stored thereon instructions that when executed by one or more processors result in the following operations comprising:receive, by a memory controller, an active/idle signal from partition control circuitry; wherein the partition control circuitry to control at least one partition of a memory array and wherein the active/idle signal has a state indicative of one of an active state or an idle state of the partition control circuitry; receive, by the memory controller, a memory access command;determine, by the memory controller, if the memory access command is legitimate; andenable, by the memory controller for the partition control circuitry, a clock signal if the memory access command is legitimate and if the active/idle signal is in an idle state.23. 
The computer-readable storage device of claim 22, wherein the instructions result in the following additional operations comprising:parse the memory access command to discover if the memory access commandincludes at least one of a read command, a write command, a force write command and a reset only write command.24. The computer-readable storage device of claim 22, wherein the instructions result in the following additional operations comprising:parse the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.25. The computer-readable storage device of claim 22, wherein the instructions result in the following additional operations comprising:determine if the clock signal is enabled from a previous memory access command; and queue the memory access command if the clock signal is enabled. |
MEMORY ACCESS CONTROLInventors:Rezaul Haque,Lady Nataly Pinilla PicoFIELDThe present disclosure relates to memory access control. BACKGROUNDUnwanted memory commands have the ability to corrupt data, which may cause failures at the application and system level. Protection of data in a memory array has been proposed using various complex schemes which increase the cost and complexity of the system. At a system level, maintaining voltage sequences are costly when a system shutdown happens because power management is needed to manage these situations. But, without power management, there is always the potential that array data within memory component may be corrupted, for example, during an unsequenced power shutdownBRIEF DESCRIPTION OF DRAWINGSFeatures and advantages of the claimed subject matter will be apparent from the following detailed description of embodiments consistent therewith, which description should be considered with reference to the accompanying drawings, wherein:FIG. 1 illustrates a system block diagram, consistent with several embodiments of the present disclosure;FIG. 2 illustrates a flowchart of operations of memory controller circuitry consistent with one embodiment of the present disclosure;FIG. 3 illustrates a flowchart of operations of memory controller circuitry consistent with another embodiment of the present disclosure; andFIG. 4 illustrates a flowchart of operations of partition control circuitry consistent with one embodiment of the present disclosure.Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. DETAILED DESCRIPTIONGenerally, this disclosure describes a system and method for data protection in a memory array. In some embodiments, the memory array is segmented into a plurality of partitions. At least one partition control circuit is provided that controls memory access and/or power management for at least one partition. The at least one partition control circuit is configured to generate an active/idle state signal indicative of whether the partition control circuitry is in an idle/low-power state or in an operational (memory access) state. Memory controller circuitry is configured to receive the state signal from the partition control circuitry and control a clock signal to the partition control circuitry. The memory controller circuitry is also configured to receive a memory access command that may be legitimate or illegitimate. A legitimate command may include, for example, memory read/write commands, force read command, reset only write command, , etc. Illegitimate commands include unwanted and/or spurious commands that may corrupt data in the array, and may include, for example, voltage coupling, power sequence operations, etc. If the command is legitimate, the memory controller may enable the clock signal for the partition control circuit to enable the partition control circuit to decode and process the legitimate command. While processing the legitimate command, the partition control circuit may change the state of the state signal to indicate an active state, and once completed, may change the state signal to indicate an idle state, so that the memory controller circuitry can decouple, or gate, the clock signal from the partition control circuit. Advantageously, this may enable the array partition to enter an idle and/or low power state while being protected from illegitimate commands.FIG. 
1 illustrates a system block diagram 100 consistent with several embodiments of the present disclosure. The system 100 includes a memory array 102, partition control circuitry 104 that includes a plurality of partition control circuits 104A, 104B,...,104N, clock multiplexing (MUX) circuitry 106 and memory controller circuitry 108. In the embodiments described herein, the memory array 102 may be segmented (logically and/or physically) into a plurality of partitions (e.g., plurality of "panels" or "tiles", etc.) 102A, 102B, ..., 102N. The size of each memory partition 102 A, 102B,..., 102N may be based on, for example, the size of the overall array 102, memory addresses, physical location of memory structures, etc. Memory array 102 may include non-volatile memory structures (e.g., phase change or cross- point memory, etc.) and/or volatile memory such as random access memory, cache memory, etc. In some embodiments, partition control circuits 104A, 104B,...,104N are provided to control (e.g., read/write access control, power management control, etc.) of a respective partition 102A, 102B,..., 102N of the memory array 102. In other embodiments, a partition control circuit, e.g., partition control circuitry 104A may control more than one memory partition, e.g., partitions 102A and 102B, and thus there may be less than N number of individual partition control circuits. Memory controller circuitry 108 is generally configured to receive a memory access command 113 and control application of a clock signal 109 to at least one partition control circuit 104A, 104B,...,104N, as will be described below. The memory access command 113 may be generated by, for example, a central processing unit (e.g., system CPU, not shown) and/or subset thereof (e.g., one or more cores of a system CPU, etc., not shown) executing one or more applications (also not shown) which require access to memory array 102.In one embodiment, partition control circuit 104A, 104B,...,104N are each configured to generate and propagate an active/idle signal 105 A, 105B,...,105N. The active/idle signal 105A, 105B,...,105N is indicative of the state of at least one partition control circuit 104A, 104B,...,104N. The "state", as used herein, means either an active state in which at least one partition control circuit 104 A, 104B,..., and/or 104N is decoding and/or processing a legitimate memory access command, or an idle/low-power state in which the partition control circuit 104A, 104B,..., and/orl04N is gated from memory controller circuitry 108. In one embodiment, the first partition control circuit 104A is configured to receive an idle signal 101 and propagate the idle signal as the active/idle signal 105A if the partition control circuit 104A is not processing a memory access command. The idle signal 101 may include, for example, an available reference voltage (e.g., Vcc, etc.). If any of the partition control circuits 104A, 104B,..., and/or 104N is in an active state, that partition control circuit is configured to change the state of the active/idle signal 105A, 105B,..., and/or 105N to indicate an active state. The last partition control circuit 104N is configured to transmit the active/idle signal 105N to the memory controller circuitry 108. 
Since any of the partition control circuits 104A, 104B,..., and/or 104N can change the state of a respective active/idle signal 105A, 105B,..., and/or 105N, the last active/idle signal 105N is indicative of all partition control circuits 104A, 104B,..., and 104N being in an idle/low-power state, or at least one partition control circuit 104 A, 104B,..., and/orl04N being in an active state. In another embodiment, instead of propagating a respective active/idle signal 105 A, 105B,..., 105N through each partition control circuitry 104A, 104B,...,104N , each active/idle signal 105A, 105B,..., 105N may be transmitted directly to memory controller circuitry 108, at the possible expense of additional pinout requirements and/or bus and bus control requirements.The memory controller circuitry 108 is generally configured to gate the application of clock signal 109 to at least one partition control circuit 104A, 104B,..., and/or 104N based on, at least in part, the type of memory access command 113 received by the memory controller circuitry 108. As described above, a memory access command 113 may generally be legitimate or illegitimate. Accordingly, memory controller circuitry 108 may also include memory access command determination logic 110 generally configured to determine if a memory access command 113 is legitimate or illegitimate. To that end, memory access command determination logic 110 may be configured to parse an incoming memory access command to determine certain features of the command that trend to demonstrate that the memory access command 113 is legitimate or illegitimate. Features that may demonstrate that the memory access command 113 is legitimate include, for example, command decode information,proper clock signaling, etc. while features that may demonstrate that the memory access command 113 is illegitimate include, for example, voltage coupling, power sequence operations, etc. . Memory controller circuitry 108 may remain in a low power/idle state in the absence of a memory access command 113.If a memory access command 113 is determined to be legitimate, memory controller circuitry 108 is configured to turn on a clock signal 109 to enable at least one partition control circuit 104A, 104B,..., and/or 104N to decode and/or process the memory access command 113. The clock signal 109 may include a clock signal 111 received from a system clock generator (not shown), etc., and may further include a clock signal 111 from a different clock domain. The clock MUX circuitry 106 is generally configured to receive a clock signal 109 and route a similar clock signal 107 to at least one partition control circuit 104 A, 104B,..., and/or 104N, depending on, for example, address information identified in the memory access command 113. In some embodiments, clock MUX circuitry 106 is configured to route the clock signal 107 only to the partition control circuit 104A, 104B,..., and/or 104N that will be processing (or is processing) the memory access command 113. Once the clock signal 107 is applied to one or more of the partition control circuitry 104A, 104B,..., and/or 104N, memory controller may transmit a wake up signal, via bus 115, to the place the appropriate partition control circuitry 104A, 104B,..., and/or 104N in a condition to receive and process (decode) the memory access command 113. 
Once the appropriate partition control circuit 104A, 104B,..., and/or 104N is in a state that is ready to process(which may be verified by a wake-up handshake signal, etc.), the memory access command 113 may be transmitted to the appropriate partition control circuit 104 A, 104B,..., and/or 104N for decoding/processing, via bus 115. Data associated with the memory access command 113 and any data results from the memory access command (e.g., read results) may be transmitted between the memory controller circuitry 108 and the appropriate partition control circuit 104A, 104B,..., and/or 104N, via bus 115.As described above, if at least one partition control circuit 104A, 104B,..., and/or 104N is processing a memory access command 113, the state of the active/idle signal 105A, 105B,..., and/or 105N may indicate an active state. Accordingly, memory controller circuitry 108 is configured to enable the clock signal 109 for as long as the active/idle signal 105A, 105B,..., and/or 105N indicates an active state. Once any or all of the circuits 104A, 104B,..., and/or 104N has completed processing of a memory access command 113, and the corresponding the active/idle signal 105A, 105B,..., and/or 105N changes state from active to idle, memory controller circuitry 108 is configured to disable (e.g., gate) the clock signal 109 (and correspondingly, clock signal 107) to the appropriate circuitry 104A, 104B,..., and/or 104N. More than one legitimate memory access command 113 for a particular partition 102A, 102B,..., and/or 102N may be received by memory controller circuitry 108. In such a case, memory controller circuitry 108 may be configured to queue the memory access commands 113 and maintain the clock signal until all such commands have been executed. This may reduce lag time associated with waking up of partition control circuit 104 A, 104B,..., and/or 104N between memory access commands 113. If memory access command determination logic 110 determines that a memory access command 113 is illegitimate, the memory controller circuitry may keep the clock signal 109 disabled for all or any of the partition control circuits 104A, 104B,..., and/or 104N. In some embodiments, memory controller circuitry 108 may be configured to generate a signal that indicates that the received memory access command 113 is illegitimate. Thus, partition control circuitry 104 A, 104B,..., and/or 104N and/or corresponding partitions 102A, 102B,..., and/or 102N may enter a low power state and may be protected from spurious and/or unwanted (illegitimate) memory access commands.FIG. 2 illustrates a flowchart 200 of operations of memory controller circuitry consistent with one embodiment of the present disclosure. The operations may be performed, for example, by memory controller circuitry 108 (FIG. 1) and/or other memory controller circuitry. Operations of this embodiment include maintaining memory controller circuitry (MCC) in an idle state 202, and determining if a memory access (MA) command is received 204. If no MA is received 204, operations may include maintaining the MCC in an idle state 202. If a MA command is received (204), operations may also include determining if the MA command is legitimate 206. If the MA command is not legitimate (illegitimate) (206), operations may include operations may include maintaining the MCC in an idle state 202. 
If the MA command is legitimate (206), operations may include determining if a clock (CLK) signal is enabled 208, and if so (indicating that a current MA command is being executed), queuing the MA command 208 to be performed after the current command operations are complete. If the CLK signal is not enabled (208), operations may include enabling the CLK signal to at least one partition control circuit (PCC) 212. Operations may also include transmitting a wake-up signal to the at least one PCC 214. The wake-up signal may be a handshake and/or other signal type to enable the PCC to transition from a low-power and/or idle state to an active state. Operations may also include transmitting the MA command to the PCC 216. Operations may also include determining if there are any queued MA commands 218, and if so, transmitting the queued commands to the PCC 216, thus avoiding unnecessary clock cycling and/or wake-up transitions. Once the MA command is completed by the at least one PCC (and once any results have been transmitted to the MCC), operations may also include disabling the CLK signal 220, to permit, for example, the PCC to transition to a low-power state and to gate illegitimate memory access commands.FIG. 3 illustrates a flowchart 300 of operations of memory controller circuitry consistent with another embodiment of the present disclosure. The operations may be performed, for example, by memory controller circuitry 108 (FIG. 1) and/or other memory controller circuitry. Operations of this embodiment include receiving, by a memory controller, an active/idle signal from a partition control circuit 302. The partition control circuit controls at least one partition of a memory array, and the active/idle signal has a state indicative of one of an active state or an idle state of the partition control circuit. Operations may also include receiving, by the memory controller, a memory access command 304. Operations may also include determining, by the memory controller, if the memory access command is legitimate 306. Operations may also include enabling, by the memory controller for the partition control circuit, a clock signal if the memory access command is legitimate and the active/idle signal is in an idle state 308.FIG. 4 illustrates a flowchart 400 of operations of partition control circuitry consistent with one embodiment of the present disclosure. The operations may be performed, for example, by partition control circuit, e.g., circuit 104A (FIG. 1) and/or other circuitry associated with a partition of a memory array. Operations of this embodiment include maintaining the partition control circuit (PCC) in an idle and/or low-power state 402.Operations may also include maintaining an active/idle (A/I) signal in an idle state 404. Operations may also include determining, by the PCC, if a wake-up signal has been received 406. The wake-up signal may be generated by, for example, a memory controller to enable the PCC to transition from an idle state to an active state. If no wake-up signal is received (406) the PCC may remain in an idle state 402. If a wake-up signal is received (406), operations may include transitioning the A/I signal to an active state 408 and receiving a memory access (MA) command 410 from, for example, the memory controller. Operations may also include processing the MA command and returning any results to the memory controller 412. Operations may also include determining if there are any additional MA commands 414, and if so processing those commands 412. 
Once all commands have been processed, operations may also include transitioning the A/I signal to an idle state 416.While FIGS. 2-4 illustrate various operations according various embodiments, it is to be understood that not all of the operations depicted in FIG. 2, 3 or 4 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIG. 2, 3 and/or 4, and/or other operations described herein may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.As used in any embodiment herein, the term "logic" may refer to an application, software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage device. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. "Circuitry" and "circuitry", as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The circuitry may include a processor and/or controller configured to execute one or more instructions to perform one or more operations described herein. The instructions may be embodied as, for example, an application, software, firmware, etc. configured to cause the circuitry to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a computer-readable storage device. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.Any of the operations described herein may be implemented in a system that includes one or more storage devices having stored thereon, individually or in combination, instructions that when executed by one or more processors perform one or more operations. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. 
The storage devices may include any type of tangible device, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD- ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software executed by a programmable control device. As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof.Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.In some embodiments, a hardware description language may be used to specify circuit and/or logic implementation(s) for the various modules and/or circuitry described herein. For example, in one embodiment the hardware description language may comply or be compatible with a very high speed integrated circuits (VHSIC) hardware description language (VHDL) that may enable semiconductor fabrication of one or more circuits and/or modules described herein. The VHDL may comply or be compatible with IEEE Standard 1076-1987, IEEE Standard 1076.2, IEEE1076.1, IEEE Draft 3.0 of VHDL-2006, IEEE Draft 4.0 of VHDL-2008 and/or other versions of the IEEE VHDL standards and/or other hardware description standards.ExamplesExamples of the present disclosure include subject material such as a method, means for performing acts of the method, a device, or of an apparatus or system related to controlling access to a memory array, as provided below.Example 1According to this example there is provided an apparatus. 
The apparatus includes partition control circuitry to control at least one partition of a memory array, the partition control circuitry also to receive a controlled clock signal to enable execution of a legitimate memory access command and to generate an active/idle signal having an active state during execution of the legitimate memory access command or having an idle state in response to completion of execution of the legitimate memory access command.Example 2This example includes the elements of example 1, wherein the legitimate memory access command includes a command includes at least one of a read command, a write command, a force write command and a reset only write command.Example 3This example includes the elements of example 1, wherein the partition control circuitry also to receive a wake-up command to transition the partition control circuitry from an idle state to an active state.Example 4This example includes the elements according to any one of examples 1 through 3, wherein the partition control circuitry includes a plurality of partition control circuits each to control a respective partition of the memory array, and wherein each partition control circuit to generate a respective active/idle signal.Example 5This example includes the elements according to example 4, wherein each partition control circuit to propagate a respective active/idle signal from a previous partition control circuit to a subsequent partition control circuit, and wherein a last partition control circuit is configured to transmit the active/idle signal indicative of an active or idle state of at least one partition control circuit.Example 6According to this example there is provided a method. This method includes receiving, by a memory controller, an active/idle signal from partition control circuitry; wherein the partition control circuitry to control at least one partition of a memory array and wherein the active/idle signal has a state indicative of one of an active state or an idle state of the partition control circuitry; receiving, by the memory controller, a memory access command; determining, by the memory controller, if the memory access command is legitimate; and enabling, by the memory controller for the partition control circuitry, a clock signal if the memory access command is legitimate and if the active/idle signal is in an idle state.Example 7This example includes the elements according to example 6, wherein the determining, by the memory controller, if the memory access command is legitimate includes parsing the memory access command to discover that the command includes at least one of a read command, a write command, a force write command and a reset only write command.Example 8This example includes the elements according to example 6, further comprising determining, by the memory controller, if the memory access command is illegitimate by parsing the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.Example 9This example includes the elements according to example 6, further comprising: determining, by the memory controller, if the clock signal is enabled from a previous memory access command; and queuing the memory access command if the clock signal is enabled. 
Example 10This example includes the elements according to example 6, further comprising transmitting a wake-up signal to the partition control circuitry to enable the partition control circuitry to transition from an idle state to an active state.Example 11According to this example there is provided a system for memory access control. The system includes memory controller circuitry to receive a memory access command and determine if the memory access command is legitimate or illegitimate, and to enable a clock signal if the memory access command is legitimate; and partition control circuitry to control at least one partition of a memory array, the partition control circuitry also to receive a controlled clock signal to enable execution of a legitimate memory access command and to generate an active/idle signal having an active state during execution of the legitimate memory access command or having an idle state in response to completion of execution of the legitimate memory access command; wherein the memory controller circuitry to disable the clock signal to the at least one partition control circuitry when the active/idle signal is in an idle state.Example 12This example includes the elements according to example 11, wherein the memory controller circuitry also to parse the memory access command to discover if the memory access command includes at least one of a read command, a write command, a force write command and a reset only write command.Example 13This example includes the elements according to example 11, wherein the memory controller circuitry also to parse the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.Example 14This example includes the elements according to example 11, wherein the memory controller also to determine if the clock signal is enabled from a previous memory access command; and to queue the memory access command if the clock signal is enabled.Example 15This example includes the elements according to example 11, wherein the memory controller also to transmit a wake-up signal to the partition control circuitry to enable the partition control circuitry to transition from an idle state to an active state.Example 16This example includes the elements according to example 11, wherein the partition control circuitry includes a plurality of partition control circuits each to control a respective partition of the memory array, and wherein each partition control circuit to generate a respective active/idle signal.Example 17This example includes the elements according to example 16, wherein each partition control circuit to propagate a respective active/idle signal from a previous partition control circuit to a subsequent partition control circuit, and wherein a last partition control circuit is configured to transmit the active/idle signal to the memory controller circuitry.Example 18This example includes the elements according to example 16, further comprising: clock multiplexor circuitry to route the clock signal to a partition control circuit to execute the at least one memory access command.Example 19According to this example there is provided a computer-readable storage device having stored thereon instructions that when executed by one or more processors result in the following operations comprising: receive, by a memory controller, an active/idle signal from partition control circuitry; wherein the partition control circuitry to control at least one partition of a memory array and wherein 
the active/idle signal has a state indicative of one of an active state or an idle state of the partition control circuitry; receive, by the memory controller, a memory access command; determine, by the memory controller, if the memory access command is legitimate; and enable, by the memory controller for the partition control circuitry, a clock signal if the memory access command is legitimate and if the active/idle signal is in an idle state.Example 20This example includes the elements of example 19, wherein the instructions result in the following additional operations comprising: parse the memory access command to discover if the memory access command includes at least one of a read command, a write command, a force write command and a reset only write command.Example 21This example includes the elements of example 19, wherein the instructions result in the following additional operations comprising: parse the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.Example 22This example includes the elements of example 19, wherein the instructions result in the following additional operations comprising: determine if the clock signal is enabled from a previous memory access command; and queue the memory access command if the clock signal is enabled.Example 23 This example includes the elements of example 19, wherein the instructions result in the following additional operations comprising: transmit a wake-up signal to the partition control circuitry to enable the partition control circuitry to transition from an idle state to an active state.Example 24This example includes a computer readable storage device having stored thereon instructions that when executed by one or more processors result in the following operations including:the method according to any one of example 6 to 10.Example 25This example includes a system including at least one device arranged to perform the method of any one of examples 6 to 10.Example 26This example includes a device that includes means to perform the method of any one of examples 6 to 10.Example 27According to this example there is provided an apparatus to control access to a memory array. 
The apparatus includes a memory controller to receive a memory access command and determine if the memory access command is legitimate or illegitimate; and to enable a clock signal to at least a portion of a memory array if the memory access command is legitimate; and to receive an active/idle signal associated with at least one portion of a memory array, the active/idle signal having an active state when the at least one portion of the memory array is executing the legitimate memory access command and an idle state when executing the legitimate memory access command is complete; wherein the memory controller to disable the clock signal to the at least one portion of the memory array when the active/idle signal is in an idle state.Example 28This example includes the elements of example 27, wherein the memory controller also to parse the memory access command to discover if the memory access command includes at least one of a read command, a write command, a force write command and a reset only write command.Example 29 This example includes the elements of example 27, wherein the memory controller circuitry also to parse the memory access command to determine if the memory access command includes a voltage coupling or a power sequence operation.Example 30This example includes the elements of example 27, wherein the memory controller also to determine if the clock signal is enabled from a previous memory access command; and to queue the memory access command if the clock signal is enabled.Example 31This example includes the elements of example 27, wherein the memory controller also to transmit a wake-up signal to the at least one portion of the memory array to enable the at least one portion of the memory array to transition from an idle state to an active state.Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications. |
Disclosed are an apparatus and method for transient overstress protection in compound semiconductor circuit applications. An apparatus and methods for compound semiconductor protection clamps are provided herein. In certain configurations, a compound semiconductor protection clamp includes a resistor-capacitor (RC) trigger network and a metal-semiconductor field effect transistor (MESFET) clamp. The RC trigger network detects when an ESD/EOS event is present between a first node and a second node, and activates the MESFET clamp in response to detecting the ESD/EOS event. When the MESFET clamp is activated, the MESFET clamp provides a low impedance path between the first and second nodes, thereby providing ESD/EOS protection. When deactivated, the MESFET clamp provides high impedance between the first and second nodes, and thus operates with low leakage current and small static power dissipation. |
1.A compound semiconductor circuit, including:First nodeSecond nodeA compound semiconductor protection clamp electrically connected between the first node and the second node, wherein the compound semiconductor protection clamp includes:A resistor-capacitor RC trigger network configured to detect the presence of a transient overstress event between the first node and the second node, wherein the RC trigger network is configured to respond to detecting the transient overstress event And generate the activation control signal;A metal-semiconductor field effect transistor MESFET clamp is configured to receive the activation control signal from the RC trigger network, and to selectively activate between the first node and the second node based on the activation control signal的 discharge path;A reverse protection circuit including a Schottky gate diode structure activated in response to a negative polarity transient overstress event; andThe false trigger protection circuit is configured to generate a low-pass filtered voltage based on low-pass filtering of the voltage difference between the first node and the second node, wherein the false trigger protection circuit generates a false low-pass filtered voltage based on the low-pass filtered voltage. Triggering a protection signal, and wherein the MESFET clamp is further configured to selectively activate a discharge path between the first node and the second node based on the false triggering protection signal,Wherein, the compound semiconductor protection clamp is implemented without any p-type implantation region.2.The compound semiconductor circuit according to claim 1, wherein the MESFET clamp includes an enhanced mode, ie, an E-mode high electron mobility transistor HEMT, wherein the E-mode HEMT includes a HEMT configured to receive the activation control signal The gate is electrically connected to the drain of the second node, and is electrically connected to the source of the first node.3.The compound semiconductor circuit according to claim 1, wherein the MESFET clamp includes a depletion mode electrically connected in series, that is, a D-mode HEMT and one or more Schottky gate diodes, wherein the D-mode HEMT It includes a gate configured to receive the activation control signal, a drain electrically connected to the second node, and a source electrically connected to the first node via the one or more Schottky gate diodes pole.4.The compound semiconductor circuit according to claim 1, wherein the MESFET clamp includes a multi-gate HEMT, the multi-gate HEMT includes a first depletion mode, that is, a D-mode gate, and a second D-mode gate , An enhanced mode between the first and second D-mode gates, that is, the E-mode gate, wherein the E-mode gate is configured to receive the activation control signal from the RC trigger network.5.The compound semiconductor circuit of claim 1, wherein the RC trigger network generates the activation control signal in response to a positive polarity transient overstress event, and the positive polarity transient overstress event is relative to the first node The voltage of increases the voltage of the second node, and wherein the negative polarity transient overstress event decreases the voltage of the second node relative to the voltage of the first node.6.The compound semiconductor circuit according to claim 1, wherein the false trigger protection circuit includes a transistor that generates a mirror current with respect to a change in current flowing through the MESFET clamp, wherein the false trigger 
protection circuit is configured to By providing feedback based on the mirrored current, it controls the duration of time that the discharge path of the MESFET clamp is activated.7.The compound semiconductor circuit according to claim 1, further comprising a high-frequency functional circuit protected by a compound semiconductor protection clamp, wherein the high-frequency functional circuit includes at least one of the following: a power amplifier, a low noise amplifier, a voltage control Oscillator, mixer, tuner, resonator, attenuator, or switch.8.A compound semiconductor protection clamp, including:A resistor-capacitor RC trigger network configured to detect the presence of a transient overstress event between a first node and a second node, wherein the RC trigger network is configured to respond to detecting the transient overstress The event generates an activation control signal; andHEMT clamps for high electron mobility transistors, including:Heterojunction structure;The source region is arranged above the heterojunction structure;The drain region is disposed above the heterojunction structure; andThe gate region is arranged above the heterojunction structure and is located between the source region and the drain region, wherein the gate region receives an activation control signal from the RC trigger network, and selectively activates all of them based on the activation control signal A discharge path between the first node and the second node;One or more Schottky gate diodes electrically connected in series with the HEMT clamper between the first node and the second node; andA false triggering protection circuit configured to generate a low-pass filtered voltage based on low-pass filtering of the voltage difference between the first node and the second node, wherein the false triggering protection circuit is based on the low-pass filtering Voltage to generate the false trigger protection signal, and provide the false trigger protection signal to the gate area,Among them, the HEMT clamp is implemented without any p-type injection region.9.8. The compound semiconductor protection clamp of claim 8, wherein the compound semiconductor protection clamp is fabricated on a gallium arsenide substrate.10.8. The compound semiconductor protection clamp according to claim 8, wherein the heterojunction structure includes an indium gallium arsenide region and an aluminum gallium arsenide region.11.8. The compound semiconductor protection clamp according to claim 8, wherein the gate region includes an enhanced mode, that is, an E-mode gate region.12.The compound semiconductor protection clamp according to claim 11, further comprising a first depletion mode, that is, a D-mode gate located above the heterojunction structure and between the source region and the E-mode gate region Region, and a second D-mode gate region located above the heterojunction structure and between the drain region and the E-mode gate region.13.8. 
The compound semiconductor protection clamp according to claim 8, wherein the gate region includes a depletion mode, that is, a D mode gate region.14.A method for protecting a compound semiconductor circuit, the method comprising:Use the resistor-capacitor RC trigger network of the compound semiconductor protection clamp to detect the existence of a transient overstress event between the first node and the second node;Generating an activation control signal in response to detecting the transient overstress event using the RC trigger network;Receiving the activation control signal as an input to the metal semiconductor field effect transistor MESFET clamp of the compound semiconductor protection clamp;Using a MESFET clamp to selectively activate the discharge path between the first node and the second node based on an activation control signal;Discharging current through a MESFET clamp and through one or more Schottky gate diodes electrically connected in series with the MESFET clamp between the first node and the second node;In response to a low-pass filtered voltage generated by low-pass filtering based on the voltage difference between the first node and the second node, generating a false trigger protection signal; andUsing a MESFET clamp to selectively activate the discharge path between the first node and the second node based on the false trigger protection signal,Wherein, the compound semiconductor protection clamp is implemented without any p-type implantation region. |
Device and method for transient overstress protection in compound semiconductor circuit applicationTechnical fieldThe embodiments of the present invention relate to electronic systems, and more specifically to compound semiconductor protection devices.Background techniqueElectronic circuits can be exposed to transient overstress events or relatively short duration electrical signals with rapidly changing voltages and high power. Transient overstress events include electrostatic discharge/electrical overload (ESD/EOS) events, such as those caused by the sudden release of charge from an object or person to an electronic circuit. Transient overstress events can damage integrated circuits (ICs) due to overvoltage conditions and/or high power dissipation levels on relatively small areas of the IC. High power dissipation can increase circuit temperature and can cause many problems such as junction damage, metal damage, and/or surface charge accumulation.Summary of the inventionIn one aspect, a compound semiconductor circuit is provided. The compound semiconductor circuit includes a first node, a second node, and a compound semiconductor protection clamp electrically connected between the first node and the second node. The compound semiconductor protection clamp includes a resistor-capacitor (RC) trigger network configured to detect the presence of a transient overstress event between the first node and the second node, and respond to detecting the transient overstress event And generate the activation control signal. The compound semiconductor protection clamp also includes a metal-semiconductor field effect transistor (MESFET) clamp, which is configured to receive an activation control signal from the RC trigger network, and to selectively activate the first node and the second node based on the activation control The discharge path signal between.In another aspect, a compound semiconductor protection clamp is provided. The compound semiconductor protection clamp includes an RC trigger network configured to detect the existence of a transient overstress event between a first node and a second node, and generate activation control in response to detecting the transient overstress event Signal. The compound semiconductor protection clamping circuit also includes a high electron mobility transistor (HEMT) clamping circuit, which includes a heterojunction structure, a source region arranged on the heterojunction structure, and a drain region arranged on the heterojunction structure And is arranged above the heterojunction structure and positioned between the source region and the drain region. The gate region receives the activation control signal from the RC trigger network, and selectively activates the discharge path between the first node and the second node based on the activation control signal.In another aspect, a method of protecting a compound semiconductor circuit is provided. 
The method includes using an RC trigger network of a compound semiconductor protection clamp to detect the existence of a transient overstress event between a first node and a second node, and generating activation in response to detecting the transient overstress event using the RC trigger network Control signal, receiving the activation control signal as an input of a metal-semiconductor field effect transistor (MESFET) clamp of the compound semiconductor protection clamp, and using the MESFET clamp to selectively activate the control signal based on the activation control signal The discharge path between the first node and the second node is activated.Description of the drawingsFig. 1 is a schematic diagram of a monolithic microwave integrated circuit (MMIC) according to an embodiment.FIG. 2A is a schematic diagram of a compound semiconductor protection clamp according to an embodiment.FIG. 2B is a schematic diagram of a compound semiconductor protection clamp according to another embodiment.FIG. 3 is a circuit diagram of a compound semiconductor protection clamp according to another embodiment.4A is a circuit diagram of a compound semiconductor protection clamp according to another embodiment.4B is a circuit diagram of a compound semiconductor protection clamp according to another embodiment.FIG. 5A is a circuit diagram of a compound semiconductor protection clamp according to another embodiment.Figure 5B is an annotated cross-section of a multi-gate high electron mobility transistor (HEMT) according to one embodiment.FIG. 6 is a schematic diagram of a compound semiconductor protection clamp according to another embodiment.FIG. 7 is a circuit diagram of a compound semiconductor protection clamp according to another embodiment.8 is a graph of transmission line pulse (TLP) current versus TLP voltage of an embodiment of the compound semiconductor protection clamp of FIG. 4B.FIG. 9 is a graph of leakage current versus voltage of an embodiment of the compound semiconductor protection clamp of FIG. 4B.Figure 10 is a cross-section of a HEMT according to one embodiment.FIG. 11 is a cross-section of a non-uniformly integrated compound semiconductor circuit according to an embodiment.Detailed waysThe detailed description of certain embodiments below presents various descriptions of specific embodiments of the present invention. However, the present invention can be embodied in many different ways defined and covered by the claims. In this specification, reference is made to the drawings, in which the same reference numerals denote the same or functionally similar elements.To help ensure that electronic systems are reliable, manufacturers can test electronic systems under defined stress conditions, which can be described by standards set by various organizations, such as the Joint Electronic Equipment Engineering Committee (JEDEC), the International Electrotechnical Association Committee ( IEC) and/or International Organization for Standardization (ISO). The standard can cover a large number of transient overstress events, including electrostatic discharge (ESD) events and/or electrical overload (EOS) events. For example, a monolithic microwave integrated circuit (MMIC) may be specified to withstand an ESD event based on a human body model (HBM) ESD event of about 200V or greater.It may be difficult to realize a compound semiconductor circuit with robust protection against transient overstress events, such as charge/electrical overload (ESD/EOS) events. 
In one example, the process for manufacturing the MMIC may not include p-type implantation. In such an implementation, a protection circuit implemented using a p-n junction cannot be used to protect the MMIC from ESD/EOS events. In another example, compound semiconductor protection clamps are used to provide sufficient robustness in applications that employ heterogeneous integration of compound semiconductors. For example, various processes, such as an ion cutting process that combines semiconductor wafer bonding and undercutting, can be used to integrate compound semiconductor circuits on a heterogeneous substrate.This article provides devices and methods for compound semiconductor protection clamps. In some configurations, the compound semiconductor protection clamp includes a resistor-capacitor (RC) trigger network and a metal-semiconductor field effect transistor (MESFET) clamp. The RC trigger network detects when there is an ESD/EOS event between the first node and the second node, and activates the MESFET clamp in response to detecting the ESD/EOS event. When the MESFET clamp is activated, the MESFET clamp provides a low impedance path between the first and second nodes, thereby providing ESD/EOS protection. When disabled, the MESFET clamp provides high impedance between the first and second nodes, and therefore operates with low leakage current and small static power dissipation.MESFET clamps can be implemented in a variety of ways, including the use of high electron mobility transistors (HEMT), such as gallium arsenide (GaAs) HEMT, indium phosphide (InP) HEMT, or gallium nitride (GaN) HEMT. Those skilled in the art will understand that HEMT may also be referred to as Modulation Doped Field Effect Transistor (MODFET) or Heterojunction Field Effect Transistor (HFET). In some embodiments, the MESFET clamp includes one or more pseudomorphous HEMTs.In one embodiment, the MESFET clamp includes a depletion mode (D mode) HEMT with a gate controlled by an RC trigger network. In order to keep the D-mode HEMT off in the presence of normal operating voltage levels, the D-mode HEMT is electrically connected in series with one or more Schottky gate diodes to use a negative gate when there is no ESD/EOS event. Source voltage to bias the D-mode HEMT. In some configurations, each Schottky gate diode is implemented using the gate-channel interface of the HEMT. The Schottky gate diode provides a voltage drop that maintains the D-mode HEMT off during normal operating conditions. However, when the RC trigger network detects the presence of an ESD/EOS event, the RC trigger network activates the D-mode HEMT to provide a conductive path between the first and second nodes through the D-mode HEMT and the Schottky gate diode.In another embodiment, a multi-gate implementation of MESFET clamped HEMT is controlled by an RC trigger network. In one embodiment, the multi-gate HEMT includes a first D-mode gate, a second D-mode gate, and an enhancement mode (E-mode) gate, which are located between the first and second D-mode gates and It controls the RC to trigger the network. 
The source of the multi-gate HEMT is electrically connected to the first node and the first D-mode gate, and the drain of the multi-gate HEMT is electrically connected to the second node and the second D-mode gate.In some configurations, the compound semiconductor protection clamp is further implemented to include a false trigger protection circuit for preventing the RC trigger network from accidentally activating the MESFET clamp during normal operation. The false trigger protection circuit may be used to generate a filtered voltage based on low-pass filtering the voltage difference between the first and second nodes, and to control the activation of the MESFET clamp based on the filtered voltage. The feedback provided by the false trigger protection circuit can prevent certain transient signal transmission conditions from unintentionally activating the false trigger protection circuit. If there is no trigger protection scheme, transient signals associated with normal signal conditions (for example, transient signals related to IC power-up) can cause the RC trigger network to activate the MESFET clamp.The teachings herein can be used to provide robust ESD/EOS protection to compound semiconductor chips or dies (such as MMIC) and/or circuits implemented using heterogeneous integration of compound semiconductors. For example, certain applications may include multi-process technology functional blocks sharing a common substrate, and ESD/EOS protection may be provided by reusing common compound semiconductor protection clamps and/or compound semiconductor protection clamps connected through back-end metallization. It protects adjacent composite semiconductor circuit blocks in individual dies in the common substrate. Compound semiconductor protection clamps can be used to protect electronic circuits associated with various radio frequency (RF) and/or microwave applications, including, for example, power amplifiers, attenuators, mixers, and/or switches. The compound semiconductor protection clamp provides robust ESD/EOS protection by using an RC trigger network to control the MESFET clamp. It actively detects the presence of an ESD/EOS event to provide a fast activation speed with relatively low voltage overshoot.Therefore, the compound semiconductor IC implemented using this protection clamp can meet or exceed the specifications related to ESD/EOS robustness in RF and/or microwave circuit applications. Compound semiconductor protection clamps can provide flexibility in scaling to achieve an appropriate amount of ESD/EOS protection for different power domains on-chip. Compared with diode-triggered power clamps, for example, in applications that operate in a power domain greater than 5V, compound semiconductor protection clamps can exhibit superior performance.FIG. 1 is a schematic diagram of MMIC 20 according to an embodiment. 
The MMIC 20 includes a high-frequency functional circuit 1, an inductor 2, a first compound semiconductor protection clamp 5, a second compound semiconductor protection clamp 6, a third compound semiconductor protection clamp 7, an input signal pin 8, an output Signal pin 9, control voltage pin 10, first ground pin 11, second ground pin 12, third ground pin 13, fourth ground pin 14, power low power pin 15, and power high power pin Foot 16.The pins shown can be implemented in a variety of ways, including, for example, the use of pads, ports, leads, and/or other structures.Although Figure 1 shows an example of an MMIC, the teachings herein are applicable to a wide variety of configurations. For example, the MMIC 20 may be implemented to include additional circuits, pins and/or other structures, and/or other structures. The MMIC 20 may include components arranged in other ways. In addition, the MMIC 20 may include more or fewer compound semiconductor protection clamps, and/or the compound semiconductor protection clamps may be connected in other configurations.The high-frequency functional circuit 1 can correspond to various high-frequency circuits. For example, the high-frequency functional circuit 1 may include a power amplifier, a low noise amplifier (LNA), a voltage controlled oscillator (VCO), a mixer, a tuner, a resonator, an attenuator (for example, a variable voltage attenuator), and/ Or switch.In the configuration shown, the high-frequency functional circuit 1 receives the radio frequency (RF) input signal RFIN from the input signal pin 8, the control voltage from the control voltage pin 10, the power low voltage pin 15 from the low power supply, and the The power supply high voltage of the power supply pin 16. In addition, the high-frequency functional circuit 1 generates an RF output signal RFOUT on the output signal pin 9. Therefore, the illustrated MMIC 20 can be used to process RF signals, such as those used in cellular communications, including, for example, 3G, 4G, LTE, and LTE-Advanced and 5G communications.However, the high-frequency functional circuit 1 may also be adapted to operate at frequencies other than those associated with RF frequencies used for cellular communication. For example, certain communication systems (such as those used in national defense and/or commercial applications) can be specified in the X band (approximately 7GHz to 12GHz), the Ku band (approximately 12GHz to 18GHz), and the K band (approximately 18GHz to 18GHz). 27GHz), Ka-band (approximately 27GHz to 40GHz), V-band (approximately 40GHz to 75GHz), and/or W-band (approximately 75GHz to 110GHz) operation.As shown in Figure 1, input and output signaling is provided via the ground signal ground (G-S-G) interface. For example, the input signal pin 8 is located between the first and second ground pins 11, 12, and the output signal pin 9 is located between the third and fourth ground pins 13, 14. Configuring the signaling interface in this way can help provide an inductive return path when operating at high frequencies. In addition, the G-S-G configuration can also provide signal shielding, thereby enhancing signal integrity.The illustrated inductor 2 is electrically connected between the input signal pin 8 and the first ground pin 11, and can be used to control the DC bias voltage of the input signal pin 8. 
However, other configurations are possible, for example where the realization of the DC bias voltage controlling the input signal pin 8 is external to the MMIC 20 or using an on-chip DC bias circuit.The ground pins 11-14 and the power low power pins 15 shown are electrically connected to each other on the MMIC 20 using metallization. In some configurations, the low-power power supply pin 15 is electrically connected to the back metallization layer through a through-substrate via (TSV).The MMIC 20 can be implemented using various compound semiconductor technologies. In some embodiments, the MMIC 20 is manufactured using a compound III-V semiconductor manufacturing process, such as gallium arsenide (GaAs), gallium nitride (GaN), or indium phosphide (InP) manufacturing technology.The first to third compound semiconductor protection clamps 5-7 have been used to provide ESD/EOS protection to the MMIC 20. For example, the first composite protection semiconductor protection clamper 5 is electrically connected to the low power supply pin 15 corresponding to the first node of power and the second node corresponding to the high power supply pin 16 of the power supply, and thus functions as a power supply clamp. In addition, the second compound semiconductor protection clamp 6 is electrically connected between the first node corresponding to the power low power supply pin 15 and the second node corresponding to the output signal pin 9. In addition, the third compound semiconductor protection clamp 7 is electrically connected between the first node corresponding to the power low power supply pin 15 and the second node corresponding to the control voltage pin 10. Although an exemplary transient overstress protection scheme is shown, compound semiconductor protection clamps can be connected in a variety of ways to provide ESD/EOS protection to MMIC or other compound semiconductor circuits.When there is an ESD/EOS event, one or more compound semiconductor protection clamps 5-7 can provide a low-impedance path to the low-power power supply pin 15, thereby removing the charge associated with the ESD/EOS event from sensitive circuits, such as High-frequency functional circuit 1. In some embodiments, the low-power power supply pin 15 may be electrically connected to the back metallization layer using one or more TSVs, and thus may exhibit very low impedance and/or excellent heat dissipation.It may be difficult to implement MMIC 20 with robust protection from ESD/EOS events. For example, the MMIC 20 can be manufactured using a compound semiconductor manufacturing process, which can limit the realization of the compound semiconductor protection clamps 5-7. For example, certain compound III-V semiconductor manufacturing processes may not include p-type implants, so p-n junctions may be limited or unavailable.In some configurations herein, compound semiconductor protection clamps (such as one or more compound semiconductor protection clamps 5-7 of FIG. 1) are implemented using RC trigger networks and MESFET clamps. The RC trigger network detects when there is an ESD/EOS event between the first node and the second node, and activates the MESFET clamp in response to detecting the ESD/EOS event. When the MESFET clamp is activated, the MESFET clamp provides a low impedance path between the first and second nodes, thereby providing ESD/EOS protection. When disabled, the MESFET clamp provides high impedance between the first and second nodes.FIG. 
2A is a schematic diagram of a compound semiconductor protection clamp 30 according to an embodiment. The compound semiconductor protection clamp 30 includes an RC trigger network 31 and a MESFET clamp 32 electrically connected to each other in parallel between the first node N1 and the second node N2.The MESFET clamp 32 receives the activation control signal from the RC trigger network 31. The MESFET clamp 32 uses the activation control signal to selectively activate the discharge path between the first node N1 and the second node N2.The RC trigger network 31 detects that there is a transient overstress event between the first and second nodes N1 and N2, such as an ESD/EOS event. When an ESD/EOS event is not detected, the RC trigger network 31 uses the activation control signal to turn off the MESFET clamp 32, so that the MESFET clamp 32 provides high impedance between the first and second nodes N1, N2. However, in response to detecting the presence of an ESD/EOS event, the RC trigger network 31 uses the activation control signal to turn on the MESFET clamp 32 to provide low impedance between the first node N1 and the second node N2.In some configurations, the RC trigger network 31 includes a resistor, a capacitor connected in series between the first node N1 and the second node N2, and the RC trigger network 31 is based on the voltage change between the first and second nodes N1, N2 Rate and generate the activation control signal. The size of the capacitor and resistor may be determined to keep the MESFET clamp 32 closed when there is a normal signal transmission condition, and to turn on the MESFET clamp 32 when there is an ESD/EOS event. For example, an ESD/EOS event produces a voltage change rate between the first and second nodes N1, N2 of relatively large amplitude and relatively long duration, and when there is an ESD/EOS event, the resistance of the resistor and the capacitance of the capacitor The capacitor controls the corresponding gate voltage of the MESFET clamp.The additional details of the compound semiconductor protection clamp 30 may be as previously described.FIG. 2B is a schematic diagram of a compound semiconductor protection clamp 40 according to another embodiment. The compound semiconductor protection clamp circuit 40 includes a forward protection circuit 37 and a reverse protection circuit 38 electrically connected in parallel to each other between the first node N1 and the second node N2. The forward protection circuit includes an RC trigger network 31 and a MESFET clamp 32, which can be as described above. The reverse protection circuit 38 includes a Schottky gate diode structure 33 including an anode electrically connected to the first node N1 and a cathode electrically connected to the second node N2.The illustrated compound semiconductor protection clamp 40 provides bidirectional protection against positive polarity ESD/EOS events that increase the voltage of the second node N2 relative to the first node N1 and decrease the voltage of the second node N2 relative to the first node N1 Negative ESD/EOS event. Providing bidirectional protection can enhance the robustness of MMIC to harsh operating environments.The MESFET includes a metal gate located on the semiconductor channel. In some configurations, the Schottky gate diode structure 33 is implemented using one or more MESFET gate-to-channel interfaces.The illustrated compound semiconductor protection clamp 40 provides ESD/EOS protection and can be implemented without using a p-n junction. 
Therefore, the compound semiconductor protection clamp 40 can be used to provide pin protection of the MMIC, which is manufactured using a compound semiconductor manufacturing process in which the p-n junction is limited or unavailable.The first and second nodes N1 and N2 can operate with voltages under normal circuit operating conditions within a defined range. For example, in some implementations, normal circuit operating conditions may be associated with a voltage difference between the third node N2 and the first node N1 between about 3V and about 7V. However, other suitable working voltage conditions will be easily available to people with ordinary skills in the art.In one embodiment, the second node N2 is connected to the signal pad of the IC, and the first node N1 is connected to the power low or ground power. However, other implementations are possible, such as a configuration in which the first and second terminals N1, N2 are connected to a low-power power source and a high-power power source, respectively.The additional details of the compound semiconductor protection clamp 40 may be as previously described.FIG. 3 is a circuit diagram of a compound semiconductor protection clamp 50 according to another embodiment. The compound semiconductor protection clamp circuit 50 includes an RC trigger network 41, a MESFET clamp circuit 42 and a Schottky gate diode structure 43, which are electrically connected to each other in parallel between the first node N1 and the second node N2.The illustrated RC trigger network 41 includes a resistor 57 and a capacitor 58. The resistor 57 includes a first end electrically connected to the first node N1 and a second end electrically connected to the first end of the node of the capacitor 58 that generates an activation control signal for the MESFET clamp 42. The capacitor 58 also includes a second terminal electrically connected to the second node N2. Although Figure 3 shows one embodiment of an RC trigger network, the teachings herein are applicable to various configurations of RC trigger networks, including, for example, implementations in which transistors and/or diodes are used to control triggering.Resistor 57 and capacitor 58 may have resistance and capacitance values selected based on a variety of considerations, including, for example, the characteristics of ESD/EOS events in a particular application and/or the threshold voltage of the MESFET clamp. In one embodiment, The resistor 57 and the capacitor 58 have a resistor-capacitor (R*C) time constant selected in the range of 50 ns to 1 us, for example, 500 ns. However, other R*C time constant values are also possible.The MESFET clamp circuit 42 includes an E-mode HEMT 55. The E-mode HEMT 55 includes a gate, which receives the activation control signal from the RC trigger network 41, a source, which is electrically connected to the first node N1, and a drain, which is electrically connected to the second node N2. The E-mode HEMT 55 shown has a threshold voltage. 
Therefore, during normal operating conditions, the RC trigger network 41 controls the gate-to-source voltage of the E-mode HEMT 55 to be approximately equal to 0V, thereby turning off the E-however, when the ESD/EOS event increases with respect to the voltage of the first node N1 by a second At the voltage of the node N2, the displacement current can flow through the capacitor 58 and into the resistor 57, thereby generating enough to make the E-mode HEMT 55 conductive across the resistor 57.The illustrated compound semiconductor protection clamp circuit 70 also includes a Schottky gate diode structure 43 that provides protection against ESD/EOS events that reduce the voltage of the second node N2 relative to the voltage of the first node N1. The illustrated Schottky gate diode structure 43 includes a first HEMT 61 and a second HEMT 62. The first HEMT 61 includes a gate electrically connected to the first node N1, and a source and drain second HEMT 62 electrically connected to each other and electrically connected to each other. The second HEMT 62 also includes a source electrode and a drain electrode that are electrically connected to each other and to the second node N2.The HEMT includes a metal gate and a semiconductor channel, so the gate-to-channel interface of the HEMT works as a Schottky gate diode. FIG. 3 has been annotated to show the first HEMT 61 including the first Schottky gate diode 66 and the second HEMT 62 including the second Schottky gate diode 67. As shown in FIG. 3, the first and second Schottky gates 66, 67 are electrically connected to each other in series from the anode to the cathode between the first and second nodes N1, N2.Although the Schottky gate diode structure 43 is illustrated as including two HEMTs, the Schottky gate diode structure 43 may include more or fewer HEMTs to achieve the desired reverse protection characteristics. For example, more or fewer HEMTs can be included to provide the reverse trigger voltage required for a specific application. The Schottky gate diode structure 43 can be implemented using E-mode transistors, D-mode transistors or a combination thereof.The additional details of the compound semiconductor protection clamp 70 can be as previously described.FIG. 4A is a circuit diagram of a compound semiconductor protection clamp 80 according to another embodiment. The compound semiconductor protection clamp 80 includes an RC trigger network 41, a MESFET clamp 52 and a Schottky gate diode structure 43, which are electrically connected to each other in parallel between the first node N1 and the second node N2.The compound semiconductor protection clamp 80 of FIG. 4A is similar to the compound semiconductor protection clamp 70 of FIG. 3, except that the compound semiconductor protection clamp 80 of FIG. 4A includes a different embodiment of the MESFET clamp. Specifically, the MESFET clamp 52 of FIG. 4A includes a D-mode HEMT 75, a first cut-off state control HEMT 71 and a second cut-off state control HEMT 72.The D-mode HEMT 75 includes a gate that receives the activation control signal from the RC trigger network 41, a drain that is electrically connected to the second node N2, and a source that is electrically connected to the gate of the first off-state control HEMT 71. The first off-state control HEMT 71 also includes source and drain electrodes electrically connected to each other and electrically connected to the gate of the second off-state control HEMT 71. 
The second off-state control HEMT 72 also includes electrically connected to each other and with the first node N1. Electrically connected source and drain.As shown in FIG. 4A, the first off-state control HEMT 71 has a gate-to-channel interface associated with the first Schottky gate diode 76, and the second off-state control HEMT 72 has a gate-to-channel interface associated with it.的 has a second Schottky gate diode 77. The first and second Schottky gate diodes 76, 77 are electrically connected to each other in series from the anode to the cathode between the source of the D-mode HEMT 75 and the first node N1.The illustrated D-mode HEMT 75 is a depletion mode or normally-on transistor with a threshold voltage less than or equal to 0V. In addition, the first and second cut-off states control the HEMT 71, 72 to keep the D-mode HEMT 75 turned off under normal operating conditions. In particular, the voltage drop across the first and second Schottky gate diodes generates a negative gate-source voltage for the D-mode HEMT 75, thereby keeping the D-mode HEMT 75 off.Although the illustrated MESFET clamp 52 is shown as including two off-state control HEMTs, the MESFET clamp 52 may include more or fewer off-state control HEMTs. For example, multiple off-state control HEMTs may be selected based on the threshold voltage of the D-mode HEMT and/or the forward voltage of the Schottky gate diode associated with a specific manufacturing process. E-mode transistors, D-mode transistors, or a combination thereof can be used to realize the off-state control HEMT.The additional details of the compound semiconductor protection clamp 80 may be as previously described.FIG. 4B is a circuit diagram of a compound semiconductor protection clamp 90 according to another embodiment. The compound semiconductor protection clamp 90 includes an RC trigger network 81 electrically connected to each other in parallel between the first node N1 and the second node N2, a MESFET clamp 82 and a Schottky gate diode structure 43.The compound semiconductor protection clamp 90 of FIG. 4B is similar to the compound semiconductor protection clamp 80 of FIG. 4A, except that the compound semiconductor protection clamp 90 of FIG. 4B includes a different implementation of the MESFET clamp and a different implementation of the RC trigger network.As shown in FIG. 4B, the MESFET clamp 82 includes a D-mode HEMT 75, a first off-state controlling HEMT 71, a second off-state controlling HEMT 72, a third off-state controlling HEMT 73, and a fourth off-state controlling HEMT 74. The first to fourth off-state control HEMT 71-74 have been annotated to show the first to fourth Schottky gate diodes 76-79, which are associated with the gate-to-channel interface of the HEMT. As shown in FIG. 4B, the D-mode HEMT 75 includes: a gate, which receives the activation control signal from the RC trigger network 81; a drain, which is electrically connected to the second node N2; and a source, which is electrically connected to the second node N1. Connect to the first node N1. The first to fourth Schottky gate diodes 76-79 are combined in series.The illustrated RC trigger network 81 includes a first node N1 and a first node N1 electrically connected in series between a first thin film resistor (TFR) 87a, a second TFR 87b, a third TFR 87c, and a metal-insulator-metal (MIM) capacitor 88 The second node N2. Although the illustrated RC trigger network 81 is shown to be implemented using MIM capacitors and TFR structures, other configurations are also possible. 
For example, in another embodiment, the RC trigger network includes mesa resistors and/or a combination of TFR and mesa resistors.The additional details of the compound semiconductor protection clamp 90 may be as previously described.FIG. 5A is a circuit diagram of a compound semiconductor protection clamp 100 according to another embodiment. The compound semiconductor protection clamp circuit 100 includes an RC trigger network 41, a MESFET clamp circuit 112 and a Schottky gate diode structure 43, which are electrically connected to each other in parallel between a first node N1 and a second node N2.The compound semiconductor protection clamp circuit 100 of FIG. 5 is similar to the compound semiconductor protection clamp circuit 70 of FIG. 3 except that the compound semiconductor protection clamp circuit 100 of FIG. 5A includes a different embodiment of the MESFET clamp circuit.Specifically, a multi-gate HEMT 115 is used to implement the MESFET clamp 112 of FIG. 5A. The multi-gate HEMT 115 includes a first D-mode gate, a second D-mode gate, and an E-mode gate. The E-mode gate Located at the first and second D-mode gates. The first and second D-mode gates are depletion mode or normally-on gates having a threshold voltage less than or equal to about 0V. In contrast, the E-mode gate is an enhancement mode or normally-off gate with a threshold voltage greater than about 0V. In one embodiment, the first and second D-mode gates have threshold voltages in the range of about -1.0V to about -2.0V, and the E-mode gates are in the range of about 0.3V to about 0.5V.As shown in FIG. 5A, the first D-mode gate is electrically connected to the source of the multi-gate HEMT 115 and the first node N1. In addition, the second D-mode gate is electrically connected to the drain of the multi-gate HEMT 115 and the second node N2. In addition, the E-mode gate receives an activation control signal from the RC trigger network 41.During the normal operating voltage condition between the first and second nodes N1, N2, the RC trigger network 41 biases the MESFET clamp 112 in an off or high impedance state, wherein one of the first and second nodes N1, N2 The flow of current between is blocked. For example, the RC trigger network 41 can control the voltage of the E-mode gate to be approximately equal to the voltage of the first node N1. Therefore, the compound semiconductor protection clamp 100 operates in a low leakage/high impedance state under normal operating voltage conditions.However, during an ESD/EOS event, the compound semiconductor protection clamp 100 provides a low impedance path between the first and second nodes N1, N2 to provide ESD/EOS protection. For example, in response to an ESD/EOS event, increasing the voltage of the first node N1 relative to the voltage of the second node N2, the Schottky gate diode structure 43 may be activated to provide discharge between the first and second nodes N1, N2 path. In addition, in response to an ESD/EOS event that increases the voltage of the second node N2 with respect to the voltage of the first node N1, the RC trigger network 41 can control the D-mode gate to turn on the multi-gate HEMT 115 and switch between the first and second A low impedance path is provided between nodes N1 and N2. 
Therefore, the discharge path through the multi-gate HEMT 115 is selectively activated based on the activation control signal from the RC trigger network 41.When the multi-gate HEMT 115 is turned on, a low-impedance forward conduction path is provided between the first and second nodes N1 and N2 through the channel of the multi-gate HEMT 115. In addition, at a sufficiently high voltage, the Schottky gate diode associated with the second D-mode gate can become forward biased and provide additional current flow between the first and second nodes N1, N2. path.Although the multi-gate HEMT 115 is shown as including three gates, the multi-gate HEMT 115 may be modified to include more or fewer gates and/or different gate arrangements.FIG. 5B is an annotated cross-section 110 of the multi-gate HEMT 115 of FIG. 5A according to one embodiment. The annotated cross-section 110 includes an RC trigger network 41, a first node N1 and a second node N2, which may be as described above. The multi-gate HEMT115 is implemented on a gallium arsenide (GaAs) substrate 121 and includes a heterojunction structure 122, a source region 126, a drain region 127, a first D-mode gate region 135a, and a second D-mode gate Region 135b and E-mode gate region 136. As shown in FIG. 5B, the GaAs substrate 121 includes a back conductor 139.The heterojunction structure 122 includes an indium gallium arsenide (InGaAs) layer 123 disposed on a GaAs substrate 121, a spacer layer 124 disposed on the InGaAs layer 123, and an n-type aluminum gallium arsenide (InGaAs) disposed above the spacer layer 124 ( n-AlGaAs) layer 125. The source region 126 is disposed above the heterojunction structure 122 and includes a first n-type GaAs region 130a, a first highly doped n-type GaAs region 131a disposed on the first n-type GaAs region 130a, and a first highly doped n-type GaAs region 131a disposed on the first n-type GaAs region 130a. The first contact region 132a on the n-type GaAs region 131a is doped. In addition, the drain region 127 is disposed on the heterojunction structure 122, and includes a second n-type GaAs region 130b, a second highly doped n-type GaAs region 131b disposed on the second n-type GaAs region 130b, and a second The contact region 132b is disposed on the second highly doped n-type GaAs region 131b. In the configuration shown, the first and second highly doped n-type GaAs regions 131a, 131b have a higher doping concentration than the first and second doped n-type GaAs regions 130a, 130b.The E-mode gate region 136 is disposed on the heterojunction structure 122 between the source region 126 and the drain region 127. In addition, the first D-mode gate region 135a is provided on the E-mode gate region 136 and the source region 126 on the heterojunction structure 122. In addition, the second D-mode gate region 135b is provided on the E-mode gate region 136 and the source region 126. On the heterojunction structure 122 between the drain regions 127. In the illustrated embodiment, the first and second D-mode gate regions 135a, 135b and the E-mode gate region 136 include metal. In one example, the first and second D-mode gate regions 135a, 135b and the E-mode gate region 136 include at least one of nickel (Ni), gold (Au), titanium (Ti), or platinum (Pt). A sort of. As those skilled in the art will understand, the metal-semiconductor junction associated with the gate of the HEMT operates as a Schottky gate diode.The GaAs substrate 121 may be an intrinsic substrate with a relatively low doping concentration. 
In some embodiments, the GaAs substrate 121 may have a relatively thin substrate thickness, for example, a thickness in the range of about 0.5 μm to about 1 μm. Constructing the GaAs substrate 121 relatively thin facilitates the formation of through-wafer vias (TWV) for connecting the circuit fabricated on the GaAs substrate 121 to the backside conductor 139. Although the specific doping concentration and thickness have been described, a person of ordinary skill in the art will easily determine other suitable values.The heterojunction structure 122, the source region 126, the drain region 127, the first D-mode gate region 135a, the second D-mode gate region 135b, and the E-mode gate region 136 serve as a multi-gate HEMT. For example, those skilled in the art will understand that the diffusion of electrons from the n-AlGaAs layer 125 to the InGaAs layer 123 may result in the formation of a two-dimensional electron gas (2DEG) region or channel in the InGaAs layer 123. The conductivity can be changed or changed by controlling the gate voltage of the first D-mode gate region 135a, the second D-mode gate region 135b, and the E-mode gate region 136.In one embodiment, the n-AlGaAs layer 125 has a thickness in the range of about 300 nm to about 500 nm, and a doping concentration in the range of about 1×10 18 atoms/cm 3 to about 9×10 18 atoms/cm 3. The InGaAs layer 123 may be configured to have a relatively low doping concentration in order to enhance electron mobility by reducing collisions between electrons and doping impurities. For example, in one embodiment, the InGaAs layer 123 has a thickness in the range of about 5 nm to about 15 nm, and a doping concentration of less than about 1×10 18 atoms/cm 3. The spacer layer 124 can help reduce interface traps or defects associated with the interface between the InGaAs layer 123 and the n-AlGaAs layer 125 and the different lattice constants of the layers. In one embodiment, the spacer layer 124 includes an AlGaAs layer with a thickness ranging from about 3 nm to about 6 nm. In certain embodiments, one or more layers of the heterojunction structure 122 may be formed using an epitaxial growth process. Although the specific doping concentration and thickness have been described, a person of ordinary skill in the art will easily determine other suitable values.The backside conductor 139 is disposed adjacent to the GaAs substrate 121 on the side of the GaAs substrate 121 opposite to the heterojunction structure 122. The backside conductor 139 may be electrically biased using a low-power or grounded power source, and the TWV substrate 121 formed in GaAs may be used to provide electrical connection between the circuit and the grounded power source. For example, in one embodiment, the second terminal N1 is electrically connected to the backside conductor 139 using one or more TWVs. In some embodiments, the back conductor 139 includes at least one of gold (Au) or copper (Cu). Although the backside conductor 139 is shown as a single layer, the backside conductor 139 may include multiple sublayers, including, for example, seed and/or barrier sublayers.The source region 126 and the first D-mode gate region 135a are electrically connected to the first terminal N1. In addition, the drain region 127 and the second D-mode gate region 135b are electrically connected to the second terminal N2. The multi-gate HEMT 115 may undergo back-end processing to form contacts and metallization. 
For clarity, these details are omitted to support the use of annotated electrical connections.As shown in FIG. 5B, the RC trigger network 41 is electrically connected between the first and second nodes N1, N2. Although the RC trigger network 41 is depicted in the form of annotations, the RC trigger network 41 can be implemented on the GaAs substrate 121.Although Figure 5B shows one implementation of a multi-gate HEMT, other configurations can be used. In addition, although protection devices have been shown in the context of GaAs processes, the teachings herein are applicable to other compound semiconductor technologies, including, for example, gallium nitride (GaN) and indium phosphide (InP) technologies.FIG. 6 is a schematic diagram of a compound semiconductor protection clamp 200 according to another embodiment. The compound semiconductor protection clamp 200 includes an RC trigger network 31, a MESFET clamp 32 and a false trigger protection circuit 203, which are electrically connected to each other in parallel between the first node N1 and the second node N2. Although not shown in FIG. 6, the compound semiconductor protection clamp 200 may also include a reverse protection circuit, such as the Schottky gate diode structure 33 of FIG. 2B.Except that the compound semiconductor protection clamp circuit 200 also includes a false trigger protection circuit 203, the compound semiconductor protection clamp circuit 200 of FIG. 6 is similar to the compound semiconductor protection clamp circuit 30 of FIG. 2A. The false trigger protection circuit 203 generates a trigger protection signal, which is added to the activation control signal from the RC trigger network 31 at the combination node 204. The MESFET clamp 32 selectively activates the discharge path between the first node N1 and the second node N2 based on the activation control signal and the first node N1. Trigger protection signal.In some embodiments, when there is no ESD/EOS event, the false trigger protection circuit 203 can pull down the voltage of the combined node 204, thereby preventing the RC trigger network 31 from unintentionally activating the MESFET clamp when the ESD/EOS event does not exist. Positioner 32.For example, in the absence of a trigger protection scheme, transient signals associated with normal signaling conditions (for example, transient signals associated with MMIC power-up) can be coupled to the gate of the MESFET clamp via the RC trigger network 31 pole.Therefore, in the case of normal transient activity on the first and second nodes N1, N2, the false triggering of the protection circuit 203 helps prevent unintended activation of the MESFET clamp 32. However, during the ESD/EOS event, the signal from the RC trigger network 31 becomes relatively large when the control is activated, and the MESFET clamp 32 can selectively activate the discharge path between the first and second nodes N1, N2 .The additional details of the compound semiconductor protection clamp 200 may be as previously described.FIG. 7 is a circuit diagram of a compound semiconductor protection clamp circuit 240 according to another embodiment. The compound semiconductor protection clamp 240 includes an RC trigger network 41 electrically connected to each other in parallel between the first node N1 and the second node N2, a MESFET clamp 212, and a false trigger protection circuit 213. Although not shown in FIG. 
7, the compound semiconductor protection clamp 240 may also include a reverse protection circuit, such as the Schottky gate diode structure 33 of FIG. 2B.The RC trigger network 41 includes a resistor 57 and a capacitor 58 which provide an activation control signal to the combined node 244. The MESFET clamp 212 includes a D-mode HEMT 75, a first cut-off state controls the HEMT 71, a second cut-off state controls the HEMT 72, and a third cut-off state controls the HEMT 73. As shown in Figure 7, the gate of the D-mode HEMT 75 is electrically connected to the combined node 244. Additional details 41 of the RC trigger network and the MESFET clamp 212 may be similar to those previously described.The illustrated buffer protection circuit 213 includes a feedback D-mode HEMT 221, a feedback E-mode HEMT 222, a feedback resistor 231, an offset protection E-mode HEMT 223, a trigger protection resistor 232, and a scraper protection capacitor 233.As shown in FIG. 7, the feedback D-mode HEMT 221 includes a gate having a drain electrically connected to the second node N2, a gate electrically connected to the combination node 244, and a feedback resistor 231 at the node where the feedback voltage VFBK is generated. The source of the first end. The feedback resistor 231 also includes a second end electrically connected to the first node N1. The feedback E-mode HEMT 222 includes a gate that receives the feedback voltage VFBK, a drain that is electrically connected to the second node N2, and a source that is electrically connected to the combined node 244. The trigger protection capacitor 233 includes a second end electrically connected to the first node N1 and electrically connected to the first end of the trigger protection resistor 232 at the node where the low-pass filtered voltage VLP is generated. The leakage protection resistor 232 also includes a second end electrically connected to the second node N2. The triggered protection E-mode HEMT 223 includes a gate that receives the low-pass filtered voltage VLP, a source that is electrically connected to the first node N1, and a drain that is electrically connected to the combined node 244.The false trigger protection resistor 232 and the false trigger protection capacitor 233 function as a low-pass filter that generates a low-pass filtered voltage VLP by low-pass filtering the voltage difference between the second node N2 and the first node N1. As shown in FIG. 7, the low-pass filtered voltage VLP is provided to the gate of the false trigger protection E-mode HEMT 223. The mis-trigger protection circuit 213 is configured in this way so that the mis-trigger protection circuit 213 pulls down the combined node 244 and turns off the MESFET clamp circuit 212 when a steady-state voltage condition is reached between the first and second nodes N1, N2. However, when an ESD/EOS event exists between the first and second nodes N1, N2, the RC trigger network 414 can pull up the voltage of the combined node 75 and activate the MESFET clamp 212.In one embodiment, the false trigger protection resistor 232 and the false trigger protection capacitor 233 are combined to achieve an R*C time constant in the range of about 1 us to about 100 us (for example, 50 us). However, other R*C time constant values are also possible.The feedback D-mode HEMT 221 and the feedback resistor 231 operate to generate the feedback voltage VFBK, which helps to maintain the MESFET clamp 212 on for the entire duration of the ESD/EOS event. 
For example, the gate of the feedback D-mode HEMT 221 is electrically connected to the gate of the D-mode HEMT 75 to provide a current mirror, so the current flowing through the feedback D-mode HEMT 221 flows through the MESFET clamp circuit 212 relative to the current a of the D-mode HEMT. 75. The current through the feedback D-mode HEMT 221 is provided to the feedback resistor 231 to generate the feedback voltage VFBK, which controls the activation of the feedback E-mode HEMT 222. Therefore, when the MESFET clamp 212 is turned on, the feedback E-mode HEMT is turned on by the feedback voltage VFBK, thereby providing feedback of the pull-up combination node 244. After the ESD/EOS event ends, the current flowing through the D-mode HEMT 75 of the MESFET clamp circuit 212 decreases, resulting in a corresponding decrease in the feedback voltage V FBK and the feedback E-mode HEMT 222 is turned off.Therefore, the illustrated false trigger protection circuit 213 provides robust control of the activation of the MESFET clamp 212, which not only prevents the unintended activation of the MESFET clamp 212, but also ensures that the MESFET clamp 212 is activated on the RC trigger network 41. The clamp 212 remains on for the duration of the ESD/EOS event.The additional details of the compound semiconductor protection clamp 240 may be as previously described.8 is a graph 300 of transmission line pulse (TLP) current versus TLP voltage for one embodiment of the compound semiconductor protection clamp 90. FIG. 4B. Voltage is represented along the horizontal axis, and current is represented along the vertical axis. TLP applies a pulse with a rise time of about 600 ps and a pulse width of about 100 ns. Capture current and voltage readings under "quasi-static" conditions as the average voltage and current readings between approximately 20 ns and 80 ns corresponding to each data point in Figure 8.The graph 300 includes the current versus voltage response 301 for one embodiment of the compound semiconductor protection clamp 90 of FIG. 4B with a protection level of >200V according to the HBM (Human Body Model) classification criteria. The current versus voltage response 301 shown shows a trigger voltage of approximately 14.5V and a holding voltage of approximately 12.5V. Although an example of TLP data is shown in FIG. 8, TLP data may vary with various factors, including circuit implementation and/or manufacturing process.FIG. 9 is a graph 310 of leakage current versus voltage for one embodiment of the compound semiconductor protection clamp 90 of FIG. 4B as described above with respect to FIG. 8. The voltage is represented along the horizontal axis, and the current is along the vertical axis.The graph 310 includes a first current-to-voltage response 311 at 25°C, a second current-to-voltage response 311 at 85°C, and a third current-to-voltage response 311 at 125°C. As shown in FIG. 9, the compound semiconductor protection clamp exhibits a leakage current of less than about 6 μA at an operating voltage of about 5V up to about 125°C. Although an example of leakage current vs. voltage data is shown in FIG. 9, the leakage current vs. voltage data may vary with various factors, including circuit implementation and/or manufacturing process.FIG. 10 is a cross-section of a HEMT 400 according to an embodiment. The HEMT 400 is implemented on a GaAs substrate 421 and includes a heterojunction structure 422, a source region 426, a drain region 427, and a gate region 428. 
The heterojunction structure 422 includes an InGaAs layer 423 disposed on the GaAs substrate 421, an AlGaAs spacer layer 424 disposed on the InGaAs layer 423, and an n-type aluminum gallium arsenide (n-AlGaAs) layer disposed on the GaAs substrate 421 425.The source region 426 is disposed on the heterojunction structure 422 and includes a first n-type GaAs region 430a, a first highly doped n-type GaAs region 431a disposed on the first n-type GaAs region 430a, and a contact region 432a disposed on the Above the first highly doped n-type GaAs region 431a. In addition, the drain region 427 is disposed on the heterojunction structure 422, and includes a second n-type GaAs region 430b, a second highly doped n-type GaAs region 431b disposed on the second n-type GaAs region 430b, and a second The contact region 432b is disposed on the second highly doped n-type GaAs region 431b.The gate region 436 is disposed on the heterojunction structure 422 between the source region 426 and the drain region 427. The gate region 436 is implemented using metal, and may be an E-mode gate or a D-mode gate, depending on the embodiment. For example, the gate region 436 may be implemented using at least one of nickel (Ni), gold (Au), titanium (Ti), or platinum (Pt). The metal-semiconductor junction associated with the gate region 428 and the heterojunction structure 422 operates as a Schottky gate diode.The HEMT 400 of FIG. 10 shows an example of a structure that can be used to implement the HEMT described herein. However, HEMT can be implemented in other ways. For example, although HEMT 400 has been shown in the context of GaAs processes, the teachings herein are applicable to other compound semiconductor technologies, including, for example, GaN and InP technologies.Additional details of the HEMT 400 can be as previously described.FIG. 11 is a cross-section of a non-uniformly integrated compound semiconductor circuit 500 according to an embodiment. The non-uniform integrated compound semiconductor circuit 500 includes a silicon (Si) substrate 501, a first buffer structure 502, a chip-level template 503, a second buffer structure 504, a third buffer structure 505, a fourth buffer structure 506, and a first compound semiconductor circuit 511, a first isolation structure 514, a second compound semiconductor circuit 512, a second isolation structure 515 and a Si circuit 513.The shown non-uniformly integrated compound semiconductor circuit 500 shows an example of integrating a compound semiconductor circuit on a heterogeneous substrate. Although a Si substrate 501 is used in this example, other implementations of the substrate are possible, including but not limited to silicon carbide (SiC) substrates.Although a specific embodiment is shown, various configurations are possible, including, for example, implementations with different arrangements of buffer structures, isolation structures, and/or chip size templates. In addition, although FIG. 11 shows an embodiment including two compound semiconductor circuits and one Si circuit, more or fewer compound semiconductor circuits and/or Si circuits may be included. In addition, the non-uniformly integrated compound semiconductor circuit 500 may include other circuits and/or structures.The illustrated first compound semiconductor circuit 511 includes a first compound semiconductor protection clamp 521, and the second compound semiconductor circuit 512 includes a second compound semiconductor protection clamp 522. 
However, other configurations are possible, including having more or fewer compound semiconductor protection clamps. The first and second compound semiconductor protection clamps 521, 522 can provide ESD/EOS protection for compound semiconductor circuits 511, 512 and/or other heterogeneous substrate circuits 513 (including, for example, metal oxide semiconductor (MOS) transistors).Therefore, the illustrated embodiment realizes a multi-process technology functional block sharing a common Si substrate 501. In addition, ESD/EOS protection is provided by compound semiconductor protection clamps 521 and 522. In some embodiments, the compound semiconductor protection clamps 521 and 522 are connected by back-end metallization to protect adjacent circuit blocks of the common substrate 501.Additional details of the non-uniformly integrated compound semiconductor circuit 500 may be as described above.Terms such as above, above, above, etc. used herein refer to devices oriented as shown in the figure, and should be interpreted accordingly. It should also be because the regions within a semiconductor device (such as a transistor) are defined by doping different parts of the semiconductor material with different impurities or different impurity concentrations, so there may actually be no discrepancy between different regions in the completed device. The physical boundary, but the area can transition from one to another. Some of the boundaries shown in the drawings are of this type, and are shown as abrupt structures only for the reader's help.applicationThe device adopting the above protection scheme can be implemented in various electronic devices and interface applications. Example electronic devices may include, but are not limited to, consumer electronic products or parts of consumer electronic products. For example, compound semiconductor protection devices may be included on monolithic microwave integrated circuits (MMICs) including radio frequency and/or microwave circuits, such as power amplifiers, low noise amplifiers, voltage controlled oscillators, mixers, tuners, resonators, Attenuator and/or switch. Consumer electronics products can include, but are not limited to, mobile phones, telephones, televisions, computer monitors, computers, handheld computers, personal digital assistants (PDAs), automobiles, vehicle engine management controllers, transmission controllers, seat belt controllers, and Lock brake system controllers, camcorders, cameras, digital cameras, portable memory chips, washing machines, dryers, washing machines/dryers, copiers, fax machines, scanners, multifunction peripherals, etc. In addition, electronic devices can include unfinished products, including products for industrial, medical, and automotive applications.The foregoing description and claims may refer to elements or features being "connected" or "coupled" together. As used herein, unless expressly stated otherwise, "connected" means that one element/feature is directly or indirectly connected to another element/feature element/feature, and not necessarily mechanically. Likewise, unless expressly stated otherwise, “coupled” means that one element/feature is directly or indirectly coupled to another element/feature, and not necessarily mechanically. 
Therefore, although the schematic diagrams shown in the various drawings depict exemplary arrangements of elements and components, additional intermediate elements, devices, features, or components may exist in practice (assuming that the function of the depicted circuit is not disadvantageous). Influence).Although the embodiments of the present invention have been described in terms of certain aspects, other embodiments that are obvious to a person of ordinary skill in the art, including examples that do not provide all the features and advantages set forth herein, are also within the scope of the present invention. In addition, the various embodiments described can be combined to provide additional embodiments. In addition, it is shown that certain functions in the context of one embodiment can also be incorporated into other embodiments. Therefore, the scope of the present invention is only limited by reference to the appended claims. |
A method and apparatus for estimating signal related delays in a PLD design is disclosed. The PLD design is modeled in relation to one or more stages, each of the stages including a driver and one or more receivers coupled to the driver with a wiring tree. The modeling is based on a selected set of parameters that include: slope related delays associated with the driver; a delay related to a layout of the wiring tree; and a parameter related to a slope transfer from a previous driver input. A predetermined set of values for each of the selected parameters are accessed; the estimated signal related delays are computed for each of the modeled stages; and are written to a computer-readable storage medium. |
CLAIMSWhat is claimed, is:1. A method for estimating signal related delays in a programmable logic device (PLD) design, the method comprising: modeling the PLD design in relation to one or more stages, the stages respectively comprising a driver and one or more receivers coupled to the driver with a wiring tree, the modeling based on a selected set of parameters comprising: one or more slope related delays associated with the driver; a delay related to a layout of the wiring tree; and a parameter related to a slope transfer from a previous driver input, the previous driver upstream from the driver sequentially in relation, ordinal, to the one or more stages; accessing a predetermined set of values for the selected parameters of the modeled stages from a first computer readable storage medium; computing the estimated signal related delays for the modeled stages based on a sum of the corresponding accessed selected parameter values; and writing the computed estimated signal related delays for the modeled stages in a second computer-readable storage medium.2. The method of Claim 1, wherein the wiring tree comprises one or more programmable switches, and wherein the selected set of parameters comprise parameters related to the one or more programmable switches.3. The method of Claim 2 wherein the computed estimated signal related delays for the modeled stages is written as one or more configuration files.4. The method of Claim 2 wherein the selected set of parameters comprise parameters related to the one or more programmable switches: a capacitance factor corresponding to the one or more programmable switches in an 'on' state; and
a resistance factor corresponding to a path that includes the 'on' state programmable switches and one of the receivers.5. The method of Claim 1 wherein the computed estimated signal related delays for the modeled stages is written as one or more configuration files.6. The method of Claim 1 wherein the one or more slope related delays associated with the driver include an arc delay having a fixed duration, or a delay with a duration dependent on a slope of a transition time of the driver.7. The method of Claim 6 wherein the one or more slope related delays associated with the driver include a slope-dependent driver transition time delay that is a sum of a linear component constrained to a value greater than or equal to zero, and a quadratic component constrained to a value less than or equal to zero.8. The method of Claim 1 wherein the delay related to a layout of the wiring tree relates to a fanout of the respective stages from the driver to each of the one or more receivers.9. The method of Claim 1 wherein the modeling the PLD design includes aggregating a first one of the stages with a second stage into an aggregated stage.10. The method of Claim 9 wherein the second stage includes a fixed load.11. A method of determining values for a set of parameters related to one or more delay models of a programmable logic device (PLD) design, the method comprising:
populating a first data set and a second data set, the first data set and the second data set comprising distinct independent data corresponding to a plurality of target parameters, wherein the PLD design is modeled in relation to one or more stages, the one or more stages respectively comprising a driver and one or more receivers coupled to the driver with a wiring tree, and wherein the plurality of target parameters comprise: one or more slope related delays associated with the driver; a delay related to a layout of the wiring tree; and a parameter related to a slope transfer from a previous driver input, the previous driver upstream from the driver sequentially in relation, ordinal, to the one or more stages; computing a first simulation of a circuit corresponding to the modeled PLD design based on the first data set wherein a corresponding first set of target parameters is fitted; computing a second simulation of the circuit corresponding to the modeled PLD design based on the second data set wherein a corresponding second set of values related to a plurality of guard bands are defined; and saving values identified in the first simulation and values identified in the second simulation to a computer-readable storage medium.12. The method of claim 11, wherein the wiring tree comprises one or more programmable switches, and wherein the selected set of parameters comprise parameters related to the one or more programmable switches.13. The method of Claim 11 comprising:(a) determining if the slope transfer parameter of the current stage equals zero; and when the slope transfer parameter of the current stage equals zero, saving values identified in the first simulation and values identified in the second simulation and values identified in a recursive routine to a computer-readable storage medium; when the slope transfer parameter of the current stage does not equal zero, estimating a slope at a beginning of the current stage before estimating the slope at the end of the current stage and then returning to (a).14. The method of Claim 13 wherein the saved first and second set of values are written to the computer-readable storage medium as code, which when executed by one or more processors are operable for estimating the signal related delays corresponding to the saved first and second set of values upon accessing and executing the code.15. The method of Claim 13 comprising saving the values identified in the first simulation and the values identified in the second simulation as code and execute to estimate signal related delays corresponding to the saved first and second set of values.16. The method of Claim 13 comprising when the slope transfer parameter of the current stage does not equal zero, then for the current stage in which the slope transfer parameter does not equal zero, estimating slope at a beginning of the current stage before estimating the slope at the end of the current stage, and executing a recursive routine which traces the slope transfer parameter through multiple ordinal previous stages to estimate slope of the current stage until the slope transfer parameter equals zero, then returning to (a).17. 
A method for computing a first circuit simulation with respect to a driver comprising:(a) inserting a recorded pair of slope related delays associated with the driver into one selected set of data points; fitting a respective delay related to a layout of a wiring tree and parameters related to a set of switches that adds capacitive loading to the set of stages, wherein a maximum of an absolute value of one or more computed prediction errors is minimized; computing corresponding values for the delay related to the layout of the wiring tree and the parameters related to the set of switches that adds capacitive loading to the set of stages; recording the computed corresponding values for the delay related to a layout of the wiring tree and the parameters related to the set of switches that adds capacitive loading to the set of stages, in which the recorded pair of the slope related delays associated with the driver, and the
recording values for the delay related to the layout of the wiring tree and the parameters related to the set of switches that adds capacitive loading to the set of stages are written to a computer readable storage medium; and recording a pair of slope related delays associated with the driver, in which the set of stages includes at least one fanout, each of at least one fanout spanning from the driver of the set of stages to one or more receivers thereof, and coupled to the driver with the wiring tree thereof between the driver and the receiver, and an active path in the at least one fanout from the driver to the receiver includes at least one switch in a conductive ‘on’ state.18. The method of claim 17 comprising selecting a set of data points from a first saved set of values related to one or more delays related to the driver and then proceeding to (a).19. A method for computing a second circuit simulation for one or more guard bands in a PLD comprising: determining one or more of an allowable rate of one or more underestimates related to set up times, or one or more overestimates related to hold times for a set of delay models and generating one or more delay prediction errors for a second set of values in a second data set; and ordering the generated one or more delay prediction errors from a smallest ordinal value thereof to a largest value thereof, and selecting delayed prediction errors having a rising signal when a stage of the PLD has a rising signal or delayed prediction errors having a falling signal when the stage of the PLD has a falling signal in which, in relation to the set-up times, the one or more delay prediction errors computed such that the one or more allowable rate includes a fraction of the generated delay prediction errors with an ordinal value smaller than the computed delay prediction error, in which the guard band is set to a value and in relation to the hold times, the one or more delay prediction error is computed such that the allowable rate includes a fraction of the generated delay prediction errors with an ordinal value larger than the computed delay prediction error, in which the guard band of the one or more guard bands is the value.20. The method of Claim 19 wherein the one or more guard bands includes a first guard band including an estimate of at least one set-up time when the stage of the PLD has a rising output signal, a second guard band that includes an estimate of at least one set-up time when the stage of the PLD has a falling output signal, a third guard band that includes an estimate of at least one hold time when the stage of the PLD has a rising output signal, and a fourth guard band that includes an estimate of at least one hold time when the stage of the PLD has a falling output signal. |
Method and Apparatus for Estimating Signal Related Delays in a PLD DesignInventors: Jonathan W. Greene, Gabriel Barajas, Fei Li, Hassan Hassan, and James SumitTandonCROSS-REFERENCE TO RELATED APPLICATIONS[0001] The present Application claims priority to U.S. Provisional Patent ApplicationSerial Number 63/190,237 filed on May 18, 2021 and U.S. Non-Provisional Patent Application Serial Number 17/740,644 filed on May 10, 2022, the entire contents of each of which are incorporated by reference as if fully set forth herein.BACKGROUND[0002] Some Integrated Circuits (ICs) have a structural design dedicated to a specific operational function. Such ICs are generally referred to as an Application Specific IC (ASIC). In designing an ASIC, a simulation program such as 'SPICE' ('Simulation Program with IC Emphasis') is run to predict operational behavior of the ASIC.[0003] The structure and corresponding operational function of some ICs however are programmable in relation to performing one or more logical functions. An IC with such programmable characteristics is generally referred to as a Programmable Logic Device (PLD). There are various types of programmable logic devices (PLDs).[0004] As used herein, one type of PLD is referred to as a Field Programmable Gate Array(FPGA) and has an array of transistors. Each of the transistors has a conduction ('on/off) state controllable by a gate voltage supplied thereto. A logic function performed by the FPGA or PLD is thus programmable based on configuring the on/off state of the transistors ("switches") of the array.[0005] PLDs are sometimes (e.g. FPGAs) programmed "in the field," for example by an end user. While running circuit simulation programs such as 'SPICE' ('Simulation Program with IC Emphasis') to predict operational behavior of the device is efficient and convenient on an IC
supplier's end (e.g., with IC fabricators, manufacturers, vendors), in which the supplier designs and manufactures an IC device and has access to (and/or perhaps even generated) detailed circuit netlists relevant thereto, it is generally inconvenient, expensive, inefficient and excessively time consuming to use simulation tools such as SPICE in the field, where PLDs such as FPGAs are routinely deployed and programmed in situ.[0006] An example implementation relates to a method for producing a set of delay models for the circuit elements on the PLD, allowing deployment of the set of delay models in design toolsets for the PLDs, and for analyzing circuit timing across a design toolset chain (e.g., to determine speeds at which their circuit designs are expected to perform on the PLDs).[0007] Toolsets with various delay models however generally generate different predictions, depending on how the delay models are constructed and how well they approximate the real delays on the silicon. Unfortunately, the inaccuracy of delay models generated using conventional techniques demands the use of many guard bands to be conservative. Such excessive use of the guard bands increases the delay estimate and thus generally reduces the predicted operating frequency in an effort to ensure that a user design is at least functionally correct and operable.[0008] Therefore, although the PLD can run the user design at higher frequencies, conventional toolsets generally predict a lower operating frequency. This constrains the user to setting up a clock frequency for the PLD according to the lower frequency prediction generated by the toolset. When the PLD is ultimately programmed based on configuration data so constrained, its operable performance (e.g., speed) is likely thus less than (e.g., slower than) that, which the PLD is actually capable of achieving if not so constrained.[0009] What is needed is a method of modeling delays in designs of a PLD, which is high- level and concise to be suitable for integration into the FPGA design toolset, yet expressive to model the different configurations of PLDs for different user designs and predict the on-the-silicon operating frequency of the PLD with greater accuracy.
BRIEF SUMMARY[0010] A method for estimating signal related delays in a programmable logic device(PLD) design includes modeling the PLD design in relation to one or more stages, each of the stages including a driver and one or more receiver inputs coupled to the driver with a wiring tree, where the wiring tree includes none, or one or more programmable switches. The modeling is based on a selected set of parameters that include: one or more slope related delays associated with the driver; a delay related to a layout of the wiring tree; and a parameter related to a slope transfer from a previous driver input, the previous driver upstream from the driver sequentially in relation, ordinally, to the one or more stages. In the event that the wiring tree includes one or more programmable switches, the modeling is additionally based on a plurality of parameters related to each of the switches, since the switches add capacitive loading to each of the stages. A predetermined set of values for each of the selected parameters of each of the modeled stages are accessed from a first computer readable storage medium. The estimated signal related delays for each of the modeled stages are computed based on a sum of the corresponding accessed selected parameter values. The computed estimated signal related delays for each of the modeled stages are written to a computer-readable storage medium.[0011 ] A tangible, computer readable storage medium comprising code is disclosed, which when executed by one or more processors, causes or controls the performance of a process related to the previously described method for estimating signal related delays in a PLD design, for estimating the signal related delays.[0012] A method of determining values for each of a set of parameters related to one or more delay models of a PLD design includes: populating a first dataset and a second dataset, which each data set comprises distinct, and independent data corresponding to a plurality of target parameters, wherein the PLD design is modeled in relation to one or more stages, each of the stages including a driver and one or more receiver inputs coupled to the driver with a wiring tree, the wiring tree includes none, or one or more programmable switches. The target parameters include: one or more slope related delays associated with the driver; a delay related to a layout of the wiring tree; a plurality of parameters related to each of the switches, if any, that adds capacitive loading to each of the stages; and a parameter related to a slope transfer from a previous driver input, the
previous driver upstream from the driver sequentially in relation, ordinally, to the one or more stages. A first simulation of a circuit corresponding to the modeled PLD design is computed based on the first dataset where a corresponding first set of values related to the target parameters is fitted. A second simulation of the circuit corresponding to the modeled PLD design is computed based on the second dataset where a corresponding second set of values related to a plurality of guard bands are defined. The first set of values and the second set of values are saved, wherein the saved first and second set of values are written to a computer-readable storage medium as code, which when executed by one or more processors are operable for estimating the signal related delays corresponding to the saved first and second set of values upon accessing and executing the code.[0013] The first data set and the second data set generally have no overlapping test cases and are independent of each other because overlapping test cases do not give new information.[0014] The method and apparatus of the present disclosure allows for modeling delays in designs of a PLD, that allows a toolset to predict the on-the-silicon operating frequency of the PLD with greater accuracy than that obtained using conventional techniques in which many restrictive guard bands are used to generate a low frequency prediction and in which the clock frequency for the PLD is set according to the low frequency prediction generated by the toolset.[0015] As noted, the method and apparatus of the present disclosure allows for modeling delays in designs slated for a PLD. Since any model is an approximation to reality, necessarily the model and/or the parameters therein are often “fitted” to an acceptable level of “error” from reality. Thus “fit”, “fitted”, and “fitting” and similar terms are to be understood as adjusting the model and/or parameters to acceptable values based on some engineering predefined “error” from reality.[0016] Take for example a modeling of a resistance wherein the model only has resistance values in increments of 10 ohms, that is a resistance can be 10 ohms, 20 ohms, ... , 3004850 ohms,... 10G ohm, ... , without limitation. If the actual resistance is 111 ohms then a decision needs to be made how to model the 111 ohms. In one approach the actual value is “fitted” to the nearest model with the least “error”. One option in this example is to model the 111 ohm actual resistance as a 110 ohm resistance, with a resulting “error” of -1 ohm (110-111 = -1). The other nearest
model is to model the 111 ohm actual resistance as a 120 ohm resistance, with a resulting “error” of +9 ohms (120-111 = +9). Choosing a model of 110 ohms underestimates the actual value and a model of 120 ohms overestimates the actual value. Depending on the user selected criteria one or the other model value would be used. For example, if the actual resistance is directly related to a circuit timing then selecting the lower model of 110 ohms will result in a faster response than reality, and selecting the upper model of 120 ohms will result in a slower response than reality. If the user criteria is to make sure the circuit works, then choosing the 120 ohm model is more prudent.[0017] Similar to the example of the resistor above the modeling of timing, delays, capacitance and other parameters influence if the user wants to err on underestimating or overestimating.[0018] The “error” can be considered a predicted error sometimes denoted ‘e’ if we can compute its likely range.[0019] The goal of the modeling is to get as close as possible to reality so as, for example, to run a design at the highest frequency possible. If a user designs to the absolute edge then there is no margin. For example, if the design edge is suited for operation at 1.1013GHz operation and the temperature changes 1 deg C it’s likely the design will stop operating. Thus, engineers look to use guard bands which are outside the absolute edge of a design and allow for proper operation by “guarding” the design, timing, without limitation. For example, in the 1.1013GHz design mentioned above, a set of simulations with conditions changed, for example, operation from - 40deg C to + 125 deg C might yield that if the clock frequency of 1.1013GHz is lowered to 1 0GHz the design will operate over the -40deg C to + 125 deg C range. This may be an acceptable tradeoff. Guard bands are determined in models by multiple simulations where parameters are changed to see the overall effect on a design. Often the multiple simulations will lead to a range of guard bands where the user can decide what is acceptable. For example, in the 1.1013GHz example above if the user knows that the system will only be in operation from 25deg C to 60deg C, then the user may view guard bands that cover that range only and decide on the acceptable maximum clock frequency. What is to be appreciated is that guard bands can cover a variety of parameters
and are used by an engineer, designer, or user to try and guarantee acceptable performance whether that be frequency, low power, or any other factor.[0020] In logic design, for example using a flip flop in which data and a clock enter there are set-up times for data with relation to the clock both for a rising output and a falling output. Likewise for a flip flop there are data hold times for a rising output and falling output. Accordingly, it is possible to have guard bands for each of these four scenarios mentioned.[0021] Similar to the resistance example above, with respect to the flip flop example directly above, there are overestimates and underestimates. That is, one can overestimate a data hold time to guarantee that the data is clocked in (which is good), versus underestimating a data hold time in which case data is not guaranteed to be clocked in (bad). Likewise for set-up time one is good and the other not desirable. Accordingly, depending upon the choice, different guard bands can be established to assure proper operation.[0022] A model can also be based on a signal transition. For example, a simple inverter using a pull-up and pull-down transistor arrangement (e.g. PMOS-NMOS) can have a different delay based on a high to low signal transition, versus a low to high transition. This can be due to a variety of factors, such as, but not limited to differing transistor size (e.g. L/W), differing electron mobility, gate oxide thickness (e.g. Cox), without limitation. What is to be appreciated is that a functional block, for example a driver, may have a different high to low transition model and a low to high transition model. Accordingly, functional blocks often have a pair of models associated with them.[0023] To explain in greater detail, the fundamental reason for an error is that the transistors in a PLD each have a non-linear behavior, which usually requires iterative numerical simulation methods to simulate. That is what a SPICE simulation does. The method and apparatus of the present disclosure create high-level and concise delay models that are closed form and use polynomial functions. Accordingly, the delay models are relatively fast to compute and suitable for use in FPGA design toolsets. However, since these delay models only approximate the real non-linear equations that dictate the physical behavior of the transistors, it is unavoidable to have some errors. The method and apparatus of the present disclosure strikes a balance between the model’s conciseness and the model’s expressiveness, and hence accuracy.
[0024] In the discussion above the guard band was vastly simplified to get the concept across. The method and apparatus of the present disclosure has another way of deriving guard bands. Normally an aggregated model error is determined by using the maximum or average error of the particular model for a few test cases. A model is usually fitted to minimize that aggregated model error. In the method and apparatus of the present disclosure our case, we may minimize the maximum absolute error by the technique discussed below.[0025] Plotting the modeling error by each individual test case, reveals a bell-shaped curve like a normal distribution. Most test cases have very small absolute errors, but a few test cases may become the tail of the distribution. A bell-shaped distribution has tails on both sides, the left side tail being an underestimate of the delay and a right side tail being an overestimate of the delay. In the case of estimating circuit operating frequency, which is equivalent to performing a setup timing check in timing analysis terminology, then the left side tail population is not desirable because it gives underestimates of delays and hence overestimates of operating frequency. Therefore, we treat the amount of delay error for the left tail population as an additional guard band to be added to the model predicted delays. That is, because the left tail contains cases of delay underestimates, we decide the guard band based on the delay error at the left tail.[0026] The guard band is obtained from a bell-shaped error distribution from the second set of test cases (data). The first set of test cases (data) is well controlled and has meaningful attributes (such as all pairs of fanouts are on) to help reduce the number of simulations to create the model. The second set of test cases (data) are more random and more evenly distributed in terms of fanout on/off combinations. It tends to capture more outliers and gives a more exact tail distribution. It is not strictly necessary to guard band all the tail points. A small portion, such as 2~5% of tail populations, may remain slightly underestimated in delays. This is because the delay models are for individual stages. As circuit operating frequency is determined by the critical circuit path consisting of multiple stages, some of the stages have positive errors and others have negative prediction errors, and they tend to cancel each other along the path. So statistically, leaving a very small portion of tail populations being mitigated for its underestimate magnitude but without completely eliminating its underestimate actually does not compromise the prediction of a circuit path delay, or the circuit performance. The benefit is a reduced need to overly guard band the model.
[0027] In hold timing analysis (also called minimum delay analysis) which relates to making synchronous circuits operate functionally correctly, preferably the prediction is not overly overestimated. That is, all circuit paths are to have some minimum delay value otherwise the circuit may have race conditions and may malfunction. However, an overestimate of delay in a delay model would give a false positive in hold timing check, while the real silicon runs the risk of violating the hold timing and malfunctioning. So the guard band is applied to the right side tail population of the bell-shaped error distribution in a manner similar to that discussed above, i.e. a small portion is remains overestimatedBRIEF DESCRIPTION OF THE DRAWINGS[0028] Some illustrative aspects, features and elements related to example implementations of the present disclosure are described herein with reference to the following description and drawings. Various ways in which the principles disclosed herein are practically implementable are thus described, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The foregoing, and other features and uses of the present disclosure, become more apparent in view of the following description in conjunction with each enumerated figure (FIG.) of the accompanying drawings. Throughout the specification of the present disclosure, the like reference numerals (as shown in each FIG. of the drawings) generally refer to the like components, features and/or elements. In the drawing figures, therefore:[0029] FIG. 1 depicts an example PLD implementation;[0030] FIG. 2A depicts an example model of the PLD implementation;[0031] FIG. 2B depicts an example model of the PLD implementation showing example stages;[0032] FIG. 3 depicts an example model of a PLD implemented with aggregated Utrees;[0033] FIG. 4A and FIG. 4B depicts an example PLD model;[0034] FIG. 5 depicts a flowchart of an example method for modeling a PLD;[0035] FIG. 6 depicts a flowchart of an example method for estimating a delay related to a PLD model;[0036] FIG. 7 depicts a flowchart of an example method for computing a first circuit simulation;[0037] FIG. 8 depicts a flowchart of an example method related to a second circuit simulation computation;
[0038] FIG. 9 depicts an example simulator space tool chain;[0039] FIG. 10 depicts an example user designer space;[0040] FIG. 11 depicts an example computer system;[0041] FIG. 12 depicts a first delay equation; and[0042] FIG. 13 depicts a second delay equation.DETAILED DESCRIPTIONOVERVIEW[0043] In the description that follows “delay” and “time” and “delay time” and similar phrases are used interchangeably as one of skill in the art understands their units of measurement are time.[0044] In the description that follows “delay” and “time” and “delay time” and similar phrases and “frequency” are used interchangeably as one of skill in the art understands they are the reciprocal of each other. Delay = 1/Frequency, and Frequency = 1/Time. The units of Frequency are Hertz, and those of time/delay are seconds.[0045] An example implementation relates to methods for modeling delays in a PLD and estimating signal related delays in a design to be implemented on the PLD. The method includes modeling the PLD design in relation to one or more stages. Each of the stages has a driver and one or more receiver inputs coupled to the driver by a wiring tree. The wiring tree includes none, or one or more programmable switches. The modeling is based on a selected set of parameters, which include one or more slope related delays associated with the driver, a delay related to a layout of the wiring tree, a plurality of parameters related to each of the switches, if any, that adds capacitive loading to each of the stages, and a parameter related to a slope transfer from a previous driver output, the previous driver upstream from the driver sequentially in relation, ordinally, to the one or more stages.[0046] A predetermined set of values is accessed for each of the selected parameters of each of the modeled stages from a first computer readable storage medium. The estimated signal related delays are computed for each of the modeled stages based on a sum of the corresponding accessed selected parameter values. The parameters can be coefficients of independent variables (e.g. LT) or the coefficient of the square of an independent variable (e.g. QTA2). The computed estimated signal related delays for each of the modeled stages is written to a second computer- readable storage medium which is used to determine a guard band for the PLD maximum operating frequency or a maximum delay analysis.
EXAMPLE PLD[0047] An example implementation relates to determining delays in a PLD. In relation to the present description, a PLD represents an Integrated Circuit (IC), which is programmably operable to perform specified processes, such as one or more logic functions. An example implementation relates to an FPGA, which represents a PLD that has an array of programmable tiles. The programmable tiles may include, for example (and without limitation), input/output blocks (IOs), configurable logic blocks (CLBs), dedicated random access memory blocks (RAM), processors, multipliers, digital signal processing blocks (DSPs), clock (CLK) managers, delay lock loops (DLLs), and interconnect lines (INT).[0048] The programmable tiles are generally programmed by loading a stream of configuration data into internal configuration memory cells that define how the programmable elements are configured. The configuration data are read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The corresponding collective states of the individual memory cells then determine the operable function of the FPGA.[0049] FIG. 1 depicts an example PLD implementation 100. The PLD 100 is disposed on a semiconductor die 110. A fabric, network and/or pattern of conductors 120 are disposed within the semiconductor die 110 and effectuate electrical interconnection between the various tiles. The conductors 120 include electrically conductive traces and/or vias (also known, e.g., as "VIAs," or Vertical Interconnect Accessways). Thus, the PLD 100 should be understood to have a three dimensional (3D) spatial, structural, and/or electrical conductor architecture.[0050] The PLD 100 includes columns of logic tiles including configurable logic blocks(CLBs), input/output blocks (IOs), and programmable interconnect tiles (INTs) that are used to programmably interconnect the logic tiles. Terminating tiles (TERMs) surround the columns of logic tiles and can connect the PLD 100, through the conductors 120, with a programmer for loading a user’s design onto the PLD. The TERMs also couple the PLD 100 with other devices that, upon its programming, are operably controlled or otherwise interactive with the PLD 100.[0051] The programmable tiles are programmed upon loading a stream of configuration data into internal configuration memory cells of the PLD 100, which define how the programmable
elements thereof are configured. The configuration data are generally read from memory (e.g., from an external PROM) or written into the PLD 100. The corresponding collective states of the individual memory cells then determine and program the operable function of the PLD 100. For example, one or more of the CLBs are thus configurable to implement Digital Signal Processing (DSP), Digital Lock Loop (DLL), Clock (CLK), or other logic functions.[0052] While an example implementation of the delay calculations for PLD 100 is described in relation to an FPGA, it should be understood and appreciated that additional and/or alternative implementations relate to other types of PLDs. For example, an example implementation relates to a PLD programmed with application of a processing layer, such as a conductive (e.g., metallic) layer, which interconnects the various components of the PLD 100. Such PLDs are sometimes referred to as "mask programmable" PLDs.[0053] In an additional or alternative implementation, the operability state of the PLD 100 is configured using fuse and/or anti-fuse processing. The terms "PLD," and "programmable logic device," as well as the example FPGA implementation described herein, should be understood, without limitation, to describe these devices, and devices that are partially (but not wholly) programmable, such as an IC that includes a combination of hard-coded transistor logic and a programmable switch fabric, which programmably interconnects the hard-coded transistor logic.EXAMPLE PLD MODELS[0054] FIG. 2A and FIG. 2B each depict an example model 200 of the PLD implementation. In an example implementation, the model 200 represents, e.g., "abstracts" a portion of the PLD 100 (FIG. 1). The features and elements described in relation to FIGs. 2A - 2B should be understood to be programmed based on a stream of configuration data, loaded into internal configuration memory cells of the PLD 100. The model 200 represents an implementation of a programmed FPGA configuration of at least a portion of the PLD 100.[0055] The model 200 represents the PLD 100 as having one or more delay stages, which are also referred to herein as 'Utrees'. The model 200 depicted has a driver ('d') 210 and one or more receivers (e.g., 220, 230, 240, 250 denoted rl, r2, r3, r4 respectively), which are coupled to the driver 210 with a first resistive/capacitive ('RC') wiring tree 215 connecting to rl and r2, and
through a second RC wiring tree 245 to r3 and r4. The first RC wiring tree 215 connects to programmable switches arranged in a plurality of fanouts from the driver 210 to each of the receivers rl and r2 (and in the case of r3 and r4 through a second RC wiring tree 245 after first going through programmable switch 241).[0056] A first fanout from driver 210 to receiver 220 includes the driver 210 output, RCWiring Tree 115, programmable switches 221 and 222, and to the input of receiver rl 220. This first fanout is illustrated in FIG. 2B by the dashed line labeled 280. Note that this fanout includes the driver (d) 210 output, a portion of the 215 RC wiring tree that connects with switch 221, and switch 222, and to the input of the receiver (rl) 220.[0057] A second fanout from 210 to 230 (denoted 210-230) includes a second receiver 230(r2) input, which is coupled to the driver 210 output through the RC wiring tree 215 and a switch 231.[0058] A third fanout 210-240/250 includes a fourth fanout 210-240, and a fifth fanout210-250. The fourth fanout 210-240 includes the driver d 210, part of RC wiring tree 215, switch 241, part of RC wiring tree 245, switch 242, and to a third receiver 240 (r3) input. The fifth fanout 210-250 includes the driver d 210 output, part of RC wiring tree 215, the switch 241, part of RC wiring tree 245, the switch 253, and to a fourth receiver (r4) 250. For illustrative purposes only, in FIG 2B at 290 is shown the fifth fanout 210-250 with a dash-dot line.[0059] FIG. 3 depicts an example model of a PLD. An example implementation relates to aggregating a portion of delay stages 310 'UtreeT with at least a portion of a second of the delay stages 320 'Utree2' into the aggregated stage 320.[0060] Utreel 310 includes a driver 311 (1), a first receiver 315 (2), and a second receiver319 (3). The first receiver 315 (2) is coupled to the first driver 311 (1) through a wiring tree, which includes the switch 312. The second receiver 319 (3) is coupled to the first driver 311 (1) through a wiring tree, which includes the switch 317. It should be noted that the wiring tree connects the output of driver 311 (1) to switches 312, 317, and optionally other switches. As shown in FIG. 3 the wiring tree connects the output of driver 311 (1) to switch 312, switch 317, and any other branches with switches that may exist as denoted by the ellipsis at 327.
[0061 ] Aggregated Utree2320 includes driver 315 (2) (which is implemented as a function of the first receiver 315 (2) and a fourth receiver 329 (4). The fourth receiver 329 (4) is coupled to the second driver 315 (2) through a respective portion of the wiring tree, which is implemented to have a fixed load (e.g., without active switches).[0062] Driver 315 is aggregated with receiver 329 to define Utree2 320. Utree 300 includes driver 311 (1), receiver 319 (3) and the receiver 329 (4). Thus, from the standpoint of the driver 311 (1) it has two receiver endpoints, receiver 319 (3), and receiver 329 (4).[0063] It should be noted for aggregation purposes that the aggregated Utree, i.e. Utree2320 has a direct connection from Utreel 310 driver 315 output and the direct connection has a fixed load, that is the connection has no active switches. For example, as illustrated the output of 315 is directly connected to the input of 329, showing no switches present.[0064] An example implementation relates to methods for estimating signal related delays in the design of the PLD, and includes modeling the PLD design in relation to one or more stages, as described with reference to FIG. 4A, FIG. 4B, FIG. 5, FIG. 6, FIG. 7, FIG. 8 and/or FIG. 9, below. In executing the methods, implementing the aggregated Utree 300 simplifies the computations by reducing the number of models needed.[0065] FIG. 4A depicts an example PLD model 400. The PLD model 400 includes a driver 410 (d), receiver 420 (rl), and receiver 430 (r2), which are each coupled to the driver 410 (d) through a wiring tree that includes a common resistance 415 (R(l,2)).[0066] Receiver 420 (rl) is coupled to the driver 410 (d) through the common resistance415 (R(l,2)) and a first fanout coupled therewith. The first fanout includes a resistance 411 and a switch 417 (SI).[0067] Receiver 430 (r2) is coupled to the driver 410 (d) through the common resistance415 (R(l,2)) and a second fanout coupled therewith. The second fanout includes a resistance 412 and a switch 418 (S2).
[0068] In an example implementation, signal related delays for the PLD model 400 are computed according to Equation 1, below also shown in FIG. 12 at 1200.(Equation 1)In Equation 1, 'D' represents the signal related delay of the model 400. (j, T, P, X) represent the variables that 'D' is a function of as detailed on the right hand side of Equation 1. j, Ί I* are explained below and X denotes the set of switches that are on. Underlined X is the set of switches that are on (underline indicates a vector.) X(s) is 1 if switch s is on, else 0.In Equation 1, 'A' represents a fixed arc delay associated with the driver 410. An arc delay is a delay across a functional block, in this case the driver 410.In Equation 1 'B(j, P)' represents a baseline wire delay to a fanout '/' in a wiring layout P. P is dimensionless and is an index to a list of physical layouts.In Equation 1, the sum 'LT + QP' represents a slope-dependent delay related to the driver. An example implementation reduces overfitting by constraining the linear component Z, which is the slope transfer coefficient, of the sum LT + QP to a value greater than or equal to zero ( L=>0 ), and constraining the quadratic component Q thereof to a value of less than or equal to zero (Q<=0) . Q has the units of 1/T.In Equation 1, the term '' represents incremental delays added by switches in their 'on' states adding capacitance on a branch of the wiring path to the fanout '/, which adds accuracy to the delay calculation with respect to the baseline wire delay.
When there are no switches that add capacitive loading, the terms in the summation (K, R, and X) disappear and only B parameters and the slope-dependent parameters (L/Q) and A remain.[0069] The following independent variables in Equation 1 are represented by the symbols described in Table 1, below.TABLE 1Symbol Represents j F anout to Receiver of Intere stS Set of all Switches driven by the Driver Output's-E S ' '" s' is an Element of the Set 'S '"X(s) Value = 1 (one) if Switch 5 is 'on'; else 0 (zero)T Transition Time at the Driver InputP Physical Layout of Wiring TreeR(sj,P) Resistance of Common Path including switch 5 and receiver j in layout P X Switches that are ‘on’The common-path resistance term R(sj,P) allows K [or K ’] values to be independent of wire layout type P.K(s,j) represents the effective capacitance introduced when turning on switch s, when measuring delay from the driver to receiver j.[0070] Example implementations thus relate to a method for estimating the delay to particular fanouts of particular delay stages (e.g., Utrees) based on the conduction state of the switches thereof, using a parameterized delay model. The method is computed using the transition time dependent parameters (e.g., L and Q; Equation 1), each with their constrained signs, and the additive term (e.g., K(s); Equation 1) for each switch that adds capacitive loading to the stage, which includes the common path resistance factor (e.g., R(s); Equation 1).[0071] In an example implementation, transition times are estimated, as well. The transition time, also referred to as the 'slope' of the stages, is estimated at the input of each delay stage. Like the delay estimates discussed above with reference to Equation 1, the slope estimates
are computed from a previous delay stage. For each stage, an example implementation computes the delay to, and the slope at, each fanout.[0072] In an example implementation, the slope related delays for the PLD model 400 are computed according to Equation 2, below, and as shown in FIG. 13 at 1300.(Equation 2)In Equation 2, ' represents the transition time related delay of the model 400, 'L'Tm' represents a slope transfer from a previous driver input, and 'B(j,P)' represents a baseline slope to a fanout '/' in the wiring layout P and X denotes switches ‘on’ . Underlined X is the set of switches that are on (underline indicates a vector.) X(s) is 1 if switch s is on, else 0.When there are no switches that add capacitive loading, the terms in the summation (K’, R, and X) disappear and only B’ parameters and the slope-dependent parameter (L’) remains.[0073] In Equation 2, the term(s,j, P ) (s) ' (similar to that in Equation1) represents the incremental delays added by the switches in their 'on' states adding capacitance on the branch of the wiring path to the fanout f ' , which adds accuracy to the model.[0074] The following independent variables in Equation 2 are represented by the symbols described in Table 2, below.TABLE 2Symbol Represents j F anout to Receiver of Intere stS Set of all Switches driven by the Driver Output"'s' is an Element of the Set 'S '" m Value = 1 (one) if Switch 5 is 'on1; else 0 (zero)U Slope Transfer Term (coefficient)
TinTransition Time at the Driver InputP Physical Layout of Wiring TreeR(sj,P) Resistance of Common Path including switch 5 and receiver j in layout PX Switches that are ‘on’The common-path resistance term R(sj,P) allows K [or K ’] values to be independent of wire layout type P.K’(s,j) represents the effective capacitance introduced when turning on switch s, when measuring delay from the driver to receiver j.[0075] In an example implementation, the slope models use data that overlaps, partially or completely, with a data set used in the delay models, which are described above with reference to Equation 1. One or more of the stages potentially have zero slope transfer (the term 'Z' Equation 2)·[0076] FIG. 4B depicts the PLD model 400, coupled to an ordinally previous stage 499.[0077] In view of a zero value for the slope transfer term L an example implementation estimates the slope at the beginning of the current delay stage 400, based on the slope determined in relation to a stage 499 previous thereto, and prior to estimating the slope at the end (output) of the current delay stage 400. In an example implementation, computation of the transition time model thus includes a recursive routine, which traces through a plurality of sequential stages (e.g., multiple previous stages), including at least the previous stage 499.[0078] In an example implementation, the recursive routine used in computing the transition time model terminates, upon computation of a result corresponding to reaching a stage, in which the slope transfer term L which is the slope transfer coefficient, has a value of zero (0). An approach is thus implemented that relates to linear programming, and analogous to the approach used in computing the delay models, so as to fit the slope model parameters. The inclusion of the slope transfer function U in computing the transition time models increases the accuracy achievable using this approach, e.g., compared with conventional approaches. For example, for some drivers (e.g. inverters) the slope at the input of the driver affects the slope at the output. Failure to capture this effect leads to less accurate delay models.
[0079] Example implementations thus relate to a method for estimating the transition time, or slope delay to particular fanouts of particular delay stages using a parameterized delay model. The method is computed using the transition time dependent parameters.[0080] The method computes aggregated delay stages (where the aggregation of the Utrees does not appreciably expand the size of the delay model). Example implementations also relate to a method of determining the parameter values for the delay model.[0081] An example implementation relates to methods for estimating signal related delays and slope related delays in the design of the PLD model 400 and computation of Equation 1 and Equation 2, above, and includes modeling the design thereof as described with reference to FIG. 4, FIG. 5, FIG. 6, FIG. 7, FIG. 8, FIG. 9, and/or FIG. 10, below.EXAMPLE METHODS[0082] FIG. 5 depicts a flowchart of an example method 500 for modeling a PLD. The method 500 relates to determining values for a set of parameters related to one or more delay models of a PLD design (e.g., example PLD 100, PLD 300, PLD model 400; FIG. 1, FIG. 3, FIG. 4A-4B, respectively), according to an example implementation.[0083] In step 510, a PLD design is modeled in relation to one or more stages, the stages respectively comprising a driver and one or more receivers coupled to the driver with a wiring tree. The modeling based on a selected set of parameters comprising: one or more slope related delays associated with the driver; a delay related to a layout of the wiring tree; and a parameter related to a slope transfer from a previous driver input, the previous driver upstream from the driver sequentially in relation, ordinal, to the one or more stages. In step 518, the wiring tree includes one or more programmable switches, and the plurality of parameters include parameters related to each of the one or more programmable switches. It is to be noted, that the programmable switches add capacitive loading to the stages.[0084] The plurality of parameters refers to each term in the summations over s,j (K,R,X for Equation 1 and K’,R,X for Equation 2) .[0085] In an example implementation shown in optional block 511 the slope related delays
associated with the driver include an arc delay having a fixed duration, and/or a delay with a duration dependent on a slope of a transition time of the driver.[0086] In an example implementation shown in optional block 512 the slope related delays associated with the driver include a slope-dependent driver transition time delay that is a sum of a linear component constrained to a value greater than or equal to zero, and a quadratic component constrained to a value less than or equal to zero.[0087] In an example implementation shown in optional block 513 the delay related to a layout of the wiring tree relates to a fanout of the stages from the driver to the receivers.[0088] In an example implementation shown in optional block 514 the plurality of parameters related to the switches that adds capacitive loading to the stages include: a capacitance factor corresponding to the switches in an ‘on’ state; and a resistance factor corresponding to a path that includes the ‘on’ stage switches and one of the receivers.[0089] Further, in an example implementation shown in optional block 515 the modeling includes aggregating a first one of the stages into an aggregated stage and at optional block 516 the at least second stage includes a fixed load.[0090] In a step 520, a predetermined set of values is accessed for the selected parameters of the modeled stages from a first computer readable storage medium.[0091] In a step 530, the estimated signal related delays are computed for the modeled stages based on a sum of the corresponding accessed selected parameter values.[0092] In a step 540, the computed estimated signal related delays for the modeled stages are written to a second computer-readable storage medium, optionally as one or more configuration files.[0093] In an optional step 550, code of a design tool is executed by one or more processors, the code operable to utilize the computed estimated signal related delays for the modeled stages to estimate the signal related delays.[0094] FIG. 6 depicts a flowchart of an example method for estimating a delay related to
a PLD model. The method 600 relates to determining values for a set of parameters related to one or more delay models of a PLD design, such as the example PLD 100, according to an example implementation.[0095] In step 610, a first data set and a second data set are populated, each data set comprises distinct, and independent data corresponding to a plurality of target parameters, wherein the PLD design is modeled in accordance with steps 510, 520, 530 and 540 (and optionally steps 511-516) of method 500 of FIG. 5.[0096] Optionally, in step 610, the populating includes reading one or more configuration files that indicate how the plurality of target parameters are to be generated, the configuration files including computed estimated signal delays for each stages of a model.[0097] In a step 620, a first simulation of a circuit corresponding to the modeled PLD design is computed based on the first data set, in which a corresponding first set of target parameters is fitted. In one optional example, upon the computation of the first simulation of the circuit, the corresponding first set of values related to the target parameters is fitted based on an absolute value of one or more computed prediction errors. In one example the fitting includes reducing a maximum of the absolute value of one or more computed prediction errors until a lowest reduced value is obtained. In this example, the code/data written to the computer readable storage medium in step 699 includes the one or more resulting delays related to the driver.[0098] For example, the target parameters are fit using the observations in the data set.Each observation consists of chosen independent variables and a measured delay (from SPICE simulation). That is, each observation corresponds to a spice-simulated delay through a circuit from a driver to a receiver. The observation includes the measured delay and the independent variable values (e.g. on-switches and wire layout). Linear programming is used to fit the target parameters by minimizing the maximum delay prediction error in the data set. For example, errors are differences between SPICE simulated delay and the respective predicted delay from Equation 1[0099] In one optional example, the values saved at 699 are saved as code 662 and when executed estimate signal related delays corresponding to the saved first and second set of values.
[00100] In one optional example, each of the stages includes at least one fanout. Each of the fanouts spans from the driver of the stage to one of the receivers thereof, which is coupled to the driver with the wiring tree thereof that includes the driver and the receiver.[00101] FIG. 7 depicts a flowchart of an example method for computing a first circuit simulation. FIG. 7 thus represents other (e.g., optional) details related to the step 620 of FIG. 6.[00102] These aspects of the step 620 are described below, with reference to detail blocks 722 through 727.[00103] In the example of FIG. 7, as shown by block 722 recording a pair of slope related delays associated with the driver, in which each of the set of stages includes at least one fanout, each of at least one fanout spanning from the driver of the set of stages to one or more receivers thereof, and coupled to the driver with the wiring tree thereof between the driver and the receiver, and an active path in the at least one fanout from the driver to the receiver includes at least one switch in a conductive ‘on’ state.[00104] In block 723, optionally, a set of data points from a first saved set of values related to one or more delays related to the driver is selected.[00105] In a block 724, each of a recorded pair of slope related delays associated with the driver is inserted into one of the selected set of data points.[00106] In a block 725, a delay related to a layout of a wiring tree and parameters related to each of a set of switches that adds capacitive loading to each of a set of stages are fit, wherein a maximum of an absolute value of one or more computed prediction errors is minimized.[00107] In a block 726, corresponding values for the delay related to the layout of the wiring tree and the parameters related to each of the set of switches that adds capacitive loading to each of the set of stages are computed.[00108] In a block 727, the computed corresponding values for the delay related to a layout of the wiring tree and the parameters related to each of the set of switches that adds capacitive loading to each of the set of stages are recorded, in which the recorded pair of the slope related
delays associated with the driver, and the recording values for the delay related to the layout of the wiring tree and the parameters related to each of the set of switches that adds capacitive loading to each of the set of stages are written to a computer readable storage medium.[00109] In step 630, a second simulation of the circuit corresponding to the modeled PLD design is computed based on the second data set wherein a corresponding second set of values related to a plurality of guard bands are defined.[00110] Optionally, in step 630, the second set of values related to the plurality of guard bands includes a broad array of varying independent variables related to which of the programmable switches are in a conductive state, which of the one or more stages drives a stage under test, and which of one or more physical layouts of the PLD correspond to the stages, there being no overlap between the first data set and the second data set.[00111 ] FIG. 8 depicts a flowchart of an example for computing a second circuit simulation. FIG. 8 thus represents other (e.g., optional) aspects of the step 630 (FIG. 6) described below.[00112] In a block 831, one or more of an allowable rate (R) of one or more underestimates related to set-up times, or one or more overestimates related to hold times for each of a set of delay models, are determined, and one or more delay prediction errors for each of a second set of values in a second data set are generated.[00113] In a block 832, each of the generated one or more delay prediction errors (‘e’) are ordered from a smallest ordinal value thereof to a largest value thereof, and delayed prediction errors are selected having a rising signal when the input of the device, or stage, within the PLD has a rising signal or delayed prediction errors having a falling signal when the PLD has a falling signal. In relation to the set-up times, the one or more delay prediction errors (‘e’) is computed such that the one or more allowable rate (R) includes a fraction of the generated delay prediction errors with an ordinal value smaller than the computed delay prediction error, in which the guard band is set to the value ‘e’ and in relation to the hold times, the delay prediction error ‘e’ is computed such that the allowable rate R includes a fraction of the generated delay prediction errors with an ordinal value larger than the computed delay prediction error, in which the guard band is value ‘e’.
[00114] In a block 833, optionally, a plurality of guard bands includes a first guard band including an estimate of at least one set-up time when the PLD has a rising output signal, a second guard band that includes an estimate of at least one set-up time when the PLD has a falling output signal, a third guard band that includes an estimate of at least one hold time when the PLD has a rising output signal, and a fourth guard band that includes an estimate of at least one hold time when the PLD has a falling output signal.[00115] Referring back to FIG. 6, in a step 638 values identified in the first simulation and values identified in the second simulation are saved to a computer-readable storage medium.[00116] Referring back to FIG. 6, in a step 640, it is determined whether the slope transfer coefficient for the current stage is equal to zero. If the slope transfer coefficient L' for the current stage is equal to zero (L'= 0), then the method 600 termination is achieved at a step 699, and values identified in the first simulation and values identified in the second simulation and values identified in a recursive routine, to be discussed below, are saved to a computer-readable storage medium.[00117] Optionally, at 662 values are save as code and executed to estimate signal related delays corresponding to the saved first and second set of values.[00118] If however it is determined in step 640 that the slope transfer coefficient for the current stage is not equal to zero (L' ¹ 0), then a step 650 is performed. In the step 650, the slope at the beginning of the current stage is estimated, based on the slope of the stage ordinally previous thereto, prior to estimating the slope at the end of the current stage. The method 600 then loops back and re-performs the step 640, until the slope transfer coefficient L ' for the current stage equals zero (0), and is thus a recursive routine.[00119] In one optional example 660, for the current stage in which L’ ¹ 0, a recursive routine is executed, which traces L’ through multiple ordinal previous stages to estimate slope of the current stage until L’=0. That is, the recursive routing persists until the value estimated for the current stage is equal to zero (0).[00120] In an example implementation, the method 500 and 600, respectively described with reference to FIG. 5 and FIG. 6, and the steps and blocks thereof described with reference to FIG. 7, FIG. 8 and FIG. 9 are executed by one or more computer systems. In example
implementations, the data computed in relation of these methods is implemented across a tool chain for designing PLDs.EXAMPLE TOOL CHAIN AND COMPUTER SYSTEM[00121] FIG. 9 depicts an example tool chain 900 for simulation of PLDs, including FPGAs (and/or other ICs) delay models. The tool chain 900 includes a simulator space 910. Generally, the simulator space 910 is deployed with the supplier, manufacturer, designer or vendor of the subject PLD. A designer space 1030 (in FIG. 10), on the other hand, is generally deployed with an end (or midstream) user of the PLDs.[00122] The simulator space 910 includes a simulator computer 911, which is operable for executing and/or performing an IC simulation program such as SPICE, and has access to all relevant databases, circuit netlists, and product data relevant to designing the subject PLDs. Moreover, the simulator computer (and/or computers operable with the data generated therewith) are operable based on a set of program files 916, which are encoded tangibly on a computer readable storage medium operable with the simulator computer 911.[00123] In example implementations, the program files 916 include data, which when executed and/or performed by one or more processors of the simulator computer 911 cause the execution, performance and/or or control of one or more of the method 500 or the method 600 (FIG. 5, FIG. 6; respectively). 917 is a model fitter downstream from the SPICE simulations. In an example implementation, the simulator computer outputs a set of delay models 912. Thus, the simulator computer 911 computes, model fitter 917 fits, and simulator computer 911 stores a set of delay models 912, e.g., for the PLD 100 (FIG. 1), based (at least in part) on the method 500 and/or the method 600.[00124] FIG. 10 depicts an example of a tool chain 1000 having a designer space 1030 which includes the design toolset 1033. The set of delay models 1012 are included in a design toolset 1030. The delay models 1012 are derived from, or are the same as, the set of delay models 912 in FIG. 9. The design toolset 1033 is operable in relation to preparing a user design implemented on a PLD such as an FPGA.[00125] The design toolset 1033 also includes a design library 1035 of predesigned circuit
designs related to a selection (e.g., catalog) of PLDs. The design library 1035 optionally has information related to the operational frequency of the predesigned circuit designs.[00126] An example implementation relates to methods (e.g., 500, 600; FIG. 5, FIG. 6, respectively) for producing a set of delay models 912 for the circuit elements on the PLD (e.g., 100; FIG. 1), and allow deployment of the delay models 912 in design toolsets 1033 for PLDs as set of delay models 1012.[00127] An example implementation relates to processes for analyzing circuit timing functionality across the tool chain 1000. Design toolset 1033 based on the example implementations described herein allow users to effectively and efficiently compute speeds at which their circuit designs are accurately expected to perform on the PLDs they design therewith.[00128] While the set of delay models 912 themselves are generally not used directly in programming PLD devices (e.g., PLD 100; FIG. 1) to run a user's design, they allow a given bit stream set to be loaded into a PLD to program and run the exact same, and/or amended and revised variants of the particular user design. Thus, while the parametric delay models and their parameters are not directly disposed on the PLD so as to physically configure the programmable elements thereof, they are deployed in the design toolset 1033, with which they are programmably configured.[00129] Conventional design toolsets with various delay model sets generally generate different predictions, based on the frequency at which the user’s design is run. Unfortunately, the inaccuracy of delay models generated using conventional techniques demands the use of excessive guard bands to be conservative. Such excessive guard bands increase the delay estimate, generally rendering delay estimates that are excessively conservative in view of the actual capability of the PLD, and thus needlessly constrain the predicted operating frequency in an effort to ensure that a user design is at least functionally correct and operable.[00130] Therefore, although the PLD 100 can run the user’s design at higher frequencies, conventional toolsets generally predict a lower operating frequency. This constrains the user to setting up a clock frequency for the PLD 100 according to the lower frequency prediction generated by the toolset. When the PLD 100 is ultimately programmed based on configuration
data so constrained, its operable performance (e.g., speed) is likely thus less than (e.g., slower than) that, which the PLD is actually capable of achieving if not so constrained.[00131] Example implementations described herein provide a method of modeling delays in designs of a PLD, which allows a design toolset 1033 to model on-the-silicon operating frequency of the PLD with greater accuracy. Set of delay models 912 and 1012 implemented according to the disclosure herein provide a more exact reflection of the true operational capabilities of the PLD, as eventually configured in the silicon (or other semiconductor) on which the PLD is disposed.[00132] As denoted by their names, PLDs are programmable integrated circuit (IC) devices and thus allow connection of their circuit elements, based on programmed configuration data, in a variety of ways to enable various user design functionality, design performance, and operating frequency. In view of the flexibility and variability of the PLDs, the design toolset 1033 implemented based on the present disclosure allows users to design using set of delay models 1012. Notwithstanding how the circuit elements on the PLD 100 are connected, example implementations allow the same set of delay models 912 and 1012 to compute a prediction of the performance of each of the user's proposed designs.[00133] As there are many ways to connect the circuit elements of a PLD, exhaustively enumerating all the possible connection pathways becomes impracticable. In example implementations, a limited first set of connection approaches are distilled into a first data set. The first data set connects circuit elements and collects data points on those cases to fit a corresponding set of parametric delay models.[00134] The delay models are refined with a second set of connection approaches different than the first set of connection approaches, which are distilled into a second data set. The first data set and the second data set connect circuit elements and collects data points on those cases to fit a corresponding set of parametric delay models.[00135] The first data set and the second data set generally have no overlapping test cases and are independent of each other because overlapping test cases do not give new information.[00136] In an example implementation, the delay models are verified by using a validation
set. The validation set is independent of the first and second data sets. The validation set verifies the accuracy of the parametric delay models, and verifies that these delay models cover many possible ways of connecting circuit elements with acceptable accuracy.[00137] Example implementations allow PLD users to design PLDs such as FPGAs "in the field" with increased accuracy, relative to contemporary, current conventional approaches to programming processes. The example implementations obviate many of the additional excessive guard bands associated therewith. Example implementations thus increase the performance of field programmers' PLD designs, e.g., in comparison to conventional programming approaches that generally use the additional guard bands.[00138] It should be appreciated that the flowcharts related to the methods 500 and 600 (FIG. 5; FIG. 6, respectively), and the more detailed flow diagrams depicted in FIG. 7 through FIG. 8, inclusive, depict architecture, functionality, and operation of various implementations of methods, computer program products and related tangible computer readable media and computer systems, according to various implementations described in the present disclosure. In relation therefore, each block and/or step in the flowcharts herein represents a portion or segment of code, included in one or more portions of computer-usable program code, which implements one or more of the logical functions described in relation to the flowcharts.[00139] The methods and media described herein are implemented in hardware, software, or a combination of hardware and software. These methods and media are implemented, alternatively, in a centralized fashion in one computer system, or in a distributed fashion, in which different elements are spread across several interconnected computer systems.[00140] While any kind of computer system or other apparatus adapted for carrying out the methods described herein is suitable, an example implementation is disposed, deployed or programmed on a dedicated computer system platform, specialized for performing the computations described herein. In an example implementation, a combination of hardware and software includes a general-purpose computer system with a computer program. Upon loading, execution and performance therewith, the program controls the computer system such that it carries out the methods described in the present disclosure, as a special purpose device.
[00141] Example implementations are also encoded and/or embedded in a computer program product and/or related tangible computer readable storage media. These implementations include all the features enabling the implementation of the methods described herein and which, when loaded in a computer system, are able to carry out these methods and related processes, and to program, configure, direct and control the computer system to perform these methods and related processes.[00142] As used herein, the term "software" refers or relates to any expression, in any language, including but not limited to Hardware Descriptive Language (HDL), a related language, or another language, code or notation, and/or a set of encoded instructions therein, which has the effect of causing a system having an information processing capability to perform a particular function either directly or, upon conversion to another language, code or notation, or reproduction in a different material form. For example, software programs implemented according to the disclosure herein include, but are not limited to, a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library, relational and other database queries and related searches and replies, and instructions, and/or other sequences of instructions and related data designed for execution on the computers described herein.[00143] In example implementations, such software runs (e.g., is read, executed and operably active and operationally functional) in a simulator space and/or a designer space, and on various computer systems, as described with reference to FIG. 9 and FIG. 10, below.[00144] FIG. 11 depicts an example computer system 1150, which in is operable with the design toolset 1033. The computer 1150 has a bus 1151. One or more processors are coupled to the bus 1151. For example, the computer 1150 has a central processing unit (CPU) 1152. The CPU 1152 performs general processing operations related to the operation of the computer 1150, based in part on code such as a basic input/output system (BIOS) stored in a read-only memory (ROM) 1153, to which the CPU 1152 is coupled through the bus 1151.[00145] During performance of its computations, the CPU 1152 is operable for reading data from, and writing data to a random access memory (RAM) 1154. In an example implementation, the RAM 1154 represents one or more memory related components, each operable as computer
readable storage media (CRM) with the CPU 1152 and/or one or more other processors, as described below.[00146] In an example implementation, at least one coprocessor (COP) 1158, such as a mathematics ('Math') coprocessor and/or graphics processing unit (GPU), is coupled to the bus 1151 and operable with the RAM 1154 and/or program code stored on a computer readable storage medium (CRM1) 1155, which is also coupled to the bus 1151. In one example implementation there is as second computer readable storage medium (CRM2) 1159, which is also coupled to the bus 1151.[00147] In an example implementation, the program code stored on the CRM1 1155 also allows the computer 1150 to operate with the design toolset 1033. In an example implementation, an instance of the design toolset 1033 is stored on the CRM1 1155, with a specialized library 1157 (also coupled to the bus 1151), and/or in independent media included within the computer 1150.[00148] The computer 1150 has one or more interfaces 1156 coupled to the bus 1151. The interfaces 1156 are operable for communicatively coupling the computer 1150 to one or more peripherals used by the designer, including (but not limited to) a display, mouse, keyboard, external storage, and/or one or more communications networks.[00149] The simplified models and methods of various example implementations and described with reference to, and Equation 1 and Equation 2, above, are thus computed using a limited, amount of data and this lessens the amount of data used to fit the parameters of the delay models data, and improves the speed with which the delay estimates are computed. Moreover, the example implementations avoid errors in undesired directions by, for example, avoiding underestimates for setup timing.[00150] As described above, each of the stages of the PLD design includes a driver and one or more receivers coupled to the driver with a wiring tree. The wiring tree includes none, or one or more programmable switches. The modeling is based on the predetermined models and delay estimates 1033 as a selected set of parameters, which were pre-computed by the simulator computer 910. The selected set of the parameters includes one or more slope related delays associated with the driver, a delay related to a layout of the wiring tree, a plurality of parameters
related to each of the switches, if any, that adds capacitive loading to each of the stages, and a parameter related to a slope transfer from a previous driver input, the previous driver upstream from the driver sequentially in relation, ordinally, to the one or more stages.[00151] For clarity and brevity, as well as to avoid unnecessary or unhelpful obfuscating, obscuring, obstructing, or occluding features or elements of an example of the disclosure, certain intricacies and details, which are known generally to artisans of ordinary skill in related technologies, have been omitted or discussed in less than exhaustive detail. Any such omissions or discussions are unnecessary for describing examples of the disclosure, and/or not particularly relevant to an understanding of significant features, functions and aspects of the examples of the disclosure described herein.[00152] The term "or" is used herein in an inclusive, and not exclusory sense (unless stated expressly to the contrary in a particular instance), and use of the term “and/or” herein includes any and all combinations of one or more of the associated listed items, which are conjoined/disjoined therewith. Within the present description, the term "include," and its plural form "includes" (and/or, in some contexts the term "have," and its conjugate "has") are respectively used in same sense as the terms "comprise" and "comprises" are used in the claims set forth below, any amendments thereto that are potentially presentable, and their equivalents and alternatives, and/or are thus intended to be understood as essentially synonymous therewith. The figures are schematic, diagrammatic, symbolic and/or flow-related representations and so, are not necessarily drawn to scale unless expressly noted to the contrary herein. Unless otherwise noted explicitly to the contrary in relation to any particular usage, specific terms used herein are intended to be understood as in a generic and/or descriptive sense, and not for any purpose of limitation.[00153] An example implementation is thus described in relation to a method for estimating signal related delays in the design implement on a PLD, such as a FPGA, and a system operable based on the method. The method includes modeling the PLD design in relation to one or more stages. Each of the stages has a driver and one or more receivers coupled to the driver with a wiring tree. The wiring tree includes none, or one or more programmable switches. The modeling is based on a selected set of parameters, which include one or more slope related delays associated with the driver, a delay related to a layout of the wiring tree, a plurality of parameters related to
each of the switches that adds capacitive loading to each of the stages, and a parameter related to a slope transfer from a previous driver input, the previous driver upstream from the driver sequentially in relation, ordinally, to the two or more stages.[00154] A predetermined set of values is accessed for each of the selected parameters of each of the modeled stages from a first computer readable storage medium. The estimated signal related delays are computed for each of the modeled stages based on a sum of the corresponding accessed selected parameter values. The computed estimated signal related delays for each of the modeled stages is written to a second computer-readable storage medium as code, which when executed by one or more processors is operable for estimating signal related delays in the user’s design slated for programming into a PLD.[00155] In the specification and figures herein, examples implementations are thus described in relation to the claims set forth below. The present disclosure is not limited to such examples however, and the specification and figures herein are thus intended to enlighten artisans of ordinary skill in technologies related to integrated circuits in relation to appreciation, apprehension and suggestion of alternatives and equivalents thereto. |
For fabricating a field effect transistor, a gate structure is formed on a gate dielectric on an active device area of a semiconductor substrate. An amorphization dopant and an extension dopant are implanted into exposed regions of the active device area to form drain and source extension junctions extending down to an extension depth within the semiconductor substrate. First and second spacers are formed at sidewalls of the gate structure. Any exposed regions of the active device area of the semiconductor substrate are etched down beyond the extension depth. The drain and source extension junctions remain disposed under the first and second spacers. A layer of doped amorphous semiconductor material is deposited to cover the structures on the semiconductor substrate and is doped with a contact dopant in an in-situ deposition process using a temperature of less than about 500° Celsius. The amorphous semiconductor material is polished down until the top surfaces of the gate structure and the first and second spacers are level with a top surface of the amorphous semiconductor material. The amorphous semiconductor material remaining to the first sidewall of the gate structure forms an elevated drain contact structure, and the amorphous semiconductor material remaining to the second sidewall of the gate structure forms an elevated source contact structure. A thermal anneal is performed using a temperature less than about 600° Celsius to activate the dopants within the drain and source extension junctions and within the drain and source contact structures. Such low temperatures preserve the gate dielectric comprised of a high-K dielectric material. |
I claim: 1. A method for fabricating a field effect transistor within an active device area of a semiconductor substrate, the method including the steps of:A. forming a gate structure on a gate dielectric on said active device area of said semiconductor substrate, wherein said gate dielectric has a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2); B. implanting an amorphization dopant and an extension dopant into exposed regions of said active device area of said semiconductor substrate; C. forming a first spacer at a first sidewall of said gate structure and said gate dielectric, and forming a second spacer at a second sidewall of said gate structure and said gate dielectric; wherein said first spacer is disposed over a drain extension junction extending down to an extension depth within said active device area of said semiconductor substrate and having said amorphization dopant and said extension dopant implanted therein; and wherein said second spacer is disposed over a source extension junction extending down to said extension depth within said active device area of said semiconductor substrate and having said amorphization dopant and said extension dopant implanted therein; D. etching down any exposed regions of said active device area of said semiconductor substrate beyond said extension depth; wherein said drain extension junction remains disposed under said first spacer; and wherein said source extension junction remains disposed under said second spacer; E. depositing a layer of amorphous semiconductor material to cover said first and second spacers and said gate structure and on any exposed regions of said semiconductor substrate; wherein said layer of amorphous semiconductor material is formed to be doped with a contact dopant in an in-situ deposition process using a temperature of less than about 500[deg.] Celsius; F. polishing down said amorphous semiconductor material until top surfaces of said gate structure and said first and second spacers are exposed such that said top surfaces of said gate structure and said first and second spacers are level with a top surface of said amorphous semiconductor material; wherein said amorphous semiconductor material remaining to said first sidewall of said gate structure forms an elevated drain contact structure; and wherein said amorphous semiconductor material remaining to said second sidewall of said gate structure forms an elevated source contact structure; and G. performing a thermal anneal using a temperature less than about 600[deg.] Celsius to activate said extension dopant within said drain and source extension junctions and to activate said contact dopant within said drain and source contact structures. 2. The method of claim 1, wherein said amorphization dopant is comprised of one of germanium(Ge), silicon (Si), antimony (Sb), or xenon (Xe).3. The method of claim 1, wherein said first and second spacers are comprised of silicon dioxide (SiO2) having a width in a range of from about 50 angstroms to about 100 angstroms, and wherein said silicon dioxide (SiO2) of said first and second spacers are formed in an oxide deposition process using a temperature of less than about 400[deg.] Celsius.4. The method of claim 1, wherein said extension depth is less than about 200 angstroms, and wherein exposed regions of said active device area of said semiconductor substrate are etched down by between about 200 angstroms to about 400 angstroms in said step D.5. 
The method of claim 1, wherein said layer of amorphous semiconductor material is comprised of amorphous silicon having a thickness in a range of from about 2000 angstroms to about 5000 angstroms.6. The method of claim 1, further including the step of:etching down said drain and source contact structures by between about 300 angstroms to about 500 angstroms after said step F. 7. The method of claim 1, further including the step of:forming a drain silicide within said drain contact structure and forming a source silicide within said source contact structure using a temperature in a range of from about 400[deg.] Celsius to about 500[deg.] Celsius after said step G. 8. The method of claim 7, wherein said drain and source silicide are comprised of nickel silicide (NiSi).9. The method of claim 1, wherein said extension dopant and said contact dopant are an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor).10. The method of claim 1, wherein said extension dopant and said contact dopant are a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor).11. The method of claim 1, wherein said gate structure is comprised of metal, and wherein said gate dielectric is comprised of a metal oxide.12. A method for fabricating a MOSFET (metal oxide semiconductor field effect transistor) within an active device area of a semiconductor substrate, the method including the sequential steps of:A. forming a gate structure comprised of metal on a gate dielectric comprised of a metal oxide on said active device area of said semiconductor substrate, wherein said gate dielectric has a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2); B. implanting an amorphization dopant and an extension dopant into exposed regions of said active device area of said semiconductor substrate; wherein said amorphization dopant is comprised of one of germanium(Ge), silicon (Si), antimony (Sb), or xenon (Xe); C. forming a first spacer at a first sidewall of said gate structure and said gate dielectric, and forming a second spacer at a second sidewall of said gate structure and said gate dielectric; wherein said first spacer is disposed over a drain extension junction extending down to an extension depth within said active device area of said semiconductor substrate and having said amorphization dopant and said extension dopant implanted therein; wherein said second spacer is disposed over a source extension junction extending down to said extension depth within said active device area of said semiconductor substrate and having said amorphization dopant and said extension dopant implanted therein; and wherein said first and second spacers are comprised of silicon dioxide (SiO2) having a width in a range of from about 50 angstroms to about 100 angstroms, and wherein said silicon dioxide (Si02) of said first and second spacers are formed in an oxide deposition process using a temperature of less than about 400[deg.] Celsius; D. 
etching down any exposed regions of said active device area of said semiconductor substrate beyond said extension depth; wherein said drain extension junction remains disposed under said first spacer; wherein said source extension junction remains disposed under said second spacer; and wherein said extension depth is less than about 200 angstroms, and wherein exposed regions of said active device area of said semiconductor substrate are etched down by between about 200 angstroms to about 400 angstroms; E. depositing a layer of amorphous semiconductor material to cover said first and second spacers and said gate structure and on any exposed regions of said semiconductor substrate; wherein said layer of amorphous semiconductor material is formed to be doped with a contact dopant in an in-situ deposition process using a temperature of less than about 500[deg.] Celsius; and wherein said layer of amorphous semiconductor material is comprised of amorphous silicon having a thickness in a range of from about 2000 angstroms to about 5000 angstroms; F. polishing down said amorphous semiconductor material until top surfaces of said gate structure and said first and second spacers are exposed such that said top surfaces of said gate structure and said first and second spacers are level with a top surface of said amorphous semiconductor material; wherein said amorphous semiconductor material remaining to said first sidewall of said gate structure forms an elevated drain contact structure; and wherein said amorphous semiconductor material remaining to said second sidewall of said gate structure forms an elevated source contact structure; G. etching down said drain and source contact structures by between about 300 angstroms to about 500 angstroms; H. performing a thermal anneal using a temperature in a range of from about 500[deg.] Celsius to about 600[deg.] Celsius to activate said extension dopant within said drain and source extension junctions and to activate said contact dopant within said drain and source contact structures; and I. forming a drain silicide within said drain contact structure and forming a source silicide within said source contact structure using a temperature in a range of from about 400[deg.] Celsius to about 500[deg.] Celsius; wherein said drain and source silicide are comprised of nickel silicide (NiSi); wherein said extension dopant and said contact dopant are an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor); and wherein said extension dopant and said contact dopant are a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor). |
TECHNICAL FIELDThe present invention relates generally to fabrication of field effect transistors having scaled-down dimensions, and more particularly, to a process for forming elevated drain and source contact structures using relatively low temperatures to preserve the gate dielectric having a high dielectric constant for the field effect transistor having scaled down dimensions of tens of nanometers.BACKGROUND OF THE INVENTIONA long-recognized important objective in the constant advancement of monolithic IC (Integrated Circuit) technology is the scaling-down of IC dimensions. Such scaling-down of IC dimensions reduces area capacitance and is critical to obtaining higher speed performance of integrated circuits. Moreover, reducing the area of an IC die leads to higher yield in IC fabrication. Such advantages are a driving force to constantly scale down IC dimensions.Referring to FIG. 1, a common component of a monolithic IC is a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) 100 which is fabricated within a semiconductor substrate 102. The scaled down MOSFET 100 having submicron or nanometer dimensions includes a drain extension 104 and a source extension 106 formed within an active device area 126 of the semiconductor substrate 102. The drain extension 104 and the source extension 106 are shallow junctions to minimize short-channel effects in the MOSFET 100 having submicron or nanometer dimensions, as known to one of ordinary skill in the art of integrated circuit fabrication.The MOSFET 100 further includes a drain contact junction 108 with a drain silicide 110 for providing contact to the drain of the MOSFET 100 and includes a source contact junction 112 with a source silicide 114 for providing contact to the source of the MOSFET 100. The drain contact junction 108 and the source contact junction 112 are fabricated as deeper junctions such that a relatively large size of the drain silicide 110 and the source silicide 114 respectively may be fabricated therein to provide low resistance contact to the drain and the source respectively of the MOSFET 100.The MOSFET 100 further includes a gate dielectric 116 and a gate electrode 118 which may be comprised of polysilicon. A gate silicide 120 is formed on the polysilicon gate electrode 118 for providing contact to the gate of the MOSFET 100. The MOSFET 100 is electrically isolated from other integrated circuit devices within the semiconductor substrate 102 by shallow trench isolation structures 121. The shallow trench isolation structures 121 define the active device area 126, within the semiconductor substrate 102, where the MOSFET 100 is fabricated therein.The MOSFET 100 also includes a spacer 122 disposed on the sidewalls of the gate electrode 118 and the gate dielectric 116. When the spacer 122 is comprised of silicon nitride (Si3N4), then a spacer liner oxide 124 is deposited as a buffer layer between the spacer 122 and the sidewalls of the gate electrode 118 and the gate dielectric 116.As the dimensions of the MOSFET 100 are further scaled down, the thickness of the gate dielectric 116 is also scaled down. However, with a thinner gate dielectric 116, more charge carriers tunnel through the thin gate dielectric 116 to result in undesired leakage current at the gate of the MOSFET 100, as known to one of ordinary skill in the art of integrated circuit fabrication. 
To minimize such undesired leakage current, a dielectric material having a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2) (i.e., a high-K dielectric material) is used for the gate dielectric 116, as known to one of ordinary skill in the art of integrated circuit fabrication. The gate dielectric 116 has a higher thickness when comprised of such a high-K dielectric material than when comprised of silicon dioxide (SiO2) for the same drive current of the MOSFET 100 to minimize undesired tunneling current through the gate dielectric 116.In addition, as the dimensions of the MOSFET 100 are further scaled down, the thickness of the drain and source silicides 110 and 114 is also scaled down as the depth of the drain and source contact junctions 108 and 112 is scaled down. However, thinner drain and source silicides 110 and 114 with the lower volume of silicide result in higher resistance at the drain and source of the MOSFET 100.Referring to FIG. 2, to increase the volume of silicide, an elevated drain structure 132 is formed to be coupled to the drain extension junction 104, and an elevated source structure 134 is formed to be coupled to the source extension junction 106, as known to one of ordinary skill in the art of integrated circuit fabrication. Referring to FIG. 3, a drain silicide 142 is formed within the elevated drain structure 132, and a source silicide 144 is formed within the elevated source structure 134. Because the elevated drain and source structures 132 and 134 have higher thickness and are not limited to the depth of the drain and source contact junctions 108 and 112, thicker drain and source silicides 142 and 144 may be formed with the elevated drain and source structures 132 and 134 to minimize the resistance at the drain and source of the MOSFET 100.In the prior art, the elevated drain and source structures 132 and 134 are comprised of silicon deposited by an epitaxy deposition process using a relatively high temperature in the range of from about 1100[deg.] Celsius to about 1200[deg.] Celsius. In addition, referring to FIG. 2, a contact dopant is implanted into the elevated drain and source structures 132 and 134 and activated in a thermal anneal process using a relatively high temperature in the range of from about 800[deg.] Celsius to about 1000[deg.] Celsius.However, to minimize charge carrier tunneling through the gate dielectric 116, a high-K dielectric material having a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2) is used for the gate dielectric 116. When the semiconductor substrate 102 is comprised of silicon, such high-K dielectric material, such as metal oxide for example, may react with the silicon semiconductor substrate 102 at any temperature greater than about 750[deg.] Celsius to degrade the gate dielectric 116.Nevertheless, elevated drain and source structures are desired for increasing the volume of the drain and source silicides while a gate dielectric comprised of a high-K dielectric material is also desired for minimizing charge carrier tunneling through the gate dielectric, as the dimensions of the MOSFET are further scaled down. Thus, a mechanism is desired for fabricating drain and source silicides with elevated drain and source contact structures using temperatures below about 750[deg.] 
Celsius to preserve the integrity of the gate dielectric comprised of a high-K dielectric material.SUMMARY OF THE INVENTIONAccordingly, in a general aspect of the present invention, elevated drain and source contact structures are formed with deposition of an in-situ doped amorphous semiconductor material using a temperature of less than about 500[deg.] Celsius. In addition, an amorphization dopant is implanted into the drain and source extension junctions such that extension dopant within the drain and source extension junctions and contact dopant within the elevated drain and source contact structures are activated using a temperature of less than about 600[deg.] Celsius.In one embodiment of the present invention, a field effect transistor is fabricated within an active device area of a semiconductor substrate. A gate structure is formed on a gate dielectric on the active device area of the semiconductor substrate, and the gate dielectric has a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2). An amorphization dopant and an extension dopant are implanted into exposed regions of the active device area of the semiconductor substrate.A first spacer is formed at a first sidewall of the gate structure and the gate dielectric, and a second spacer is formed at a second sidewall of the gate structure and the gate dielectric. The first spacer is disposed over a drain extension junction extending down to an extension depth within the active device area of the semiconductor substrate and having the amorphization dopant and the extension dopant implanted therein. The second spacer is disposed over a source extension junction extending down to the extension depth within the active device area of the semiconductor substrate and having the amorphization dopant and the extension dopant implanted therein.Any exposed regions of the active device area of the semiconductor substrate are etched down beyond the extension depth. The drain extension junction remains disposed under the first spacer, and the source extension junction remains disposed under the second spacer. A layer of amorphous semiconductor material is deposited to cover the first and second spacers and the gate structure and on any exposed regions of the semiconductor substrate. The layer of amorphous semiconductor material is formed to be doped with a contact dopant in an in-situ deposition process using a temperature less than about 500[deg.] Celsius.The amorphous semiconductor material is polished down until top surfaces of the gate structure and the first and second spacers are exposed such that the top surfaces of the gate structure and the first and second spacers are level with a top surface of the amorphous semiconductor material. The amorphous semiconductor material remaining to the first sidewall of the gate structure forms an elevated drain contact structure, and the amorphous semiconductor material remaining to the second sidewall of the gate structure forms an elevated source contact structure. A thermal anneal is performed using a temperature less than about 600[deg.] Celsius to activate the extension dopant within the drain and source extension junctions and to activate the contact dopant within the drain and source contact structures.In another aspect of the present invention, a drain silicide is formed within the drain contact structure, and a source silicide is formed within the source contact structure using a temperature in a range of from about 400[deg.] Celsius to about 500[deg.] 
Celsius. Such drain silicide and source silicide may be comprised of nickel silicide (NiSi) for example.In this manner, temperatures less than about 600[deg.] Celsius are used for formation of the structures of the field effect transistor such as the drain and source extension junctions, the drain and source elevated contact structures, and the drain and source silicides. With such low temperatures, the gate dielectric comprised of a high-K dielectric material does not react with the semiconductor substrate to preserve the integrity of such a gate dielectric. In addition, thicker silicides may be formed with the elevated drain and source contact structures to minimize resistance at the drain and source of the field effect transistor.These and other features and advantages of the present invention will be better understood by considering the following detailed description of the invention which is presented with the attached drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 shows a cross-sectional view of a conventional MOSFET (Metal Oxide Semiconductor Field Effect Transistor) without elevated drain and source contact structures;FIG. 2 shows a cross-sectional view of a conventional MOSFET (Metal Oxide Semiconductor Field Effect Transistor) with elevated drain and source contact structures formed with an epitaxy deposition process using relatively high temperatures, according to the prior art;FIG. 3 shows a cross sectional view of drain and source silicides formed in the elevated drain and source contact structures of FIG. 2, according to the prior art;FIGS. 4, 5, 6, 7, 8, 9, 10, and 11 show cross-sectional views for illustrating the steps for fabricating a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) with drain and source silicides formed within elevated drain and source contact structures using relatively low temperatures to prevent reaction of a gate dielectric comprised of a high-K dielectric material with the semiconductor substrate to preserve the integrity of such a gate dielectric, according to one embodiment of the present invention.The figures referred to herein are drawn for clarity of illustration and are not necessarily drawn to scale. Elements having the same reference number in FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11 refer to elements having similar structure and function.DETAILED DESCRIPTIONReferring to FIG. 4, a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) 200 is fabricated within an active device area 202 of a semiconductor substrate 204 defined by shallow trench isolation structures 206. The semiconductor substrate 204 is comprised of silicon in one embodiment of the present invention. Processes for formation of shallow trench isolation structures for electrically isolating integrated circuit devices within a semiconductor substrate are known to one of ordinary skill in the art of integrated circuit fabrication.A gate dielectric 207 and a gate structure 208 are formed on the active device area 202 of the semiconductor substrate 204. The gate dielectric 207 is comprised of a high-K dielectric material, such as a metal oxide for example, having a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2) for minimizing undesired tunneling current through the gate dielectric 207. The gate structure 208 is comprised of metal, such as copper or tungsten for example, in one embodiment of the present invention. 
Processes for formation of such gate dielectric 207 and gate structure 208 are known to one of ordinary skill in the art of integrated circuit fabrication.A drain extension junction 210 and a source extension junction 212 are formed by implantation of an amorphization dopant into exposed regions of the active device area 202 of the semiconductor substrate 204. The drain and source extension junctions 210 and 212 extend down to an extension depth that is less than about 200 angstroms for the MOSFET 200 having scaled down dimensions of tens of nanometers. When the semiconductor substrate is comprised of silicon, the amorphization dopant is comprised of one of germanium (Ge), silicon (Si), antimony (Sb), or xenon (Xe). Processes for implanting such amorphization dopant are known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 5, an extension dopant is also implanted into the drain and source extension junctions 210 and 212. The parameters of the process for implanting the extension dopant are adjusted such that the extension dopant is implanted into the drain and source extension junctions 210 and 212 formed by the former implantation of the amorphization dopant in FIG. 4. The extension dopant is an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor) and is a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor). Such processes for implanting the extension dopant are known to one of ordinary skill in the art of integrated circuit fabrication.The amorphization dopant implanted into the drain and source extension junctions 210 and 212 renders the semiconductor substrate to have an amorphous crystal structure within the drain and source extension junctions 210 and 212, as known to one of ordinary skill in the art of integrated circuit fabrication. A dopant within an amorphous semiconductor region may be activated at a lower temperature, as known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 6, a first spacer 214 is formed at a first sidewall of the gate structure 208 and the gate dielectric 207, and a second spacer 216 is formed at a second sidewall of the gate structure 208 and the gate dielectric 207. The first and second spacers 214 and 216 are comprised of silicon dioxide (SiO2) having a width in a range of from about 50 angstroms to about 100 angstroms, according to one embodiment of the present invention. The first and second spacers 214 and 216 are formed in an oxide deposition process using a temperature of less than about 400[deg.] Celsius. Such a low temperature for formation of the first and second spacers 214 and 216 avoids recrystallization of the amorphous silicon regions and activation of the extension dopant within the drain and source extension junctions 210 and 212. Such oxide deposition processes are known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 7, any exposed region of the active device area 202 of the semiconductor substrate 204 is etched down beyond the extension depth of the drain and source extension junctions 210 and 212. 
For example, when the extension depth of the drain and source extension junctions 210 and 212 is less than about 200 angstroms, any exposed region of the active device area 202 of the semiconductor substrate 204 is etched down by between about 200 angstroms to about 400 angstroms, according to one embodiment of the present invention. Processes, such as an anisotropic plasma etching process, for etching down the exposed regions of the semiconductor substrate 204 are known to one of ordinary skill in the art of integrated circuit fabrication.After etching down the exposed regions of the active device area 202 of the semiconductor substrate in FIG. 7, the drain extension junction 210 remains under the first spacer 214, and the source extension junction 212 remains under the second spacer 216. Referring to FIG. 8, a layer of amorphous semiconductor material 220 is blanket deposited to cover the first and second spacers 214 and 216 and the gate structure 208.In one embodiment of the present invention, the layer of semiconductor material 220 is comprised of amorphous silicon having a relatively high thickness of from about 2000 Å (angstroms) to about 5000 Å (angstroms). Such a layer of semiconductor material 220 having the relatively high thickness extends above the top surfaces of the first and second spacers 214 and 216 and the gate structure 208. According to an embodiment of the present invention, the amorphous silicon 220 is doped with a contact dopant in an in-situ doped amorphous silicon deposition process using a temperature less than about 500[deg.] Celsius. The contact dopant is an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor) and is a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor).Chemical vapor deposition processes for forming the layer of in-situ doped amorphous silicon 220 are known to one of ordinary skill in the art of integrated circuit fabrication. The relatively low temperature of less than about 500[deg.] Celsius used for deposition of the layer of amorphous silicon 220 avoids recrystallization of the amorphous silicon regions and activation of the extension dopant within the drain and source extension junctions210 and 212 and the contact dopant within the layer of amorphous silicon 220.Referring to FIG. 9, the layer of semiconductor material 220 is polished down until the first and second spacers 214 and 216 and the gate structure 208 are exposed. Thus, the top surfaces of the first and second spacers 214 and 216 and the gate structure 208 are level with the top surfaces of the layer of semiconductor material 220. Processes, such as CMP (chemical mechanical polishing) processes, for polishing down the layer of semiconductor material 220 are known to one of ordinary skill in the art of integrated circuit fabrication.Further referring to FIG. 9, the amorphous silicon material 220 remaining to the first sidewall of the gate structure 208 forms an elevated drain structure 222, and the amorphous silicon material 220 remaining to the second sidewall of the gate structure 222 forms an elevated source structure 224. Because the semiconductor substrate 204 has been etched down as illustrated in FIG. 7, the drain extension junction 210 contacts the drain contact structure 222, and the source extension junction 212 contact the source contact structure 224.Referring to FIG. 
10, the drain contact structure 222 and the source contact structure 224 are etched down by between about 300 angstroms to about 500 angstroms after the CMP (chemical mechanical polishing) process of FIG. 9. With such an etch, the top surfaces of the drain contact structure 222 and the source contact structure 224 are below the top surface of the gate structure 208. Such lower top surfaces of the drain contact structure 222 and the source contact structure 224 ensure that silicides formed from the top surfaces of the drain contact structure 222 and the source contact structure 224 do not undesirably bridge with the metal gate structure 208. Processes, such as an anisotropic plasma etching process, for etching down the drain contact structure 222 and the source contact structure 224 are known to one of ordinary skill in the art of integrated circuit fabrication.Further referring to FIG. 10, a thermal anneal using a relatively low temperature in a range of from about 500[deg.] Celsius to about 600[deg.] Celsius is performed to activate the extension dopant within the drain and source extension junctions 210 and 212 and to activate the contact dopant within the drain and source contact structures 222 and 224. Such a low temperature may be used to activate the extension dopant within the drain and source extension junctions 210 and 212 because the drain and source extension junctions 210 and 212 have been rendered to be amorphous silicon regions with implantation of the amorphization dopant. In addition, such a low temperature may be used to activate the contact dopant within the drain and source contact structures 222 and 224 because the drain and source contact structures are comprised of in-situ doped amorphous silicon. Thermal anneal processes are known to one of ordinary skill in the art of integrated circuit fabrication. The gate structure 208 is comprised of metal in one embodiment of the present invention such that a dopant is not activated within the gate structure 208 in a thermal anneal process.Referring to FIG. 11, a drain silicide 232 is formed with the elevated drain contact structure 222, and a source silicide 234 is formed with the elevated source contact structure 224. Preferably, a silicidation process using a relatively low temperature in a range of from about 400[deg.] Celsius to about 500[deg.] Celsius forms the drain and source silicides 232 and 234 which may be comprised of nickel silicide (NiSi) in one embodiment of the present invention. Such silicidation processes are known to one of ordinary skill in the art of integrated circuit fabrication.In this manner, temperatures less than about 600[deg.] Celsius are used for formation of the structures of the MOSFET 200 such as the drain and source extension junctions 210 and 212, the drain and source elevated contact structures 222 and 224, and the drain and source silicides 232 and 234. With such low temperatures, the gate dielectric 207 comprised of a high-K dielectric material does not react with the semiconductor substrate 204 to preserve the integrity of such a gate dielectric 207. In addition, thicker drain and source silicides 232 and 234 may be formed with the elevated drain and source contact structures 222 and 224 to minimize resistance at the drain and source of the MOSFET 200.The foregoing is by way of example only and is not intended to be limiting. For example, any specified material or any specified dimension of any structure described herein is by way of example only. 
In addition, as will be understood by those skilled in the art, the structures described herein may be made or used in the same way regardless of their position and orientation. Accordingly, it is to be understood that terms and phrases such as "top," "side," and "on" as used herein refer to relative location and orientation of various portions of the structures with respect to one another, and are not intended to suggest that any particular absolute orientation with respect to external objects is necessary or required. The present invention is limited only as defined in the following claims and equivalents thereof. |
A variable attenuator can be used with high-voltage radio-frequency signals. The attenuator can provide wide dynamic range with little loss at the lowest attenuation level. The attenuator may be implemented in digital integrated circuit processes and occupies small integrated circuit area. Additionally, the use of circuit elements external to the SoC may be reduced. The attenuator uses multiple attenuator cells (100, 110, 120, 130) connected in parallel to an RF input (RFp, RFn) and RF output (OUTp, OUTn). Each attenuator cell uses capacitive dividers comprising a coupling capacitor (101, 102, 111, 112, 121, 122, 131, 132) and a dividing capacitor (103, 104, 113, 114, 123, 124, 133, 134) connected by a switch (107, 108, 117, 118, 127, 128, 137, 138) to ground. The coupling capacitor and the dividing capacitor are laid out in the same integrated circuit area. The capacitors are also laid out so that the RF input shields the RF output from ground to avoid parasitic capacitance on the RF output. |
1.A high voltage radio frequency (RF) attenuator for selectively attenuating an RF input to produce an RF output, the attenuator comprising:Attenuator unit, includedA coupling capacitor having a first terminal connected to the RF input and a second terminal connected to the RF output, andA voltage dividing capacitor having a first terminal connected to the RF output and a second terminal connected to a switch to a ground reference,Wherein the coupling capacitor and the voltage dividing capacitor are formed in the same integrated circuit area.2.The attenuator of claim 1, wherein the RF input is arranged to shield the RF output from the ground reference.3.The attenuator of claim 1, wherein the attenuator unit further comprises a metal plate connected to the RF input,Wherein the voltage dividing capacitor is formed of a metal-insulator-metal capacitor having a plurality of metal strips,Wherein the metal plate is between the metal-insulator-metal capacitor and the integrated circuit substrate, andWherein a terminal of the coupling capacitor is formed by a portion of the metal plate and the metal-insulator-metal capacitor connected to the RF output.4.The attenuator of claim 3, wherein the metal plate extends beyond the metal-insulator-metal capacitor.5.The attenuator of claim 3, wherein the metal plate is formed in the first metal layer.6.The attenuator of claim 5, wherein the plurality of metal strips of the metal-insulator-metal capacitor comprise a third metal layer and a fourth metal layer.7.The attenuator of claim 5, wherein the plurality of metal strips of the metal-insulator-metal capacitor do not include a second metal layer.8.The attenuator of claim 1, further comprising a clamp circuit connected to the RF output.9.The attenuator of claim 1, wherein the switch is an n-channel transistor.10.The attenuator of claim 9, wherein the n-channel transistor is a low-leakage transistor.11.The attenuator of claim 1, further comprising a second attenuator unit, said second attenuator unit comprisingA second coupling capacitor having a first terminal connected to the RF input and a second terminal connected to the RF output, andA second voltage dividing capacitor having a first terminal connected to the RF output and a second terminal connected to a second switch to the ground reference,Wherein the second coupling capacitor and the second voltage dividing capacitor are formed in the same integrated circuit area.12.A high-voltage radio frequency attenuator for selectively attenuating an RF input to produce an RF output including a positive RF input and a negative RF input, the RF output including a positive RF output and a negative RF output, the attenuator include:Attenuator unit, includedA positive side capacitive voltage divider including a coupling capacitor having a first terminal connected to the positive RF input and a second terminal connected to the positive RF output and a voltage divider capacitor having a first terminal connected to the positive RF input, A voltage dividing capacitor having a first terminal connected to the positive RF output, a second terminal of the voltage dividing capacitor being connected to a first switch to a ground reference, wherein the coupling capacitor and the voltage dividing capacitor are formed in the same integrated circuit area In, as wellA negative-side capacitive voltage divider including a coupling capacitor having a first terminal connected to the negative RF input and a second terminal connected to the negative RF output, and a voltage dividing 
capacitor, A voltage divider capacitor is connected to the first terminal of the negative RF output, a second terminal of the voltage dividing capacitor is connected to a second switch to the ground reference, wherein the coupling capacitor and the voltage dividing capacitor are formed on the same integrated circuit Area.13.The attenuator of claim 12 wherein the positive RF input is arranged to shield the positive RF output from the ground reference and the negative RF input is arranged to output the negative RF output And the ground reference mask.14.The attenuator of claim 12,Wherein the positive side capacitive voltage divider of the attenuator cell further comprisesA metal plate connected to the positive RF input,Wherein the voltage dividing capacitor is formed of a metal-insulator-metal capacitor having a plurality of metal strips,Wherein the metal plate is between the metal-insulator-metal capacitor and the integrated circuit substrate, andWherein a terminal of the coupling capacitor is formed by a portion of the metal plate and the metal-insulator-metal capacitor connected to the positive RF output, andWherein the negative-side capacitive voltage divider of the attenuator cell further comprisesA metal plate connected to the negative RF input,Wherein the voltage dividing capacitor is formed of a metal-insulator-metal capacitor having a plurality of metal strips,Wherein the metal plate is between the metal-insulator-metal capacitor and the integrated circuit substrate, andWherein the terminals of the coupling capacitor are formed by portions of the metal plate and the metal-insulator-metal capacitor connected to the negative RF output.15.The attenuator according to claim 14, wherein the metal plate of the positive side capacitive voltage divider of said attenuator unit extends beyond the metal-insulator-positive side of the positive side capacitive voltage divider of said attenuator unit, Metal Capacitors, as wellWherein the metal plate of the negative-side capacitive voltage divider of the attenuator cell extends beyond the metal-insulator-metal capacitor of the negative-sided capacitive voltage divider of the attenuator cell.16.The attenuator according to claim 14, wherein the first switch and the second switch are provided in a switching area that is provided at a positive side capacitive portion of the attenuator unit Between the metal plate of the voltage regulator and the metal plate of the negative-side capacitive voltage divider of the attenuator unit.17.The attenuator of claim 14, wherein the metal plate is formed in the first metal layer.18.The attenuator of claim 17, wherein the plurality of metal strips of the metal-insulator-metal capacitor comprise a third metal layer and a fourth metal layer.19.The attenuator of claim 17, wherein the plurality of metal strips of the metal-insulator-metal capacitor do not include a second metal layer.20.The attenuator of claim 12, further comprising a clamp circuit connected to the positive RF output and a clamp circuit connected to the negative RF output.21.The attenuator of claim 12, wherein the first switch and the second switch are n-channel transistors.22.The attenuator of claim 21, wherein the n-channel transistor is a low-leakage transistor.23.The attenuator of claim 12, wherein the attenuator unit further comprises a third switch coupled between the second terminal of the voltage dividing capacitor of the positive-side capacitive voltage divider and the second terminal of the voltage- The second terminal of the voltage dividing 
capacitor of the negative-side capacitive voltage divider.24.The attenuator of claim 12, further comprising a second attenuator unit, said second attenuator unit comprisingA second positive side capacitive voltage divider comprising a coupling capacitor having a first terminal connected to the positive RF input and a second terminal connected to the positive RF , A voltage dividing capacitor having a first terminal connected to the positive RF output, a second terminal of the voltage dividing capacitor being connected to a third switch to ground, wherein the coupling capacitor and the voltage dividing capacitor are formed at the same IC area, as wellA second negative-side capacitive voltage divider including a coupling capacitor having a first terminal connected to the negative RF input and a second terminal connected to the negative RF input, , A voltage dividing capacitor having a first terminal connected to the negative RF output, a second terminal of the voltage dividing capacitor being connected to a fourth switch to ground, wherein the coupling capacitor and the voltage dividing capacitor are formed at the same Integrated circuit area.25.A method for variably attenuating a radio frequency (RF) input, the method comprising:Coupling the RF input to an RF output using a plurality of coupling capacitors; andThe terminals of the plurality of voltage dividing capacitors are conditionally connected to ground,Wherein each coupling capacitor of the plurality of coupling capacitors is formed in the same integrated circuit area as one of the plurality of voltage dividing capacitors.26.The method of claim 25, wherein the RF input is arranged to shield the RF output from a ground reference.27.The method of claim 25, wherein each of the plurality of voltage dividing capacitors is a metal-insulator-metal capacitor having a plurality of metal strips.28.The method of claim 27, wherein each of the coupling capacitors is formed from a portion of a metal plate and the metal-insulator-metal capacitor connected to the RF output.29.The method of claim 28, wherein the metal plate is between the metal-insulator-metal capacitor and the integrated circuit substrate.30.A device comprising:A coupling capacitor arrangement having a first terminal connected to the RF input and a second terminal connected to the RF output, andA voltage dividing capacitor arrangement having a first terminal connected to the RF output and a second terminal connected to a switch to a ground reference,Wherein the coupling capacitor means and the voltage dividing capacitor means are formed in the same integrated circuit area.31.The apparatus of claim 30, wherein the RF input is arranged to shield the RF output from the ground reference.32.The apparatus of claim 30, further comprising a metal plate coupled to said RF input,Wherein the voltage dividing capacitor device is formed of a metal-insulator-metal capacitor having a plurality of metal strips,Wherein the metal plate is between the metal-insulator-metal capacitor and the integrated circuit substrate, andWherein terminals of the coupling capacitor device are formed by portions of the metal plate and the metal-insulator-metal capacitor connected to the RF output.33.The apparatus of claim 32, wherein said metal plate extends beyond said metal-insulator-metal capacitor.34.The apparatus of claim 32, wherein the metal plate is formed in the first metal layer.35.The apparatus of claim 34, wherein the plurality of metal strips of the metal-insulator-metal capacitor comprise a third metal 
layer and a fourth metal layer.36.The apparatus of claim 34, wherein the plurality of metal strips of the metal-insulator-metal capacitor do not include a second metal layer.37.The apparatus of claim 30, further comprising a clamping circuit connected to said RF output.38.The apparatus of claim 30, wherein the switch is an n-channel transistor.39.The apparatus of claim 38, wherein the n-channel transistor is a low-leakage transistor. |
Variable high-pressure RF attenuatorbackgroundfieldThe present invention relates to integrated circuits, and more particularly to high voltage radio frequency attenuators.backgroundVariable attenuators can be used in radio frequency receivers to attenuate these signals before large received signals reach sensitive receiver devices. The received signal from the antenna can be so large that the received signal will compromise some receiver circuitry. For example, the signal from the antenna in a Near Field Communication (NFC) system can be as large as 100 volts.FIG. 9 is a functional block diagram of a radio frequency receiver that illustrates the use of a high-voltage radio frequency attenuator 1011. The radio frequency attenuator 1011 receives a radio frequency (RF) signal from the antenna 1001 and selectively attenuates the RF signal. The attenuated RF signal is provided to envelope detector 1021. The envelope detector 1021 supplies its output to an analog-to-digital converter (ADC) 1031. The output of the ADC 1031 is processed by the digital signal processor 1041.Implementing a radio frequency receiver (eg, for NFC) in a system-on-chip (SoC) integrated circuit is difficult. For example, mashing high-voltage (eg, 100V differential peak-to-peak) RF signals from an antenna to a receiver circuit implemented in a submicron SoC is challenging as SoC fabrication technology was developed for low voltages (eg, 1V). Some existing NFC receivers have, for example, been attenuated using capacitors and other circuit elements external to the SoC to handle high voltages.In addition, the RF signal may have a large dynamic range (eg, 55 dB). Some existing NFC receivers have used variable attenuators, which have significant attenuation in the lowest attenuation setting. This results in a weak signal that degrades the performance of the receiver. Therefore, receiver performance can be improved if the attenuator delivers the minimum RF signal with minimum attenuation.OverviewIn one aspect, there is provided a high voltage radio frequency (RF) attenuator for selectively attenuating an RF input to produce an RF output, the attenuator comprising: an attenuator unit including a coupling capacitor and a voltage divider A capacitor having a first terminal connected to the RF input and a second terminal connected to the RF output, the voltage dividing capacitor having a first terminal connected to the RF output and a second terminal connected to the ground The second terminal of the switch of the reference, wherein the coupling capacitor and the voltage dividing capacitor are formed in the same integrated circuit area.In one aspect, there is provided a high voltage radio frequency attenuator for selectively attenuating an RF input to produce an RF output including a positive RF input and a negative RF input, the RF output including a positive RF output and a negative RF Output. 
The attenuator includes an attenuator unit including a positive side capacitive voltage divider including a coupling capacitor and a voltage dividing capacitor, the coupling capacitor having a first terminal connected to the A positive RF input, and a second terminal connected to the positive RF output, the voltage dividing capacitor having a first terminal connected to the positive RF output, a second terminal of the voltage dividing capacitor being connected to a A first switch of a ground reference, wherein the coupling capacitor and the voltage dividing capacitor are formed in the same integrated circuit region; and a negative-side capacitive voltage divider including a coupling capacitor and a A voltage dividing capacitor having a first terminal connected to the negative RF input and a second terminal connected to the negative RF output, the voltage dividing capacitor having a first terminal connected to the negative RF output The second terminal of the voltage dividing capacitor is connected to a second switch to the ground reference, wherein the coupling capacitor and the voltage dividing capacitor are formed in the same integrated circuit area.In one aspect, a method for variably attenuating a radio frequency (RF) input is provided. The method comprising: coupling the RF input to an RF output using a plurality of coupling capacitors; and conditionally connecting terminals of a plurality of voltage dividing capacitors to ground, wherein each of the plurality of coupling capacitors couples a capacitor Formed in the same integrated circuit area as one of the plurality of voltage dividing capacitors.In one aspect, an apparatus is provided that includes means for coupling a capacitor arrangement having a first terminal connected to an RF input and a second terminal connected to an RF output, and a partial voltage A capacitor arrangement having a first terminal connected to the RF output and a second terminal connected to a switch to a ground reference, wherein the coupling capacitor arrangement and the voltage dividing capacitor arrangement are formed in the same In the integrated circuit area.Other features and advantages of the present invention will become apparent from the following description of the aspects of the present invention by way of example.Brief Description of the DrawingsThe details of the invention, both as to its structure and its operation, may be gathered by studying the drawings, wherein like reference numerals refer to similar parts and in which:Figure 1 is a schematic diagram of an attenuator according to embodiments disclosed herein;Figure 2-4 is a schematic diagram illustrating the operation of the attenuator of Figure 1;Figure 5 is a layout of an attenuator unit according to embodiments disclosed herein;Figure 6 is a cross-section of the portion indicated by line 6-6 in the attenuator cell layout of Figure 5;Figure 7 is a schematic diagram of a circuit model of an attenuator unit of the attenuator of Figure 1;Figure 8 is a flowchart of a process for variably attenuating RF signals in accordance with embodiments disclosed herein;Figure 9 is a functional block diagram of a radio frequency receiver illustrating the use of a high-voltage radio frequency attenuator.A detailed descriptionThe detailed description set forth below in connection with the appended drawings is intended as a description of the various configurations and is not intended to represent the only configuration in which the concepts described herein may be practiced. 
The detailed description includes specific details in order to provide a thorough understanding of various concepts. However, it will be apparent to one skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in simplified form to avoid obscuring such concepts.FIG. 1 is a schematic diagram of an attenuator according to an embodiment disclosed herein. The attenuator can be implemented, for example, in a complementary metal oxide semiconductor (CMOS) system on chip (SoC) integrated circuit (IC). The attenuator can be used in a radio frequency receiver such as the near field communication receiver of FIG. 9.The attenuator of FIG. 1 receives differential radio frequency (RF) inputs (positive input RFp and negative input RFn) and generates differential RF outputs (positive output OUTp and negative output OUTn). The attenuation between RF input and RF output is set by the enable controls EN0, EN1, EN2, EN3.The attenuator includes four attenuator units 100, 110, 120, 130. The RF input and the RF output are connected in parallel to each attenuator unit 100, 110, 120, 130. The attenuator units 100, 110, 120, 130 are individually enabled. The first attenuator unit 100 is enabled by the first enable control EN0; the second attenuator unit 110 is enabled by the second enable control EN1; the third attenuator unit 120 is enabled by the third enable control EN2; and the fourth attenuator Unit 130 is enabled by fourth enable control EN3.Each attenuator cell 100, 110, 120, 130 includes a switchable capacitive voltage divider. The first attenuator unit 100 includes a positive side capacitive voltage divider including a capacitor 101 and a capacitor 103. Capacitor 101 may be referred to as a coupling capacitor; capacitor 103 may be referred to as a voltage dividing capacitor. The first terminal of the capacitor 101 is connected to the positive RF input and the second terminal of the capacitor 101 is connected to the positive RF output. The first terminal of the capacitor 103 is also connected to the positive RF output and the second terminal of the capacitor 103 is connected to the n-channel transistor 107. The drain of the n-channel transistor 107 is connected to the second terminal of the capacitor 103, the gate of the n-channel transistor 107 is connected to the first enable control EN0, and the source of the n-channel transistor 107 is connected to the ground reference Can be called "ground"). The n-channel transistor 107 operates as a switch and couples the capacitor 103 to a ground reference or opens (floats) the capacitor 103 depending on the first enable control. Capacitor 103 is typically much larger (eg, 100 times larger) than capacitor 101.The first attenuator cell 100 also includes a negative side capacitive voltage divider similar to the positive side capacitive voltage divider. The negative side capacitive voltage divider is connected to the negative RF input and the negative RF output. The negative-side capacitive voltage divider includes a capacitor 102, a capacitor 104, and an n-channel transistor 108. In the embodiment of FIG. 1, the first attenuator cell 100 includes an n-channel transistor 109 coupled at the drain of the n-channel transistor 107 and the n-channel transistor 108 at the positive side capacitive portion Between the voltage divider and the negative side of the capacitor. The gate of the n-channel transistor 109 is connected to the first enable control. 
For a given differential mode on-resistance of the switch, n-channel transistor 109 may, for example, reduce parasitic capacitance at the drains of n-channel transistor 107 and n-channel transistor 108. The reduced capacitance increases the attenuator's dynamic range.The n-channel transistor 107, the n-channel transistor 108, and the n-channel transistor 109 may be implemented using SoC input / output transistors instead of logic transistors. Input / output transistors typically have lower leakage than logic transistors. Transistor leakage can distort the RF output and impair the attenuator's performance. Other types of low-leakage transistors can also be used.The second attenuator unit 110, the third attenuator unit 120, and the fourth attenuator unit 130 may be the same as or similar to the first attenuator unit 100.A radio frequency receiver using this attenuator will typically start with enabling control set to provide maximum attenuation. This avoids subjecting subsequent receiver circuits to high voltages (eg, greater than 3 volts) that can damage those circuits. The radio frequency receiver may then reduce the attenuation of the operating level based on the level of the received signal. In order to further protect the receiver circuit from high voltages, the attenuator may include a clamp circuit 151, 152 connected to the RF output. Clamp circuits 151, 152 may, for example, shunt the high voltage on the RF output to ground. The clamp circuit can be the same or similar to the electrostatic discharge (ESD) protection circuit used in the SoC.2-4 is a schematic diagram illustrating the operation of the attenuator of FIG. Figure 2-4 illustrates various attenuation settings. In each figure, the switches (eg, the n-channel transistor 107, the n-channel transistor 108, and the n-channel transistor 109 in the first attenuator cell 100) in the attenuator cell depend on the associated enable control Values are shown as shorted or open (shown in brackets).Figure 2 illustrates the attenuator's attenuationless setting. Each enable control is 0 and all switches are open. In a non-attenuating setting, the RF input is capacitively coupled to the RF output without attenuation (there may be a small amount of attenuation due to parasitic circuit elements such as capacitance on the RF output).Figure 3 illustrates the attenuator's high attenuation setting. Each enable control is 1 and all switches are closed. In high attenuation settings, the RF input is capacitively coupled to the RF output with high attenuation based on the relative capacitance of the attenuator cell capacitor.Figure 4 illustrates the attenuator's low attenuation setting. One of the enable controls (third enable control) is 1 and the other enable controls are 0. Thus, the switches in the third attenuator unit 130 are closed and the switches in the other attenuator units are open. In the low attenuation setting of FIG. 4, the attenuation from the RF input to the RF output is about a quarter of the attenuation of the high attenuation setting of FIG. 3 (the relative attenuation can be different from a quarter, as explained further below).When the attenuator has a large maximum attenuation and a small minimum attenuation, the performance of the RF receiver can be improved in the case where the received signal can have a large dynamic range. The term attenuation is used here to denote the ratio of RF input amplitude to RF output amplitude. 
The maximum attenuation is based on the ratio of the capacitance of the voltage dividing capacitor (eg, capacitor 103) to the capacitance of the coupling capacitor (eg, capacitor 101). The minimum attenuation is based on the ratio of the capacitance of the coupling capacitor to the parasitic capacitance to ground (eg, the capacitance on the RF output). For example, other parasitic capacitances between the RF input and the second terminal of the voltage dividing capacitor (eg, node Gp) do not increase the minimum attenuation. In the absence of parasitic capacitance, the minimum attenuation is 1 (RF output equals RF input).FIG. 5 is a layout of an attenuator unit according to embodiments disclosed herein. FIG. The view of FIG. 5 is a view commonly used to design the layout of an integrated circuit. For clarity, many details and layers (eg, via layers) are not shown in FIG. 5. In order to provide a specific example, various aspects of the layout will be described with reference to the first attenuator unit 100 of FIG. 1. The attenuator of FIG. 1 can be implemented using an array of attenuator cells.The attenuator cell layout is arranged to improve attenuator performance by keeping the parasitic capacitance low between the RF output and ground. For example, the coupling capacitor and the associated voltage dividing capacitor are formed in the same area. In contrast, some prior art attenuators have placed the coupling capacitor and the voltage dividing capacitor in the vicinity or in the vicinity. In addition, the RF input is used as a shield and separates the RF output (and the intermediate node of the attenuator unit) from ground. In addition, the ground connection to the switch is separate from the RF output. In addition to reducing unwanted parasitic capacitance, forming a coupling capacitor and a voltage dividing capacitor in the same area can also reduce the size of the attenuator (integrated circuit area).The attenuator cell layout includes a switch area 711 at the center. The switching region 711 includes an n-channel transistor 107, an n-channel transistor 108, and an n-channel transistor 109. The ground reference line 795 is routed longitudinally through the attenuator unit to connect to the switch region 711. The ground reference line 795 may be formed of a suitable metal layer (eg, a second metal layer ("Metal 2")).The capacitor of the positive side capacitive voltage divider and the capacitor of the negative side capacitive voltage divider are located above and below the ground reference line 795 (in the orientation of FIG. 5). The capacitor 103 (of the positive side capacitive voltage divider) is a metal-insulator-metal (MIM) capacitor formed of a metal strip 731 separated by a dielectric. The metal strips 731 are interconnected at the connection area 735. The connection region 735 includes a metal layer and a via layer, which are arranged in a manner suitable for the metal layer used in the metal strip 731. In addition to forming the capacitor 103, the connection region 735 is also used to connect the capacitor 103 to the attenuator unit. For example, the external connection region in the connection region 735 may be connected to the positive RF output and the internal connection region in the connection region 735 may connect the capacitor 103 to the n-channel transistor 107 and the n-channel transistor 109 in the switching region 711. 
Note that the ground reference line 795 is remote from the RF output, thereby avoiding parasitic capacitance between the RF output and ground.The capacitor 101 is formed using the metal plate 721. Metal plate 721 is connected to the positive RF input. Capacitor 101 is formed of a vertical flux capacitance between metal plate 721 and the portion of metal strip 731 connected to the positive RF output. The vertical flux capacitance between the metal plate 721 and the portion of the metal strip 731 connected to the switch forms a parasitic capacitor (capacitor Cp2 in the circuit model of FIG. 7).A negative-side capacitive voltage divider is similarly formed at a lower portion of the attenuator cell layout. The negative-side capacitive voltage divider includes a metal strip 732, a connection region 736, and a metal plate 721.Figure 6 is a cross-section of the portion indicated by line 7-7 in the attenuator cell layout. This cross-section is for the attenuator unit made in the six-metal layer manufacturing process.Capacitor 103 (MIM capacitor) is formed from alternating strips of metal. In the embodiment of FIG. 6, the third metal layer ("Metal 3"), the fourth metal layer ("Metal 4") and the fifth metal layer ("Metal 5") are used to form the capacitor 103. The strips are interconnected in a chessboard manner in which metal 3 strips 863, metal 4 strips 864 and metal 5 strips 865 are connected to the switch and metal 3 strips 873, 4 strips 874 and 5 strips 875 are connected To positive RF output. Due to the large surface area of the capacitor terminals and their small separation, the capacitor 103 has a large area capacitance. It is also possible to use lateral flux capacitors with capacitor terminals alternating only in each layer.The metal plate 821 (corresponding to the metal plate 721 in FIG. 5) is formed of a first metal layer ("Metal 1" closest to the integrated circuit substrate). It is to be noted that the second metal layer is not used in the embodiment of FIG. 6. The capacitor 101 is formed between the metal plate 821 and the metal strip 863. Due to the small surface area of the capacitor terminals and their large separation, the capacitor 103 has a small area capacitance.As seen in FIG. 5, the metal plate 721 extends beyond the metal strip 731. This reduces or eliminates the edge capacitance between the RF output (the first terminal of the capacitor 103) or the drain (the second terminal of the capacitor 103) of the n-channel transistor of the switch to ground (integrated circuit substrate). In contrast, the edge capacitance from the terminals of the capacitor 103 is to the RF input (metal plate 721). As discussed below with reference to the circuit model of FIG. 7, the capacitance from the terminals of the capacitor 103 to the RF input does not compromise the attenuator performance.The attenuator cell layout illustrated in FIGS. 5 and 6 is for the example six metal layer fabrication process. Variations in layout can be used in other manufacturing processes or provide different attenuator performance (eg, different maximum attenuation). The maximum attenuation of the attenuator depends on the ratio of the capacitance of the voltage dividing capacitor to the capacitance of the coupling capacitor. The ratio depends on the arrangement of the metal layers as shown in FIG. 6. For example, the omission of the second metal layer increases the separation between the RF input and the RF output, reducing the coupling capacitance. 
The voltage dividing capacitance can be changed, for example, by changing the spacing of the metal strips of the metal layer or metal-insulator-metal capacitor used. Using more metal layers or reducing the spacing between the metal strips will increase the voltage dividing capacitance. In the example six metal layer fabrication process, the sixth metal layer has a very large minimum width and spacing. Thus, the sixth metal layer is not used in the MIM capacitor.FIG. 7 is a schematic diagram of a circuit model of an attenuator unit of the attenuator of FIG. 1. FIG. The switch SW1 corresponds to the n-channel transistor 107, the switch SW3 corresponds to the n-channel transistor 109, the capacitor C1 corresponds to the capacitor 101, and the capacitor C2 corresponds to the capacitor 103, the capacitor C1 'corresponds to the capacitor 102, and the capacitor C2' corresponds to the capacitor 103. For a clear and concise description, the circuit model will be described only for the positive side capacitive voltage divider. The negative side capacitive voltage divider functions in a similar manner.The capacitor Cp1, the capacitor Cp2, the capacitor Cp2 'and the capacitor Cp2' represent parasitic capacitances associated with the implementation of the capacitor C1, the capacitor C2, the capacitor C1 'and the capacitor C2'. The capacitor Cp1 used for the layout of FIGS. 5 and 6 is mainly the capacitance between the metal plate 721 and the integrated circuit substrate. The capacitor Cp2 for the layout of FIGS. 5 and 6 is primarily the capacitance between the metal plate 721 and the metal 3 strip 873 connected to the positive RF output.When the attenuator unit is disabled, the switch is open and the positive RF input is capacitively coupled to the positive RF output through capacitor C1. The positive RF input is also capacitively coupled to the positive RF output through a series combination of capacitor Cp2 and capacitor C2. Since the series combination of capacitor Cp2 and capacitor C2 is in parallel with capacitor C1, the operation of the disabled attenuator unit can be understood without regard to the effects of capacitor Cp2 and capacitor C2. It is also possible to understand the operation of the disabled attenuator unit (eg, the parasitic capacitance associated with the switch can be small compared to the circuit element capacitance) without considering the effects of the parasitic capacitance associated with the switch. The capacitor Cp1 between the positive RF input and the ground reference only adds a capacitive load to the positive RF input and the operation of the attenuator unit can be understood without the capacitor Cp1.The switch closes when the attenuator unit is enabled. Since the capacitor Cp1 and the capacitor Cp2 are connected in parallel between the positive RF input and the ground reference, the capacitor Cp1 and the capacitor Cp2 only add a capacitive load to the positive RF input and may ignore the capacitor Cp1 and the capacitor Cp2 operating. Capacitor C1 and capacitor C2 form a voltage divider between the positive RF input and the positive RF output. The attenuation from the RF input to the RF output is C1 / (C1 + C2), where C1 is the capacitance of capacitor C1 and C2 is the capacitance of capacitor C2. For cases where C2 is much larger than C1, this attenuation is similar to C1 / C2.Note that the attenuator unit does not have material parasitic capacitance between the positive RF output and ground or across switch SW1 (between node Gp and ground). 
Such capacitors change the operation of the attenuator. Capacitance to a node other than ground (eg, voltage supply) will similarly change the operation of the attenuator.8 is a flowchart of a process for variably attenuating RF signals according to embodiments disclosed herein. The steps of the process may be performed, for example, using the attenuator of FIG. 1 and will be described with reference thereto. This process produces an RF output with selectable attenuation relative to the RF input.In block 210, the process couples the RF input to the RF output with a plurality of coupling capacitors. For example, the capacitors 101, 111, 121, 131 couple the positive RF input to the positive RF output.In block 220, the process conditionally connects the terminals of the plurality of voltage dividing capacitors to ground. For example, the n-channel transistors 107, 117, 127, 137 conditionally connect the terminals of the capacitors 103, 113, 123, 133 to ground. When the voltage dividing capacitor terminal is not connected to ground, the voltage dividing capacitor is opened. The process conditionally connects the terminals based on the desired attenuation from the RF input to the RF output. The terminals of the voltage dividing capacitor that are not conditionally connected to ground are connected to the RF output.The process of FIG. 8 may be modified, for example, by adding, omitting, reordering, or changing each box. In addition, the boxes may be executed concurrently.While various features of the invention have been described above with respect to particular embodiments, many variations of the invention are possible. For example, attenuators may be formed using other fabrication processes that include processes with different numbers of metal layers and different types of transistors. In addition, attenuators can have single-ended (rather than differential) inputs and outputs. In addition, the attenuator may have a different number of attenuator units, attenuator units of different sizes (eg, different capacitances), and attenuator units may have shared enable (eg, binary weighting). Different switches may be used in the attenuator unit, such as a p-channel transistor that conditionally switches an intermediate node in the attenuator cell to a voltage supply. In another variant, the switch is omitted from the attenuator unit.Directional terms such as "up," "down," "left," and "right" are used to describe some features. The term is used to provide a clear and concise description. These terms are relative and should not be extrapolated to a particular absolute direction. Additionally, the features of various embodiments may be combined in different combinations than those described above.The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein may be applied to other embodiments without departing from the spirit or scope of the invention. Accordingly, it is to be understood that the description and drawings presented herein represent presently preferred embodiments of the invention and, thus, represent the broadly contemplated subject matter of the invention. 
It will be further understood that the scope of the present invention fully encompasses other embodiments that may be apparent to those skilled in the art, and the scope of the present invention is accordingly not to be limited except as by the appended claims. |
A method includes: generating, based on a hash function using at least one input including first data, a first digest; storing the first data in a memory; reading the first data from the memory; generating, based on the read data, a second digest; comparing the first digest and the second digest; and determining, based on comparing the first digest and the second digest, whether the read data is corrupted. |
CLAIMSWhat is claimed is:1. A method comprising:generating, based on a hash function using at least one input including first data, a first digest;storing the first data in a memory;reading the first data from the memory;generating, based on the read data, a second digest;comparing the first digest and the second digest: anddetermining, based on comparing the first digest and the second digest, whether the read data is corrupted2. The method of claim 1 , wherein the at least one input used by the hash function further includes at least one of:an address at which the first data is stored in the memory; ormetadata associated with the first data3. The method of claim 1 , wherein the memory is a boot device of a controller, the method further comprising copying the read data and the first digest to a system memory of the controller.4. The method of claim 1 , wherein reading the first data from the memorycomprises reading the first data by a first computing device, the method further comprising sending the read data to a second computing device, wherein comparing the first digest and the second digest is performed by the second computing device.5. The method of claim 1 , further comprising, in response to determining that the read data is corrupted, performing at least one action.8. The method of claim 5, wherein the first data is stored in the memory by acontroller, and the at least one action comprises at least one of:sending a signal to the controller that indicates the first data is corrupted;re-reading the first data from the memory;terminating a process executing on the controller; or containing data identified as being corrupted.7. The method of claim 1 , wherein the memory is a boot device, the method further comprising copying, by a controller, a plurality of rows of data from the boot device to a system memory of the controller, wherein the rows include a first row and a second row, the first data is stored in the first row, and comparing the first digest and the second digest is performed prior to copying the second row.8. The method of claim 1 , wherein the memory is a code area of a system memory of a first computing device, the method further comprising copying a plurality of rows of data from the code area to a runtime area of the system memory, wherein the rows include a first row and a second row, the first data is stored in the first row, and comparing the first digest and the second digest is performed prior to copying the second row.9. The method of claim 8, further comprising sending the first digest to a second computing device, wherein generating the second digest is performed by the second computing device.10. The method of claim 9, wherein the first computing device is a field- programmable gate array (FPGA), and the second computing device is a an FPGA, a controller, or a computing device executing a hypervisor.11. The method of claim 1 , further comprising:storing the first digest in the memory as being associated with the stored first data;generating a third digest for a block of data stored in the memory, wherein the block of data includes a plurality of rows of data, and the rows include a first row storing the first data; anddetermining, using the third digest, whether the block of data is corrupted.12. The method of claim 1 , wherein storing the first data comprises writing, by a controller, the first data to a volatile memory or a non-volatile memory.13. 
The method of claim 1 , wherein the first data is stored in a first row of a plurality of rows stored in the memory, and the plurality of rows corresponds to matrices of an artificial neural network.14. The method of claim 1 , wherein storing the first data comprises writing the first data by a controller, the method further comprising, after reading the first data, storing the read data in a system memory of the controller.15. The method of claim 1 , wherein storing the first data comprises writing the first data by a controller, and the memory is a system memory of the controller16. The method of claim 1 , wherein storing the first data comprises storing the first data by a first computing device, the method further comprising sending the first digest to a second computing device, wherein comparing the first digest and the second digest is performed by the second computing device.17. The method of claim 1 , wherein storing the first data comprises storing the first data in a first row of a plurality of rows stored in the memory, the method further comprising generating a third digest for a block of data stored in the memory, wherein the block includes the plurality of rows, and wherein the third digest is generated using a hash function with at least one input including at least one of: data stored in the plurality of rows; ora plurality of respective digests, wherein each respective digest corresponds to a digest generated for a respective row of the plurality of rows.18. A system comprising:at least one processor; andmemory containing instructions configured to instruct the at least one processor to:generate, based on a hash function using data as an input, a first digest; store the data in a first memory;read the data from the first memory;generate, based on the read data, a second digest; anddetermine, based on a comparison of the first digest and the seconddigest, whether the read data is corrupted.19. The system of claim 18, wherein:the first memory comprises a system memory, or a memory of a boot device; the at least one processor comprises a controller, a field-programmable gate array, or a computing device executing a hypervisor; andthe method further comprises storing the first digest in at least one of the first memory or a second memory.20. A non-transitory computer storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to:generate a first digest using a hash function, wherein the hash function uses inputs including a page of data, and a first address at which the page is to be stored after generating the first digest;store the page at the first address in a memory;read the page from the first address of the memory;generate, based on the read page and the first address, a second digest; and determine, based on a comparison of the first digest and the second digest, whether the read page is corrupted. |
DETERMINING VALIDITY OF DATA READ FROMMEMORY BY A CONTROLLERRELATED APPLICATION[0001] The present application claims the benefit of the filing date of U.S. Pat. App. Ser. No. 15/991 ,463, filed May 29, 2018 and entitled“DETERMINING VALIDITY OF DATA READ FROM MEMORY BY A CONTROLLER,” the entire disclosure of which application is hereby incorporated herein by reference.FIELD OF THE TECHNOLOGY[0002] At least some embodiments disclosed herein relate to memory operations in a computing system in general and more particularly, but not limited to determining the validity of data read from memory (e.g., a volatile or non-volatile memory device) by a computing device (e.g., a controller).BACKGROUND[0003] Memory devices are frequently provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory, including random-access memory (RAM), static random access memory (SRAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others, may require a source of applied power to maintain its data. Non-volatile memory, by contrast, can retain its stored data even when not externally powered. Non-volatile memory is available in a wide variety of technologies, including flash memory (e.g., NAND and NOR) phase change memory (PCM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others.[0004] Memory devices can include large arrays of memory cells for storing data, frequently organized into rows and columns. Individual memory cells and/or ranges of memory cells can be addressed by their row and column. When a memory array is addressed, there may be one or more layers of address translation to, for example, translate between a logical address utilized by a host device and a physical address corresponding to a location in the memory array. Although uncommon, it is possible for the address information provided to a memory device on a command/address bus thereof to be corrupted by an error, such that an internal operation of the memory device (e.g., a read operation, a write operation, an erase operation, etc.) can be performed on a different physical address than was requested by a host device.[0005] In some cases, memory devices are used to store data for operating autonomous vehicles. For example, a control system of a vehicle can use stored data to autonomously navigate and drive the vehicle. In one example, a memory device stores data for an artificial neural network (ANN) that analyzes sensor inputs provided by sensors of the vehicle.[0006] Recent developments in the technological area of autonomous driving allow a computing system to operate, at least under some conditions, control elements of a vehicle without the assistance from a human operator of the vehicle. For example, sensors (e.g., cameras and radars) can be installed on a vehicle to detect the conditions of the surroundings of the vehicle on a roadway. A computing system installed on the vehicle analyzes the sensor inputs to identify the conditions and generate control signals or commands for the autonomous adjustments of the direction and/or speed of the vehicle, without any input from a human operator of the vehicle. 
Autonomous driving and/or advanced driver assistance system (ADAS) typically involves use of an ANN for the identification of events and/or objects that are captured in sensor inputs.[6067] In general, an artificial neural network (ANN) uses a network of neurons to process inputs to the network and to generate outputs from the network. Each neuron m in the network receives a set of inputs pk, where k = 1 , 2, ... , n. !n general, some of the inputs to a neuron may be the outputs of certain neurons in the network; and some of the inputs to a neuron may be the inputs to the network as a whole. Theinput/output relations among the neurons in the network represent the neuron connectivity in the network.[6668] Each neuron m has a bias bm, an activation function fm, and a set of synaptic weights wmkfor its inputs pkrespectively, where k = 1 , 2, ... , /?. The activation function may be in the form of a step function, a linear function, a log-sigmoid function, etc. Different neurons in the network may have different activation functions.[6669] Each neuron m generates a weighted sum smof its inputs and its bias, where Sm=bm + Wmixpi * wm2xp?. + + Wmnxpn- The output amof the neuron m is the activation function of the weighted sum, where = fm( sm).[6616] The relations between the input(s) and the output(s) of an ANN in general are defined by an ANN mode! that includes the data representing the connectivity of the neurons in the network, as well as the bias bm, activation function fm, and synaptic weights Wmkof each neuron m. Using a given ANN model a computing device computes the output(s) of the network from a given set of inputs to the network.[0011] For example, the inputs to an ANN network may be generated based on camera inputs; and the outputs from the ANN network may be the identification of an item, such as an event or an object.[0012] For example, U.S. Pat. App. Pub. No. 2017/0293808, entitled“Vision-Based Rain Defection using Deep Learning”, discloses a method of using a camera installed on a vehicle to determine, via an ANN model, whether the vehicle in in rain or no rain weather.[0013] For example, U.S. Pat. App. Pub. No. 2017/0242436, entitled“RoadConstruction Detection Systems and Methods”, discloses a method of detecting road construction using an ANN model.[0014] For example, U.S. Pat. Nos 9,672,734 and 9,245,188 discuss techniques for lane detection for human drivers and/or autonomous vehicle driving systems.[0015] In general, an ANN may be trained using a supervised method where the synaptic weights are adjusted to minimize or reduce the error between known outputs resulted from respective inputs and computed outputs generated from applying the inputs to the ANN. Examples of supervised learning/iraining methods include reinforcement learning, and learning with error correction.[0016] Alternatively or in combination, an ANN may be trained using anunsupervised method where the exact outputs resulted from a given set of inputs is not known a priori before the completion of the training. The ANN can be trained to classify an item into a plurality of categories, or data points into clusters. Multiple training algorithms are typically employed for a sophisticated machine learning/training paradigm.[0017] The disclosures of the above discussed patent documents are hereby incorporated herein by reference. 
BRIEF DESCRIPTION OF THE DRAWINGS[0018] The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.[0019] FIG, 1 shows a system using an Artificial Neural Network (ANN) model, according to one embodiment.[0020] FIG. 2 shows an example of a vehicle configured in the system of FIG. 1 where the vehicle uses an Artificial Neural Network (ANN) model, according to one embodiment.[0021] FIG. 3 shows a system for a vehicle including system memory and a boot device, according to one embodiment.[0022] FIG. 4 shows a system for a vehicle, where the system determines whether read data is corrupted using a digest comparison corresponding to the read data, according to one embodiment.[0023] FIG. 5 shows a computing system including an application controller that reads data from a boot device and/or a system memory, according to one embodiment.[0024] FIG. 6 shows an example of a boot phase for the computing system of FIG. S where digest data is stored in a security device, according to one embodiment.[0025] FIG. 7 shows an example of runtime operation for the computing system of FIG. 6, according to one embodiment.[0026] FIG. 8 shows an example of a boot phase for the computing system of FIG. 5 where digest data is stored in a system memory, according to one embodiment.[0027] FIG, 9 shows an example of runtime operation for the computing system of FIG. 8, according to one embodiment.[0028] FIG. 10 shows a method for determining whether data read from a memory is corrupted, according to one embodiment.DETAILED DESCRIPTION[0029] Implementing fault tolerance into a computing system often presents one or more technical problems. For example, if fault tolerance is implemented in an ad-hoc manner, various fault tolerant mechanisms can themselves become a primary source of faults and unreliability in the resulting architecture.[0030] In one example, essential services are provided by a fault tolerantarchitecture including a main controller used in an automotive application compliant with IS026262 (Road Vehicles - Functional Safety). A feedback mechanism is used such that data transmitted on a bus from a memory device to the main controller is not affected by errors induced by noise, cross talk and other source of soft errors.[0031] The memory device can be, for example, either volatile or non-volatile, and provides data to the controller in order to execute and/or store information. A portion of the memory can store critical code, for example firmware, software, and/or results and data for temporary calculations used by a process or application executing on the controller.[0032] In the automotive industry, the IS026262 (Road Vehicles - Functional Safety) standard provides guidelines to reduce the probability that a system executes incorrect codes. The implementation of this standard in autonomous driving applications is done using redundancy and the error correction mechanism in order to detect and correct errors generated inside an array.[0033] However, in some cases, this error correction not only implements a feature to correct the errors, but also can cause technical problems. 
For example, if a page contains a number of errors that exceeds (overloads) the correction power of the correction algorithm itself, then the algorithm introduces additional errors in different locations.[0034] Due to the above problems, it is desirable in some cases for a system to be informed about the correction of the internal error, and/or to disable an internal mechanism and let an external application controller take over management of the correction. In addition, in real-time operating systems, it is desirable in some cases to ensure the correctness of the data transmission between the memory device and the system controller, and the subsequent execution by the system controller of the proper code.[0036] In some cases, errors can occur that affect address information provided by a controller to a memory device on a command/address bus (e.g., during address translation, during command/address bus operations, etc.). Such errors can cause a memory operation to be performed at a different physical address than is desired. In other cases, stored data can become invalid due to undesired changes in the storage state of the data.[0036] There are, for example, several potential causes of data errors in memory. Typical memory systems contain a spare area used to store ECC data. For example, for IMAND flash, the ECC calculation is done externally such that there is an external controller that calculates and stores ECC data. In another example, In the case of DRAM, ECC corrections are internally calculated and are not available to an externa! controller. Some DRAM implementations can communicate externally when ECC corrections are made. However, in some cases, the number of data errors that occur in a memory system can exceed the capability of the system’s ECC correction capability. This can introduce errors into data that is read from the memory.[0037] In another example of a data error, bits can flip from one to zero, or zero to one, due to x-rays or alpha particles from space impacting the capacitor charge in a ceil that stores data. Also, aging can cause bits to flip slowly over time, or evensometimes to flip suddenly.[0038] Errors can also be caused by incorrect page decoding. A defective charge pump or a logic gate transitioning from the device interface to the memory array can cause a bit to be stuck at the value of another bit. This situation can cause data to be read from the wrong page. In some cases, this would not flag any errors in the computing system because the controller issued the correct address, and the ECC read from the page would be correct for the page that was read. However, the read data would be from the wrong page.[0039] Accordingly, various embodiments herein verify that the proper data is read from a memory (e.g., to determine whether the data is corrupted). For example, this is done to verify that the data read from a non-volati!e memory corresponds to the address from which data has been requested by a controller or other computing device.[0040] In some cases, a computing system contains a controller (e.g., amicroprocessor) and a memory sub-system (e.g , volatile memory used as system memory by the controller). Data written from the controller to the memory and subsequently read back to the controller can get corrupted in many ways. Some existing mechanisms, such as error correction codes (ECC), can detect and fix some of these errors. However, ECC processing can become overloaded, and errors will remain in the data read from the memory. 
Various embodiments described below can detect data corruption or addressing errors that occur anywhere along the data storage or transmission path to or from the controller and memory.[0041] At least some embodiments disclosed herein provide a method that uses a cryptographic hash function to detect data or address corruption in a computing system (e.g., an error that occurs in a read address during a memory storage operation, or read data that has been corrupted while stored in the memory). In one example, the computing system includes an application controller that implements an artificial neural network (ANN). For example, memory (e.g., DRAM) can store pages in the memory that correspond to matrices for the ANN. if data stored for the ANN becomes corrupted, or there is a read address error during operation that incorrectly returns data stored at an address other than was intended, improper control or operation of a vehicle controlled by the ANN can result in physical damage and/or severe personal injury.[0042] In various embodiments, when data is written from the controller to memory (e.g., a boot device or a system memory), a cryptographic hash function is run to generate a hash digest. The hash digest is stored in memory along with the data written by the controller. In some cases, the input to the hash function can include extra data associated with the data to be written such as metadata and/or the address to which the data is written in the memory.[0043] Later, when the data is read back from memory into the controller, the controller (or another computing device such as a security device running a safety hypervisor that monitors activities to provide secure operation of the computing system) re-calculates the hash digest on the read data and compares it with the hash digest previously stored when the data was written to memory. If the two hash digests are not the same, it is determined that the data has been corrupted. In response to determining that the read data is corrupted, one or more actions can be performed.For example, the controller can take measures to fix the problem or contain the corrupted data.[0044] Various embodiments as described herein can provide a solution to one or more of the above data error problems by using cryptography to enable a system to detect any corruption with either addresses or data going to or from the memory. This includes any bus errors at any point in the path between the host controller and memory. If an address is wrong or a data bit has changed in memory or anywhere along the transmission path, a hash digest that is calculated by the host controller will be different from the hash digest stored in memory, and the data error will be detected.[0045] In some cases, various embodiments use a cryptographic engine to generate the hash digest when data is read in order to determine if the address and read data are linked in unusual ways, such as aging of the decoding circuits or high voltage circuits that causes address or data bits to get linked together or stuck at a wrong value.[0046] The embodiments described herein also can be used with many various types of memory. For example, the embodiments can be used with two types of memory in predominant use in computing systems today: DRAM and NAND flash.The present embodiments can also be applied to many other types of memory as well such as 3D XPoint, resistive RAM, Spin Torque, etc. 
In some non-limiting examples discussed below, the memory stores data for an artificial neural network (ANN).[0047] F!G. 1 shows a system using an Artificial Neural Network (ANN) model 119, according to one embodiment. The system of FIG, 1 includes a centralized server (101 ) in communication with a vehicle 111 via a communications network (102).[0048] The server (101 ) includes a supervised training module (117) to train, generate, and update an artificial neural network (ANN) model (119) that includes neuron biases (121 ), synaptic weights (123), and activation functions (125) of neurons in a network used for processing sensor data generated in the vehicle 111.[0049] Once the ANN model (119) is designed, trained and implemented, e.g., for autonomous driving and/or advanced driver assistance system, the ANN model (119) can be deployed on vehicle 111 for real-world usage.[0050] Typically, the vehicle 111 has sensors, such as a visible light camera, an infrared camera, a UDAR, a RADAR, a sonar, and/or a set of peripheral sensors. The sensors of the vehicle 111 generate sensor inputs for the ANN model (119) in autonomous driving and/or advanced driver assistance system to generate operating instructions, such as steering, braking, accelerating, driving, alerts, emergency response, etc.[0061] During the operations of the vehicle 111 , the vehicle 111 encounters items, such as events or objects, that are captured in the sensor data. The ANN model (119) is used by the vehicle 111 to provide the identifications of the items to facilitate the generation of commands for the operations of the vehicle 111 , such as for autonomous driving and/or for advanced driver assistance.[0062] Some of the encountered items may be unexpected and thus not fully considered in the design, training and/or implementation of the ANN model (119). As a result, the ANN model (119) may identify the unexpected item as unknown, or fails to classify the item into a single known category.[0063] A function of the vehicle 111 for autonomous driving and/or advanced driver assistance may process such an unknown item according to a pre-programmed policy. For example, as a response to the detection of an unknown event or object, the vehicle (111 ) may be programmed to avoid the item, initiate a safe-mode response, alert a human operator to take control, request assistance from a human operator, place the vehicle in a safer situation by keeping a distance, and/or slow down for a stop, etc.[0054] When an output, generated by using the ANN model (119) from a particular sensor input, identifies an unknown item (or classifies an item with an insufficient precision or confidence level), the vehicle 111 is configured to store the particular sensor input that is responsible for the output and/or transmit the sensor input to the centralized server (101 ). The sensor input selected and transmitted back to the server (101 ) enriches the sensor data (103) for the training and updating of the ANN model (119) through a supervised machine learning technique implemented in the training model (117)[0055] For example, vehicle (111 ) may communicate, via a wireless connection (115) to an access point (or base station) (105), with the server (101 ) to submit the sensor input to enrich the sensor data (103) as an additional dataset for machine learning implemented using the supervised training module (117). 
The wireless connection (115) may be made via a wireless local area network, a cellularcommunications network, and/or a communication link (107) to a satellite (109) or a communication balloon.[0056] Periodically, the server (101 ) runs the supervised training module (117) to update the ANN model (119). The server (101 ) may use the sensor data (103) enhanced with the sensor inputs from the vehicle (111 ) and/or from similar vehicles that are operated in the same geographical region or in geographical regions having similar traffic conditions to generate a customized version of the ANN model (119) for the vehicle (111 ).[0057] Since the updated version of the ANN model (119) is trained, via machine learning, using the sensor inputs associated with the previously unexpected or unrecognized items to recognize and/or classify with certainty and accuracy these items and/or similar items. Thus, the capability of the ANN model (119) is enhanced.[0058] The updated ANN model (119) can be downloaded to the vehicles (e.g., 111 ) via the communications network (102), the access point (or base station) (105), and communication links (115 and/or 117) as an over-the-air update of thefirmware/software of the vehicles (e.g., 111 ).[0059] Optionally, the vehicle (111 ) has a self-learning capability. After an extended period on the road, the vehicle (111 ) may generate a new set of synaptic weights (123), neuron biases (121 ), activation functions (125), and/or neuronconnectivity for the ANN model (119) installed in the vehicle (111 ) using the sensor inputs it collected and stored in the vehicle (111 ), such as the sensor inputs capturing the unexpected, unknown, and/or unrecognized events or objects.[0060] As an example, the centralized server (101 ) may be operated by a factory, a producer or maker of the vehicles (111 , .. , 113), or a vendor of the autonomous driving and/or advanced driver assistance system for vehicle 111[0061] FIG. 2 shows an example of a vehicle configured in the system of FIG, 1 where the vehicle uses Artificial Neural Network (ANN) model 119, according to one embodiment. The vehicle (111 ) of FIG, 2 includes an infotainment system (149), a communication device (139), one or more sensors (137), and a computer (131 ) that is connected to some controls of the vehicle (111 ), such as a steering control (141 ) for the direction of the vehicle (111 ), a braking control (143) for stopping of the vehicle (111 ), an acceleration control (145) for the speed of the vehicle (111 ), etc.[0062] The computer (131 ) of the vehicle (111 ) includes one or more processors (133), memory (135) storing firmware (or software) (127), the ANN model (119) (e.g., as illustrated in FIG, 1), and other data (129).[0063] Memory 135 also includes system memory 155. For example, system memory 155 can store matrix rows for an ANN (e.g., see FIGS. 6-9 below). Digest comparison as described herein can be used to determine validity of data read from system memory 155.[0064] The one or more sensors (137) may include a visible light camera, an infrared camera, a LIDAR, RADAR, or sonar system, and/or peripheral sensors, which are configured to provide sensor input to the computer (131 ). 
A module of the firmware (or software) (127) executed in the processor(s) (133) applies the sensor input to an ANN defined by the model (119) to generate an output that identifies or classifies an event or object captured in the sensor input, such as an image or video clip.[0066] The identification or classification of the event or object generated by the ANN model (119) can be used by an autonomous driving module of the firmware (or software) (127), or an advanced driver assistance system, to generate a response.The response may be a command to activate and/or adjust one of the vehicle controls (141 , 143, and 145).[6066] Optionally, the identification or classification of the event or object is presented to an occupant of the vehicle (111 ) via the infotainment system (149).[0667] When the identification or classification of the current event or object is to be improved (e.g., when the event or object is identified as unknown, or identified as one of multiple possible events or objects, or identified as being an event or object with a confidence level below a threshold), the computer (131 ) selects the sensor input (e.g., the image or video clip, or data derived for the ANN from the image or video clip) for storage in the memory (135)). Subsequently, or in real time, the computer (131 ) transmits the selected sensor input to the server (101 ) illustrated in FIG. 1 using the communication device (139).[0068] The server (101 ) stores the received sensor input as part of the sensor data (103) for the subsequent further training or updating of the ANN model (119) using the supervised training module (117).[0069] When an updated version of the ANN model (119) is available in the server (101 ), the vehicle (111 ) may use the communication device (139) to download the updated ANN model (119) for installation in the memory (135) and/or for thereplacement of the previously installed ANN model (119).[0070] In other embodiments, the computer 131 is a controller such as, for example, a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The computer 131 can include a processor 133 configured to execute instructions stored in memory. The memory of the computer 131 can include embedded memory configured to perform various processes, logic flows, and routines for controlling operation of the vehicle 111 , including managing the system memory and handling communications between a memory device and a host device (not shown).[0071] In some embodiments, the embedded memory can include memory registers storing, e.g., memory pointers, fetched data, etc. The embedded memory can include volatile and/or non-volatile memory (e.g., DRAM, SRAM, NAND, NOR, PCM) for storing the memory registers, and can also include read-only memory (ROM) (e.g., for storing micro-code).[0072] In operation, the computer 131 can directly write or otherwise program (e.g., erase) the various memory regions of the main memory (e.g., system memory 155), such as by writing to groups of memory pages and/or memory blocks. In NAND-based memory, a write operation often includes programming memory cells in selected memory pages with specific data values (e.g., a string of data bits having a value of either logic 0 or logic 1 ). 
An erase operation is similar to a write operation, except that the erase operation re-programs an entire memory block or multiple memory blocks to the same data state (e.g., logic 1 ).[0073] The computer 131 can communicate with a host device (not shown) over a host-device interface. In some embodiments, the host device and the computer 131 can communicate over a serial interface, such as a serial attached SCSI (SAS), a serial AT attachment (SATA) interface, a peripheral component interconnect express (PCIe), or other suitable interface (e.g., a parallel interface). The host device can send various requests (in the form of, e.g., a packet or stream of packets) to the computer 131. A request can include a command to write, erase, return information, and/or to perform a particular operation (e.g., a TRIM operation). A request can also include an interrupt or another command that indicates a change in condition (e.g., a power loss event), which can trigger the implementation of a power loss algorithm.[0074] The host device (not shown) can be any one of a number of electronic devices capable of utilizing memory for the temporary or persistent storage of information, or a component thereof. For example, host device may be a computing device such as a desktop or portable computer, a server, a hand-held device (e.g., a mobile phone, a tablet, a digital reader, a digital media player), or some component thereof (e.g., a central processing unit, a co-processor, a dedicated memory controller, etc.). Host device may be a networking device (e.g., a switch, a router, etc.) or a recorder of digital images, audio and/or video, a vehicle, an appliance, a toy, or any one of a number of other products. In one embodiment, host device may be connected directly to a memory device, although in other embodiments, host device may be indirectly connected to memory device (e.g., over a networked connection or through intermediary devices).[0075] FIG, 3 shows a system for a vehicle including system memory 154, a boot device 156, and a storage device 150 according to one embodiment. The boot device 156 can be configured, for example, by a firmware update. The update can be firmware received, for example, into non-volatile memory of boot device 156.[0076] In one embodiment, the firmware is received by a wireless interface (not shown) of application controller 152. The received update is sent to a memory 158 of boot device 156.[0077] Various types of applications can be controlled and/or supported by application controller 152. Examples of such applications include a cluster, an entertainment or infotainment system, a seat control of vehicle, and a powertrain system of a vehicle.[0078] In one embodiment, a cryptographic engine (not shown) is used to generate various cryptographic values (e.g., hash digests of data). In one embodiment, the cryptographic engine compares hash digests to determine the validity of data (e.g., read by application controller 152 from boot device 156 or system memory 154). In one example, a digest used by the cryptographic engine in the comparison to determine validity of read data is generated using an algorithm such SHA256, SHA2, etc. The cryptographic engine determines, for example, whether to accept or reject the data based on the digest comparison (e.g., digest of data prior to storage in memory as compared to digest of data read from memory). In response to this determination, various actions can be performed, such as for example described below. 
In one example, the cryptographic engine includes one or more processors and memory located on boot device 156 In another example, the cryptographic engine is on a security device executing a safety hypervisor, or on the application controller 152 itself.[0079] Data may be transferred between components of the system viainterconnects 168, 170, 172, each of which may be, for example, an internal or external data or other bus (e.g., Peripheral Component Interconnect (PCI), PCI extended (PC!- X), PCI Express (PCIe)), a communication portion, and/or a computer network.[0080] In one embodiment, one or more of storage device 150, application controller 152, and system memory 154 are portions of a system-on-chip (SOC) device (e.g., all of these components are on the same SOC chip). In one embodiment, boot device 156 may be included as part of the SOC chip. In other embodiments, each of these components may be implemented on separate chips (e.g., mounted on and connected by wiring on a hardware card or other structure).[0081] In one example, application controller 152 is the main MCU running a system (e.g., INTEL corei7 is an application controller of a computer). Various controllers (e.g., memory controller) in the surrounding system serve application controller 152 to execute functions.[0082] In one embodiment, firmware or run-time code is received, via application controller 152, from boot device 156 or system memory 154. The determination is made to reject the data based on the digest comparison above. In response to determining to reject the data, application controller 152, for example, updates at least a portion of data in the boot device 156 and/or the system memory 154.[0083] For example, the data updated may be a software program that includes the rejected data. The determination to reject the data may be communicated from the cryptographic engine to application controller 152. In one embodiment, security of firmware or run-time code is checked using a digest comparison as described herein.[0084] In one example, a page of updated code from an OTA update is received and written into boot device 156 or system memory 154. In one example, a page has a size of at least 4K bytes.[0085] In one example, if an OTA firmware update is rejected, then the entire firmware or run-time code content corresponding to the update is deemed defective or insecure. In such a case, the firmware or run-time code is, for example, updated by a newly-requested secure over-the-air update. [0086] In one embodiment, application controller 152 is used to store data in system memory 154. The application controller 152 generates a first digest using a bash function. The inputs to the hash data include a page of data to be stored in system memory 154, and also an address at which the page will be stored in system memory 154. After generating the first digest, application controller 152 stores the page at the address.[0087] At a later time, application controller 152 reads the stored page from system memory 154 by providing the address above used to store the page. Application controller 152 generates a second digest using the read page and the address as inputs to the hash function. Then, application controller 152 compares the first digest and the second digest, and makes a determination based on this comparison whether the read page is corrupted. 
For example, the read page can be corrupted due to an error in the data that was stored in system memory 154, and/or due to an error in the addressing as provided to system memory 154.[0088] In one embodiment, the first digest generated prior to storage of data is stored in the system memory 154. For example, a page of data can be stored along with the first digest that was generated. In some cases, the first digest can be stored in a spare area of system memory 154.[0089] In another example, the page of data is stored in a non-volatile memory (e.g., boot device 156). In this example, the first digest can be stored in additional cells that are added to or available in a spare area of the non-volatile memory. For example, the first digest can be stored as being associated with a row of the memory in which the page of data is stored.[0090] In one example, it is desired to program the following pattern: Data = a page of data that the controller needs to store. The pattern Data is to be stored at the address location n. The controller 152 (or a cryptographic engine of another device) calculates the Digest (n) = HASH (Data jj Page n Address) that represents the digest associated with the page and its address. As described in the foregoing, the data to be stored is concatenated (indicated by“||”) with the address for storage of the page.[6091] The controller stores the Data, the address, the ECC (associated to the page without a signature and the Digest fn) value) and the Digest (n). The digest is generated, for example, as follows: Digest (n) = HASH (Data jj metadata (if any) jj Page n Address). In one example, the Page n Address is a row address.[6092] In one example, the read mechanism used to read the page of data from the memory is a mechanism that is used to communicate with the controller 152 (e.g., Nand O!MFI, SPI, a specific protocol, NOR CFI, DRAM, etc.) The application controller sends the address n of the page, the memory receives the address and then sends out the following data according to the specific interface protocol:Data + metadata (if any)ECC (if used)Digest (n)[0093] When the controller 152 receives the above information, the controller 152 executes the following flow:Read (and correct) the Data using the ECC (if used); andCalculate the second digest (e.g., Expected_Digest (n)) as follows:Digest (n) = HASH (Data jj metadata (if any) jj Page n Address)[0094] The controller 152 can interpret the presence of a mismatch inside the data read by looking at the comparison of the first digest generated prior to storage of the data to the second digest expected (and calculated) when reading the stored data. If the expected digest does not match the prior stored digest, the data is determined as being invalid.[0095] FIG, 4 shows a system for a vehicle (e.g., vehicle 111 of FIGs, 1-2), where the system determines whether read data is corrupted using a digest comparison corresponding to the read data, according to one embodiment. In variousembodiments, system memory 154 can store various types of data. 
In one example, an OTA update is received, via a wireless interface (not shown) of application controller 152 by buffer 204 as a stream of data portions (e.g., a data portion can be a page or a group of pages) that are stored in system memory 154.[0096] In one embodiment, in response to cryptographic engine 202 determining that a page of data is invalid based on digest comparison, application controller 152 discards software previously-stored in system memory 154 from which the invalid page of data was initially obtained. In one example, in response to determining to reject the page of data, cryptographic engine 202 causes application controller 152 to enter or remain in a rescue mode.[6097] In one embodiment, the application controller 152, before reading and/or using data from boot device 156, verifies the identity of the boot device 156 (e.g., to avoid a need for replacement of the boot device component). In this embodiment, the identity verification can be based in part on a block digest (e.g., block digest 511 of FIG, 6 below) stored in the boot device 156.[6698] In one embodiment, the previously-stored software was earlier obtained by an over-the-air update requested by application controller 152 In response to discarding the previously store software, application controller 152 makes a request for a new secure over-the-air update (e.g., from the same or a different source).[0099] In one embodiment, the over-the-air update is received from a computing device such as a server (e.g., server 101 of F!G. 1 ). When the application controller 152 is in a rescue mode, the application controller 152 may load rescue mode code from boot device 156, and use at least a portion of rescue mode code to obtain from the server a new update of the software that was previously rejected.[00100] In one embodiment, data received by buffer 204 is code obtained from storage device 150. In response to determining that the data is valid, application controller 152 copies the data from buffer 204 to system memory 154.[00101] In another embodiment, the data received by buffer 204 is a portion of run time code stored in system memory 154. A determination is made whether the code is valid using digest comparison as described herein !n response to determining to accept ail portions of data of the run-time code, the run-time code is executed by application controller 152.[00102] In one embodiment, the memory used on boot device 156, storage device 150, and/or system memory 154 can be a non-volatile storage media (e.g., flash memory) and/or volatile memory. This memory may, for example, store the boot code and/or rescue code.[00103] For example, during the boot of an application by application controller 152, the boot code, the operating system (OS), and software code/applications will be moved (in a compressed manner) from the storage device 150 to the system memory 154. Then, this data is uncompressed and execution by application controller 152 begins. 
When the system is up after the boot, the system memory (e.g., the volatile memory) contains, for example, the entire operating system and ail of the software code/applications.[00104] In one embodiment, the boot device 156 has a hardware security module capability and implements the following features: an authenticated command set, protection against replay attacks; a secret key stored inside memory of the boot device 156 (e.g., the secret key is shared with the system developer, which is a source); a cryptographic engine with a built-in, key-based MAC calculator; and a local memory that can be used for program operations[00105] In one embodiment, application controller 152 accesses the boot device 156 when performing some or all of its operations. This access involves using the secret keys, algorithms, and an authenticated command set. The command set is protected against replay attack. Application controller 152 may certify the validity of data stored in system RAM and/or in storage device 150, and also may certify the validity of secure over-the-air updates such as for firmware updates or boot device updates (security firmware).[00106] In one example, if one or more of any received data portions is found to be not valid, the entire content of the system memory is discarded. Application controller 152 loads rescue mode code from boot device 156 and runs a safety firmware with basic functionalities. In one example, these functionalities include requesting a new certified update from another source.[60107] In another embodiment, at power on of a system, application controller 152 receives secure code stored in storage device 150 that is to be executed. The secure code is certified as being valid prior to executing further operations by application controller 152. Boot code can be used to start an application of application controller 152.[66168] Various embodiments regarding a secure over-the-air (SOTA) update are now described below. In one embodiment, the update is used for updating code in boot device 156 and/or code in storage device 150. For example, the update may be a real software update. In another example, the update may be performed to repair code from a recognized attack determined by digest comparison.[66169] In one embodiment, application controller 152 receives an update from a remote location. This update can be, for example, a storage device content update.A system provider can, for example, use this approach to update an application, such as improving functionalities and/or security. In one embodiment, application controller 152 stores the received update inside system memory 154 and/or stores a signature of the update inside system memory 154.[66116] In one embodiment, if the received data is authenticated, the update is accepted. For example, the update is copied inside the storage device 150 for a system firmware update, or inside boot device 156 for a boot firmware update. A signature of the software update can be, for example, stored inside boot device 156 for certifying subsequent operations (e.g., operations during boot and/or run-time). If the received data fails authentication, then the system can enter or remain in a rescue mode.[66111] In one embodiment, when an update is downloaded by application controller 152, an image is stored first in system memory (e.g , DRAM), and/or from time-to-time stored in storage device 150. 
The update is signed (by calculating its MAC) and the signature is the mechanism to ensure that the downloaded content inside the memory is authentic. To perform a check of the signature, all of the data is downloaded, the data is measured against the internal application secret key, and then the final signature is compared with the received signature.[00112] FUG, 5 shows a computing system including an application controller 503 that securely reads data from a boot device 507 and/or a system memory 505 , according to one embodiment. Application controller 503 is, for example, a field-programmable gate array (FPGA) or a graphics processing unit (GPU). In one example, application controller 503 is computer 131 of FIG. 2.[00113] A security device 509 is used to monitor and secure communications within the computing system. For example, security device 509 determines whether data read from a memory is valid or is deemed corrupted. Security device 509 is for example an FPGA.[00114] In one embodiment, an artificial neural network (ANN) is implemented using the computing system of FIG, 5. In this implementation, a main application controller 501 is the main processing device for the system. Main application controller 501 receives sensor inputs from sensors of the vehicle (e.g., vehicle 111 of FIG. 1) such as L!DAR, braking, camera, and actuator output such as acceleration, braking, engine control, etc.[00115] The sensor inputs can be generated, for example, using a camera sensing visible lights and/or infrared lights, or a LIDAR, RADAR, or sonar system, in the form of an image or a video capturing an item, such as an event or an object. The sensor inputs can include data representing the image or video, and/or input data extracted or derived from the image or video. Actuators (e.g., to control braking, engine, steering, etc.) are used to implement control actions that are determined from the ANN based on sensor inputs.[00116] In one embodiment, application controller 503 implements the ANN.Application controller 503 is coupled to system memory 505 (e.g., DRAM system memory) and boot device 507 (e.g., boot device 507 contains firmware needed to run the application controller 503). Security device 509 includes a safety hypervisor and a safety controller. In one example, the safety controller is implemented in an FPGA.In other examples, the safety controller can be implemented in other types of controllers. In one embodiment, security device 509 is an external device that can receive the data (e.g., data read from memory), an address used to read the data, and a digest used to do a digest comparison as described herein.[00117] Code for the ANN is stored in the system memory 505. The code includes, for example, inputs p1 , p2, etc. and outputs a1 , a2, etc. For example, the ANN can have multiple layers. The inputs, such as sensor data, are transmitted to layer one. The input data is processed in layer one and then sent for further processing. For example, the data can be sent to the next layer, to another node within the current layer, to a previous layer, or back to the same node for further processing.[00118] In various embodiments, as described further below, the ANN is executed from the system memory 505. The ANN is moved to a code area of system memory 505 (e.g., DRAM). In one example, each page of the DRAM contains a portion of the ANN (e.g., one or more rows of matrices of the ANN)[00119] F!G. 6 shows an example of a boot phase for the computing system of F!G. 
S where digest data (e.g., Digest 1 , Digest 2, ... , Digest n) is stored in security device 509, according to one embodiment !n one embodiment, the digest data corresponds to the first digest generated for data to be stored, as discussed above. The digest data includes a digest generated for each matrix row (e.g., Matrix Row 1 , 2, ... n) stored in memory. The first digests are generated for data that is to be stored. In one embodiment, the first digests are then stored in association with the stored data (e.g., stored in boot device 507).[00120] In one embodiment, the boot device 507 contains the layers of the ANN matrix and the associated digests stored in multiple rows. There could be one row for each matrix layer, multiple rows per layer, or multiple layers per row.[00121] The boot device 507 also contains a block digest 511. The block digest 511 , for example, is a mechanism that encompasses all the individual digests (e.g., Digest 1 , Digest 2, ... , Digest n) and can be used to determine if there is any data corruption in a block of data (e.g., many rows of data stored in the array in boot device 507).[00122] In one embodiment, there are two levels of checks: the individual digests at the layer level and the block digest 511 summary check at the device level. During system boot, the block digest 511 is first checked to make sure the boot device content is correct. If so, the matrix row digests are copied to a safety hypervisor executing on security device 509 as a set of stored digests (as illustrated). For example, the first digests stored in boot device 507 are copied during the boot phase to security device 509 for later use during runtime, as discussed below.[00123] In one embodiment, the block digest 511 is generating using a hash function by using some or all matrix rows of data for an ANN as inputs to the hash function. Block digest 111 can also be sent, for example, by server 101 of FIG. 1.[00124] In addition, the matrix rows are copied from boot device 507 to a code area (see FIG. 7 below) of system memory 505. The matrix rows will be used during execution of the ANN from the system memory 505.[00125] FIG. 7 shows an example of runtime operation for the computing system of FIG. 6, according to one embodiment. The system memory 505 contains two copies of the matrix rows, one stored in a code area (e.g., Code Area, as illustrated) and one stored in a runtime area (e.g., Runtime Area, as illustrated). At runtime, the matrix rows are copied from the code area to the runtime area, for example, one-by-one to build the ANN in system memory. The same row of data is also copied to the security device 509.[00126] When the security device 509 receives each row, the security device 509 calculates a digest and compares it with the stored digest previously loaded (e.g., during the boot phase above) !f the calculated digest and stored digest do not match, then a hypervisor executing on security device 509 flags the error for application controller 503. The security device 509 and hypervisor can store the digests in either volatile memory or non-volatile memory such as local RAM, NOR flash, NAND flash, or 3D XPoint memory.[06127] In one embodiment, security device 509 includes a cryptographic engine that generates the second digest as each row is received. Security device 509 performs a comparison of the stored first digest to the generated second digest for each row being copied.[06128] FIG. 8 shows an example of a boot phase for the computing system of FIG. 
5 where digest data is stored in system memory 505, according to one embodiment.The operation of the system of FIG. 8 is similar to that as discussed above for FIG. 6.In this embodiment, the first digests (Digest 1 , 2, ... , n) are copied from boot device 507 and stored in the code area of system memory 505.[66129] In one embodiment, the security device 509 copies each generated digest back to the runtime area of the system memory to enable continuous checking of data validity. In one embodiment, security device 509 stores the first digest values in volatile memory.[66136] In one embodiment, the application controller 503 handies communications regarding the correctness of the data in the matrices for the ANN. In this embodiment, the first digests are stored in the code area of system memory 505, instead of storing the first digests during the boot phase in security device 509 as for FIG. 6. [00131] FUG. 9 shows an example of runtime operation for the computing system of FIG. 8, according to one embodiment. When the system is executing the ANN in runtime mode, each matrix row is moved one-by-one from the code area to the runtime area of system memory 505. Each corresponding matrix row is also moved to security device 509. In addition, the corresponding digests are moved from the code area of system memory 505 to security device 509.[00132] As the security device 509 receives each row and its associated stored first digest, a new expected (second) digest is calculated, for example, as was described above. The newly-calculated second digest is compared to the received first digest, as was described above. In one example, based on this comparison, a warning signal is sent to application controller 503 if there is a mismatch.[00133] In one embodiment, the first digests (generated prior to storing data in memory) are stored at the factory level and/or during a secure update (e.g., an OTA update of firmware and/or data). For example, the first digests can be sent to vehicle 111 by server 101 of FIG. 1[00134] FIG. 10 shows a method for determining whether data read from a memory is corrupted, according to one embodiment. For example, the method of FIG. 10 can be implemented in one of the computing systems of FIGs. 2-5. The method of FIG. 10 includes, at block 1010, generating a first digest for first data (e.g., a page of data for an ANN) to be stored in a memory. The first digest is generated based on a hash function that uses at least one input including the first data to be stored. In one example, the inputs to the hash function include the first data and the address at which the first data will be stored.[00135] At block 1020, the first data is stored in the memory. In one example, the first data is stored in boot device 507 or system memory 505.[00136] At block 1030, the first data is read from the memory. For example, application controller 503 reads a row of data from a code area of system memory 505 and copies the row to a runtime area of system memory 505.[00137] At block 1040, a second digest is generated based on the read data. For example, the second digest is calculated by security device 509 after receiving a copy of a row of data being copied to the runtime area of system memory 505.[00138] At block 1050, the first digest and the second digest are compared. For example, the security device 509 stores the first digest in memory from a boot phase.At a later time during runtime, the security device 509 generates the second digest for comparison to the stored first digest. 
[00139] At block 1060, based on the comparison of the first digest to the second digest, a determination is made whether the read data is corrupted. For example, the security device 509 makes the determination whether the read data is corrupted. In one example, the security device 509 sends a signal to application controller 503 indicating that the read data is invalid.[00140] Various other embodiments are now described below. In one embodiment, a method implemented in at least one computing device comprises: generating, based on a hash function using at least one input including first data, a first digest;storing the first data in a memory; reading the first data from the memory; generating, based on the read data, a second digest; comparing the first digest and the second digest; and determining, based on comparing the first digest and the second digest, whether the read data is corrupted.[00141] In one embodiment, the at least one input used by the hash function further includes at least one of: an address at which the first data is stored in the memory; or metadata associated with the first data.[00142] In one embodiment, the memory is a boot device of a controller, and the method further comprises copying the read data and the first digest to a system memory of the controller.[00143] In one embodiment, reading the first data from the memory comprises reading the first data by a first computing device, and the method further comprises sending the read data to a second computing device, wherein comparing the first digest and the second digest is performed by the second computing device.[00144] In one embodiment, the method further comprises, in response todetermining that the read data is corrupted, performing at least one action.[00145] In one embodiment, the first data is stored in the memory by a controller, and the at least one action comprises at least one of: sending a signal to the controller that indicates the first data is corrupted; re-reading the first data from the memory;terminating a process executing on the controller; or containing data identified as being corrupted.[00146] In one embodiment, the memory is a boot device, and the method further comprises copying, by a controller, a plurality of rows of data from the boot device to a system memory of the controller, wherein the rows include a first row and a second row, the first data is stored in the first row, and comparing the first digest and the second digest is performed prior to copying the second row.[00147] In one embodiment, the memory is a code area of a system memory of a first computing device, and the method further comprises copying a plurality of rows of data from the code area to a runtime area of the system memory, wherein the rows include a first row and a second row, the first data is stored in the first row, and comparing the first digest and the second digest is performed prior to copying the second row.[00148] In one embodiment, the method further comprises sending the first digest to a second computing device, and generating the second digest is performed by the second computing device.[00149] In one embodiment, the first computing device is a field-programmable gate array (FPGA), and the second computing device is a an FPGA, a controller, or a computing device executing a hypervisor.[00150] In one embodiment, the method further comprises: storing the first digest in the memory as being associated with the stored first data; generating a third digest for a block of data stored in the memory, wherein 
the block of data includes a plurality of rows of data, and the rows include a first row storing the first data; and determining, using the third digest, whether the block of data is corrupted[00151] In one embodiment, storing the first data comprises writing, by a controller, the first data to a volatile memory or a non-volatile memory.[00152] In one embodiment, the first data is stored in a first row of a plurality of rows stored in the memory, and the plurality of rows corresponds to matrices of an artificial neural network.[00153] In one embodiment, storing the first data comprises writing the first data by a controller, and the method further comprises, after reading the first data, storing the read data in a system memory of the controller.[00154] In one embodiment, storing the first data comprises writing the first data by a controller, and the memory is a system memory of the controller.[00155] In one embodiment, storing the first data comprises storing the first data by a first computing device, and the method further comprises sending the first digest to a second computing device, wherein comparing the first digest and the second digest is performed by the second computing device.[00156] In one embodiment, storing the first data comprises storing the first data in a first row of a plurality of rows stored in the memory, and the method further comprises generating a third digest (e.g., block digest 511 ) for a block of data stored in the memory, wherein the block includes the plurality of rows, and wherein the third digest is generated using a hash function with at least one input including at least one of: data stored in the plurality of rows; or a plurality of respective digests (e.g., Digests 1 , 2, ... , n of FIG. 8), wherein each respective digest corresponds to a digest generated for a respective row of the plurality of rows.[00157] In one embodiment, a system comprises: at least one processor; and memory containing instructions configured to instruct the at least one processor to: generate, based on a hash function using data as an input, a first digest; store the data in a first memory; read the data from the first memory; generate, based on the read data, a second digest; and determine, based on a comparison of the first digest and the second digest, whether the read data is corrupted.[60158] In one embodiment, the first memory comprises a system memory, or a memory of a boot device; the at least one processor comprises a controller, a field- programmable gate array, or a computing device executing a hypervisor; and the method further comprises storing the first digest in at least one of the first memory or a second memory.[66159] In one embodiment, a non-transitory computer storage medium stores instructions which, when executed by at least one processor, cause the at least one processor to: generate a first digest using a hash function, wherein the hash function uses inputs including a page of data, and a first address at which the page is to be stored after generating the first digest; store the page at the first address in a memory; read the page from the first address of the memory; generate, based on the read page and the first address, a second digest; and determine, based on a comparison of the first digest and the second digest, whether the read page is corrupted.[66166] In one example, a non-transitory computer storage medium can be used to store instructions of the firmware 127, or firmware for application controller 152. 
When the instructions are executed by computer 131 , or the application controller 152, the instructions cause the respective computer 131 or application controller 152 to perform any of the methods discussed above.Variations of Determining Data Validity[00161] Various additional non-limiting embodiments are now described below. In one embodiment, cryptographic hashing is used to ensure correct communication between a controller and a memory device. The controller detects when there is an error in a computing system, identifies the type of error, and takes counter-measures to prevent propagating erroneous data.[00162] In one embodiment, the computing system identifies and intercept errors. If there is an error correction, the application or other controller is informed about the presence of the errors to prevent future issues or propagating the errors. In one embodiment, the computing system has the capability to disable internal error correction and have an external controller fix errors and write back corrected data to DRAM. In one embodiment, an external system measures error rates and determines if ECC is correcting errors, and when the corrections are occurring. In one example, DRAM has Error Correction Code (ECC) that can correct one error on a page. The error correction is performed internally and is not communicated external to the DRAM.[00163] In one embodiment, a mechanism is used to inform an external controller that the main application is waiting for data that contains errors, or that the main application is evaluating data that may contain errors (e.g., because the number of errors exceeds the ability of ECC to correct the errors). If there are too many errors for ECC to correct, ECC may insert more errors into the data.[00164] If the number of errors exceeds the ECC correction capability, the system can add more errors by trying to correct bits that are already correct. For example, a ECC implementation might detect two bit errors and correct one bit error. If there are three errors in one page, the first two errors will not be detected. The third error will be detected. The ECC algorithm will, for example, try to correct the third error and will introduce a fourth error with the attempted correction. Various embodiments as described herein provide a solution to this problem[0016SJ In one embodiment, when the internal ECC is operating, the hypervisor receives a notification when a correction is performed. This permits evaluating the probability that another error is occurring, or permits calculating a method to fix the internal error.[00166] In one embodiment, if internal ECC is disabled, then the hypervisor can direct corrective action. The external controller can take counter measures to re-write the data or erase the block in the case of a non-voiatiie memory and replace it with the correct data.[00167] In one embodiment, in a Real Time Operating (RTO) system, instructions are executed in place. The instruction pointer is jumping from one location to another without time to implement on-the-fly corrections. Various embodiments herein can provide a method of applying corrections as needed for artificial intelligence systems.[00168] For example, in an autonomous vehicle application, map errors need to be communicated promptly. 
If the vehicle’s sensors are detecting, for example, that there is a cat on the road, but for an unknown reason the internal state machine is indicating it is something else like a human, the safety controller can advise the main controller that there is an algorithm error, and the algorithm can be re-run, or another method used to identify the obstacle.[00169] Now discussing another example regarding digest comparison, the address at which data is to be stored is concatenated with the data to be stored and a hash function is run on this concatenation. In this example, if the data is a 4K byte array and the address is, for example, 256, the 4K byte array and address (256) would both be stored. The 4K byte of data and address would then be concatenated and the hash function digest calculated. The hash digest would be stored, for example, in a spare area of memory. When the data is later read back, the digest is read with the data.The hash digest is recalculated with the data read from storage, and the address and the calculated digest are compared with the digest read from storage. If both digests match, the data and address are valid. If not, there is deemed to be an error.[00176] In one example, in a memory device a DRAM implementation uses several levels of address decoding starting with the row address as a first level address decode. The row address decode is then combined with a second level address decode (column address decode). The resulting address is then sent to the high voltage address decode for accessing the memory array. This address is also sent along with a page of data (and metadata, if present) that is to be written to acryptographic algorithm to calculate the hash digest. The page of data and metadata are written to the memory array, and the calculated hash digest is written to a spare area in the memory for that page. In some cases, there would be one digest per row.In other cases, since there can be multiple pages per row there can be up to one digest per page.[00171] In various embodiments, the metadata above can include data such as ECC bits. ECC is useful for correcting single-bit errors, but is typically not useful for detecting multi-bit errors. The hash digest in the present embodiments significantly increases the ability of a system to detect multi-bit errors.[00172] In one embodiment, if ECC data is included, ECC corrections are made to data read from memory before the hash digest is calculated. This ensures that any additional errors introduced due to having more bit-errors than the ECC system can properly handle (due to capability limitations) are detected by the digest.[00173] In one embodiment, the use of the hash digest comparison also provides protection against any errors introduced on the data bus. Since the main controller calculates a digest based on the address it put on the bus and the data it received back, any error introduced at any point in the data retrieval or transmission will change the calculated digest, which results in detection of the error.[00174] In this description, various functions and operations may be described as being performed by or caused by computer instructions to simplify description.However, those skilled in the art will recognize what is meant by such expressions is that the functions result from execution of the computer instructions by one or more controllers or processors, such as a microprocessor. 
Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as using Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.[00175] While some embodiments can be implemented in fully-functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.[00176] At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor or microcontroller, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.[60177] Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as“computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.[66178] A tangible, non-transitory computer storage medium can be used to store software and data which, when executed by a data processing system, causes the system to perform various methods. The executable software and data may be stored in various places including for example ROM, volatile RAM, non-vo!ati!e memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instance of time.[00179] Examples of computer-readable storage media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media (e.g., Compact Disk Read-Only Memory (CD ROM), Digital Versatile Disks (DVDs), etc.), among others. 
The instructions may be embodied in a transitory medium, such as electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. A transitory medium is typically used to transmit instructions, but not viewed as capable of storing the instructions.[00180] In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.[00181] Although some of the drawings illustrate a number of operations in a particular order, operations that are not order dependent may be reordered and other operations may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art and so do not present an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.[00182] The above description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.[00183] In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
An apparatus for vehicle operator authentication operations may include a plurality of memory devices coupled to a processor. The processor may perform an authentication operation using a determined driving mode associated with an authorized operator of a vehicle in which the device is deployed and information received from a global positioning satellite and a base station, to determine whether a current operator of the vehicle is the authorized operator of the vehicle and to determine whether the vehicle has experienced a vehicle event. The processor may also reallocate computing resources between a first memory device and a second memory device in response to a determination that the vehicle has experienced the vehicle event, and performing a subsequent authentication operation using the reallocated computing resource using the determined driving mode and the information received from the global positioning satellite and the base station to determine whether the current operator of the vehicle is the authorized operator of the vehicle. |
CLAIMS 1. An apparatus for vehicle operator authentication operation comprising:a first memory device (123, 223) comprising a first type of media (124, 224);a second memory device (125, 225) comprising a second type of media (126, 226); andA processor (122, 222) coupled to the first memory device (123, 223) and the second memory device (125, 225), wherein the processor (122, 222) is configured to:An authentication operation is performed to identify the vehicle (541) using the determined driving pattern associated with an authorized operator of the vehicle (541) in which the device is deployed and information received from global positioning satellites (547) and base stations (543) ) is the authorized operator of the vehicle (541);determining whether the vehicle (541) has experienced a vehicle event;reallocating computing resources between said first memory device (123, 223) and said second memory device (125, 225) in response to said determination that said vehicle (541) has experienced said vehicle event; as well asUsing said reallocated computing resources to perform a subsequent authentication operation using said determined driving pattern and said information received from said global positioning satellite (547) and said base station (543) to determine the identity of said vehicle (541) whether the current operator is the authorized operator of the vehicle.2. The apparatus of claim 1, wherein the processor is further configured to store information between the first memory device and the second memory device based at least in part on the determined environmental characteristic in which the vehicle is operating. Reallocate the computing resources between.3. The apparatus of claim 1, wherein the processor is further configured to use the reallocated computing resources to perform at least one of the authentication operation or the subsequent authentication operation based on internal vehicle sensor (113) data one or both.4. The apparatus of any one of claims 1 to 3, wherein the vehicle event comprises the vehicle being stationary for greater than a threshold period of time, starting, or detecting a change in captured internal vehicle sensor (113) data at least one of , or any combination thereof.5. The apparatus of any one of claims 1-3, wherein the processor is configured to execute one or more sets of machine learning instructions to determine the authorized operation with the vehicle over time or associated with the driving mode.6. The apparatus of any one of claims 1-3, wherein the processor is configured to:transmitting a notification to the authorized operator of the vehicle, deactivating the vehicle, or both in response to a determination that the current operator of the vehicle is not the authorized operator of the vehicle operation, orResponsive to a determination that the current operator of the vehicle is the authorized operator of the vehicle, the vehicle is permitted to continue operating.7. The apparatus of any one of claims 1-3, wherein the processor is configured to determine before reallocating the computing resources between the first memory device and the second memory device characteristics of the first memory device and the second memory device, and wherein the determined characteristics of the first memory device and the second memory device comprise the first memory device and the second memory device bandwidth, memory access time, latency, memory cell density, or any combination thereof.8. 
The apparatus of any one of claims 1-3, wherein the processor is configured to reallocate the computing resources between the first memory device and the second memory device such that Computing resources greater than a threshold amount exhibiting higher bandwidth, faster memory access time, or lower latency than other computing resources available to the vehicle may be used to perform the subsequent authentication operation.9. A method (650) for vehicle operator authentication operations, comprising:executing one or more machine learning instructions to determine a driving pattern associated with an authorized operator of the vehicle (541);Perform routine authentication operations using the determined driving pattern and information received from global positioning satellites (547), base stations (543) and interior vehicle sensors (113) to determine if the current operator of the vehicle (541) is the said authorized operator of said vehicle (541);determining whether the vehicle (541) has experienced a vehicle event;reallocating computing resources among memory devices (123, 223, 125, 225, 227) associated with said vehicle (541) in response to said determination that said vehicle (541) has experienced said vehicle event; as well asUsing said reallocated computing resources and in response to determining that said vehicle (541) has experienced a subsequent vehicle event, using said determined driving pattern, said information received from said global positioning satellite (547), from said The information received by the base station (543) and the internal vehicle sensors (113) perform subsequent authentication operations to determine whether the current operator of the vehicle (541) is the authorized operator of the vehicle (541) By.10. The method of claim 9, further comprising performing the subsequent authentication operation for each subsequent determined vehicle event while the vehicle is operating using the reallocated computing resources.11. The method of any one of claims 9 to 10, further comprising reallocating the computing resources among the memory devices associated with the vehicle such that a representation ratio is available for the vehicle's Other Computing Resources Computing resources greater than a threshold amount of higher bandwidth, faster memory access time, or lower latency, or any combination thereof, may be used to perform said subsequent authentication operations.12. The method of any one of claims 9 to 10, wherein the vehicle event includes at least one of the vehicle being stationary for greater than a threshold period of time, starting, or detecting a change in captured internal vehicle sensor data, or any combination.13. The method of any one of claims 9-10, further comprising:determining the characteristics of the environment in which the vehicle is operating or performing a traffic sequence prediction operation, or both; andThe computing resources are reallocated among the memory devices based at least in part on the determined environmental characteristic or the traffic sequence prediction operation, or both.14. 
A system for vehicle operator authentication operations comprising:an autonomous vehicle (541) comprising:a first memory device (123, 223) comprising a first type of media (124, 224);a second memory device (125, 225) comprising a second type of media (126, 226); anda processor (122, 222) coupled to the first memory device (123, 223) and the second memory device (125, 225), wherein the processor (122, 222) is to:executing instructions to determine a driving pattern associated with an authorized operator of said autonomous vehicle (541);Perform an authentication operation using the determined driving pattern and information received from global positioning satellites (547), base stations (543) and at least one internal sensor (113) associated with the autonomous vehicle (541) to determine the autonomous whether the current operator of the vehicle (541) is said authorized operator of said autonomous vehicle (541);determining whether the ego vehicle (541) has experienced a vehicle event;reallocating computing resources between said first memory device (123, 223) and said second memory device (125, 225) in response to said determination that said ego vehicle (541) has experienced said vehicle event ;as well asUsing said reallocated computing resources to perform a subsequent authentication operation using said determined driving pattern and said information received from said global positioning satellite (547) and said base station (543) to determine said ego vehicle (541) Whether the current operator of is the authorized operator of the autonomous vehicle (541).15. The system of claim 14 , wherein said processor is to transmit information stored in said first memory device to said autonomous vehicle in response to said determination that said autonomous vehicle has experienced said vehicle event. A second memory device to increase the amount of available memory resources associated with the first memory device.16. The system of claim 14 , wherein the processor is to respond to determining that the first memory device exhibits at least one of higher bandwidth or faster memory access time than the second memory device or both or said second memory device exhibits at least one or both of higher bandwidth or faster memory access time than said first memory device and between said first memory device and said The computing resources are reallocated among the second memory devices.17. The system of claim 14, wherein the processor is configured to:In response to a determination that the current operator of the autonomous vehicle is not the authorized operator of the vehicle, transmitting a notification to the authorized operator of the autonomous vehicle, deactivating the autonomous vehicle, or performing both operations; orResponsive to a determination that the current operator of the autonomous vehicle is the authorized operator of the autonomous vehicle, the autonomous vehicle is allowed to continue operating.18. The system of any one of claims 14-17, wherein the processor is configured to execute one or more sets of machine learning instructions to determine the authorized relationship with the ego vehicle over time. The driving mode associated with the operator.19. The system of any one of claims 14 to 17, wherein the vehicle event comprises at least one of the vehicle being stationary for greater than a threshold period of time, starting, or detecting a change in captured internal vehicle sensor data , or any combination thereof.20. 
The system of any one of claims 14-17, wherein the processor is further configured to store in the first memory device based at least in part on a determined environmental characteristic in which the vehicle is operating. Reallocating the computing resource with the second memory device. |
Vehicle operator certification operationtechnical fieldThe present disclosure relates generally to semiconductor memories and methods, and more particularly, to devices, systems and methods for vehicle operator authentication operations.Background techniqueMemory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory, including volatile and non-volatile memory. Volatile memory may require power to maintain its data (eg, host data, error data, etc.), and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), Synchronous Dynamic Random Access Memory (SDRAM) and Thyristor Random Access Memory (TRAM), etc. Non-volatile memory can provide persistent data by retaining stored data when power is not applied, and can include NAND flash memory, NOR flash memory, and resistive variable memory, such as phase change random access memory (PCRAM), Resistive random access memory (RRAM) and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), etc.A memory device can be coupled to a host (eg, a host computing device) to store data, commands, and/or instructions for use by the host in operating the computer or electronic system. For example, data, commands and/or instructions may be transferred between a host and a memory device during operation of a computing or other electronic system.Contents of the inventionAn aspect of the present disclosure provides an apparatus for vehicle operator authentication operations comprising: a first memory device including a first type of media; a second memory device including a second type of media; and a processor , coupled to the first memory device and the second memory device, wherein the processor is configured to: use the determined driving pattern associated with an authorized operator of the vehicle in which the apparatus is deployed and from The information received by the global positioning satellite and the base station performs an authentication operation to determine whether the current operator of the vehicle is the authorized operator of the vehicle; determines whether the vehicle has experienced a vehicle event; in response to the vehicle having experienced said determination of said vehicle event reallocates computing resources between said first memory device and said second memory device; and using said reallocated computing resources uses said determined driving pattern and from said The global positioning satellite and the information received by the base station perform a subsequent authentication operation to determine whether the current operator of the vehicle is the authorized operator of the vehicle.Another aspect of the present disclosure provides a method for operating a vehicle operator authentication, comprising: executing one or more machine learning instructions to determine a driving pattern associated with an authorized operator of the vehicle; using the determined driving patterns and information received from global positioning satellites, base stations, and interior vehicle sensors perform routine authentication operations to determine whether the current operator of the vehicle is the authorized operator of the vehicle; determine whether the vehicle has experienced a vehicle event; reallocating computing resources among memory devices associated with the vehicle in response to the determination that the vehicle has 
experienced the vehicle event; and using the reallocated computing resources in response to determining the said vehicle has experienced a subsequent vehicle event, a subsequent authentication operation is performed using said determined driving pattern, said information received from said global positioning satellite, said information received from said base station, and said internal vehicle sensors to determine said whether the current operator of the vehicle is the authorized operator of the vehicle.Another aspect of the present disclosure provides a system for vehicle operator authentication operations comprising: an autonomous vehicle comprising: a first memory device comprising a first type of media; a second memory device comprising a second type of media; and a processor coupled to the first memory device and the second memory device, wherein the processor will: execute instructions to determine a driving status associated with an authorized operator of the autonomous vehicle mode; performing an authentication operation using said determined driving mode and information received from global positioning satellites, a base station, and at least one internal sensor associated with said autonomous vehicle to determine whether the current operator of said vehicle is the vehicle's said authorized operator; determining whether said autonomous vehicle has experienced a vehicle event; responsive to said determination that said autonomous vehicle has experienced said vehicle event, between said first memory device and said second memory device reallocating computing resources between; and using the reallocated computing resources to perform a subsequent authentication operation using the determined driving pattern and the information received from the global positioning satellite and the base station to determine all of the ego vehicle's whether the current operator is the authorized operator of the autonomous vehicle.Description of drawingsFigure 1 is a functional block diagram in the form of an apparatus including a host and a memory device, according to several embodiments of the present disclosure.2 is another functional block diagram in the form of a computing system including a device including a host and a memory system, according to several embodiments of the present disclosure.Figure 3 is a functional block diagram in the form of an apparatus including a memory system, according to several embodiments of the present disclosure.Figure 4 is another functional block diagram in the form of an apparatus including a memory system in accordance with several embodiments of the present disclosure.5 is a diagram illustrating an autonomous vehicle including an electronic control unit, according to several embodiments of the present disclosure.FIG. 6 is a flowchart representing an example method corresponding to a vehicle operator authentication operation in accordance with several embodiments of the present disclosure.Detailed waysAn apparatus for vehicle operator authentication operations may include a plurality of memory devices coupled to a processor. The processor may perform authentication operations using determined driving patterns associated with authorized operators of the vehicle in which the device is deployed and information received from global positioning satellites and base stations to determine whether the current operator of the vehicle is an authorized operator of the vehicle or and determine whether the vehicle has experienced a vehicle event. 
The processor may also reallocate computing resources between the first memory device and the second memory device in response to a determination that the vehicle has experienced a vehicle event, and use the reallocated computing resources to use the determined driving pattern and receive information from global positioning satellites and base stations. The information is used to perform subsequent authentication operations to determine whether the current operator of the vehicle is an authorized operator of the vehicle.Along with autonomous vehicles (e.g., vehicles such as automobiles, trucks, buses, motorcycles, mopeds, all-terrain vehicles, military vehicles, tanks, etc.), where at least a portion of the decision-making and/or vehicle control operations are controlled by computer hardware and/or or software control, rather than a human operator) are becoming increasingly popular, and questions regarding the safety of such vehicles must be addressed. While there are various approaches to mitigating the safety issues associated with autonomous vehicles and thus improving their safety, the amount and type of computing resources (e.g., computer hardware and software) that control the safety and driver authentication of autonomous vehicles are limited. Constraints make improvements in autonomous vehicle safety difficult.For example, some approaches rely on "simple" authentication, such as physical vehicle keys, passwords, vehicle operator input, global positioning data, or the like, to attempt to ensure that the operator of the vehicle is an authorized operator of the vehicle as a means of providing security to the vehicle. the sexual part. These and similar authentication paradigms can expose vehicles to cyberattacks that can allow rogue entities to unlock, start (eg, power up the vehicle), and in some scenarios steal or otherwise compromise the vehicle.Additionally, such approaches may not adequately address situations where the operator of the vehicle changes during operation of the vehicle. For example, such methods may not adequately address situations in which the driver and passengers attempt to change places in the vehicle while the vehicle is being operated, or in worse cases, when the driver and/or passengers are Other unfavorable situations where the vehicle is forced to change places in the vehicle when it is retracted. Such methods expose authorized operator vehicles to various legal risks. For example, if a vehicle is stolen, reoccupied, commissioned, or otherwise used without the vehicle's authorized operator's knowledge and/or consent, the authorized operator of the vehicle may be responsible for damage caused by use.In contrast, embodiments described herein provide multiple stages of authentication to ensure that the operator of the vehicle is an authorized operator of the vehicle. For example, embodiments herein may allow the use of any combination of Global Positioning Satellite (GPS) data, base station data, door sensor data, seat sensor data, steering wheel sensor data, and/or interior vehicle camera data, etc., over time Driving patterns of authorized vehicle users are determined. Once an authorized operator's driving pattern has been learned (eg, by using a machine learning algorithm), any combination of the aforementioned detected and/or collected information may be used to authenticate the operator of the vehicle. These authentication operations may be referred to herein as "routine authentication operations." 
For example, an authentication operation performed at startup of a vehicle and then performed periodically and/or at regular or scheduled intervals based on the amount of time elapsed since startup of the vehicle may be referred to as a "routine authentication operation."In some embodiments, subsequent authentication operations may be performed in response to detection of a vehicle event and/or subsequently thereafter. However, as compared to routine authentication operations, subsequent authentication operations as described herein may be performed specifically in response to detection of a vehicle event generally, or at least at least at different periodic intervals associated with the performance of routine authentication operations. A periodic interval is executed periodically after the occurrence of a vehicle event is detected. For example, a subsequent authentication operation may refer to an authentication operation performed at least upon startup of the vehicle and/or after an initial routine authentication operation performed on a different schedule than the routine authentication operation. As used herein, the term "vehicle event" generally refers to a situation in which a vehicle stops driving at least temporarily or an event in which the vehicle detects some change in the operator of the vehicle. Non-limiting examples of a "vehicle event" may include a vehicle stopped at a stop sign, a tragedy signal, or other locations where a vehicle may stop moving or driving but remain powered on (eg, a traffic jam).However, examples are not so limited, and a "vehicle event" may include detecting changes in data collected by door sensors, seat sensors, steering wheel sensors, and/or interior vehicle camera data. For example, a vehicle event may correspond to a door sensor having been triggered, a seat sensor state having changed (e.g., the amount of weight present on the seat has changed due to a vehicle operator sitting on the seat), An indication that the hand position and/or steering wheel sensors have detected a change in the tightness or looseness of the grip on the steering wheel, etc. Such events may correspond to potential violations of operator authentication (e.g., there may be a possibility that the operator of the vehicle may have changed) and/or may correspond to emergency handling operations (e.g., operations performed by the vehicle that may cause computing The resource is focused on the operation and may thus represent the determination of a time slot for monitoring by the vehicle operator.As described in greater detail herein, aspects of the present disclosure may allow efficient performance of authentication operations by purposefully reallocating computing resources available to autonomous vehicles so that the most efficient (e.g., fastest, most accurate, etc. ) computing resources to process the information used in the performance of authentication operations to allow the autonomous vehicle to operate in a safe manner.Some embodiments of the present disclosure allow the execution of applications to perform authentication operations to ensure that the operator of a vehicle (eg, an autonomous vehicle) is an authorized operator of the vehicle. As used herein, the term "application" generally refers to one or more computer programs, which may include computing instructions executable to cause a computing system to perform certain tasks, functions and/or activities. 
The amount of computing resources (eg, processing resources and/or memory resources) consumed in the execution of an application program may be measured in terms of a "workload." As used herein, the term "workload" generally refers to the aggregate computing resources consumed in the execution of an application program to perform a certain task, function and/or activity. During the course of executing an application, a number of sub-applications, subroutines, etc. may be executed by the computing system. The amount of computing resources consumed while executing an application (including sub-applications, subroutines, etc.) may be referred to as a workload. Some applications that may cause demanding workloads include applications that process data such as GPS data, satellite data, and/or sensor data in real time to perform the authentication operations described herein.As workloads become more demanding, particularly in view of improvements in broadband cellular network technology that may allow communications between vehicles operating on the road and/or between vehicles and base stations, associated with optimization of workload handling The problem of may become further exacerbated in autonomous vehicle deployments where physical space constraints may dictate the amount of processing resources and/or memory resources available to the autonomous vehicle.As broadband cellular network technology develops, higher resource demands may be placed on autonomous vehicles connected to broadband cellular networks. This is attributable to the increase in available bandwidth associated with broadband cellular networks (referred to herein as "the network" for brevity), which can in turn lead to higher download speeds, and thus increased bandwidth associated with devices connected to the network. connected data traffic. Such increased data traffic may further result in greater amounts of data being received, stored, and/or processed within autonomous vehicles connected to the network.In addition, the potential for increased data traffic involving autonomous vehicles connected to the network may allow increasingly complex applications (e.g., computing applications designed to cause a computing device to perform one or more specific functions or tasks) in autonomous vehicles. to execute. Execution of such applications can in turn generate demanding workloads that can strain computing resources and, more specifically, can strain computing resources allocated to such devices in some conventional approaches.To address the shortcomings of various approaches to quickly and accurately perform authentication operations involving vehicles, embodiments described herein may provide hardware circuitry (e.g., controllers, processors, etc.) Monitoring and/or determining characteristics of workloads executing in computing systems of autonomous vehicles while in different types of memory devices. Based on the monitored or determined characteristics of the workload, the hardware circuitry may write at least a portion of the workload to a different type of memory device. For example, if a workload is executed while data corresponding to the workload is stored in a volatile memory device, and if data corresponding to the workload is stored in a nonvolatile memory device, the hardware circuitry determines that the workload may be optimized Executed, the hardware circuitry may cause at least a portion of data corresponding to the workload to be written to the non-volatile memory device. 
Such dynamic determination of workload characteristics and subsequent allocation of workloads to memory devices containing different types of media may be particularly beneficial in mobile computing systems, especially where more and more processing resource intensive work is performed on mobile computing devices on load.Non-limiting examples of how a workload may be optimized may include optimizing bandwidth associated with a computing system, consumption of computing resources associated with a computing system, and/or speed at which a computing system executes a workload, among others. For example, if the computing system is deployed in an autonomous vehicle, the computing resources of the computing device may be strained when multiple demanding workloads are executing simultaneously. Accordingly, in order to optimize resource consumption and thus resource availability of the autonomous vehicle, the hardware circuitry may cause at least a portion of the data corresponding to one or more of the workloads to be written to a memory device characterized in that when executing The workload is faster memory access time than another memory device associated with the ego vehicle.Another non-limiting example in which a workload may be optimized may include optimizing execution of the workload by utilizing memory devices and/or media types that exhibit different memory capacities and bandwidth capacities. For example, memory devices exhibiting high capacity but low bandwidth (e.g., NAND memory devices) can be used to perform some types of workloads (or portions thereof), while memory devices exhibiting high bandwidth but low capacity (e.g., 3D stacked SDRAM memory devices) can be used to execute some types of workloads (or portions thereof). Embodiments herein may optimize the amount of time spent executing resource-intensive applications in a computing device or mobile computing device by utilizing the capacity of memory devices that exhibit high capacity but low bandwidth or high bandwidth but low capacity for different workloads, Processing resources and/or power. Embodiments are not so limited, however, and other examples of optimizing the execution of workloads according to the present disclosure are described in greater detail herein.As described in greater detail herein, embodiments may further optimize mobile computing systems by writing data associated with a workload to a memory device based on characteristics of the data (e.g., frequency of access to data involved in executing the workload) Execution of workloads in . The access frequency of data may refer to the amount of access (eg, reads, writes, etc.) involved in data when executing a workload. Reference may be made herein to the access frequency of data regarding "hot data" and "cold data". "Cold data" as used herein means that a particular memory object has not been accessed for a long duration relative to other memory objects read from the memory device. "Hot data" as used herein refers to a particular memory object that has been accessed frequently relative to other memory objects read from the memory device.For example, if certain data involved in executing a workload is determined to be "hot," such data may be written to a memory device that includes a type of media well suited to quickly accessing the data. 
A non-limiting example of a memory device described herein to which hot data may be written during execution of a workload is a volatile memory device, such as a DRAM device.In contrast, if certain data involved in executing a workload is determined to be "cold," such data may be written to a memory device containing a type of media well suited to storing infrequently accessed data. A non-limiting example of a memory device described herein to which cold data may be written during execution of a workload is a non-volatile memory device, such as a NAND flash device.In the following detailed description of the disclosure, reference is made to the accompanying drawings which form a part hereof, and which show by way of illustration the manner in which one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the disclosed embodiments, and it is to be understood that other embodiments may be utilized and that technological, electrical, and other modifications may be made without departing from the scope of the present disclosure. Structural changes.As used herein, a designator designation such as "N," "M," etc. specifically with respect to a reference number in the drawings indicates that the number of the particular feature so designated may be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a/an" and "the" can encompass both singular and plural referents unless the context clearly dictates otherwise. Additionally, references to "a plurality," "at least one," and "one or more" (e.g., a plurality of memory banks) may refer to one or more memory banks, while "a plurality" is intended to refer to more than one such memory bank thing.Furthermore, throughout this application the words "may" and "may" are used in a permissive sense (ie, may, can) rather than a mandatory sense (ie, must). The term "comprising" and its derivatives mean "including but not limited to". Depending on the context, the term "coupled/coupling" means physically connecting or accessing and moving (transmitting) commands and/or data, directly or indirectly. Depending on the context, the terms "data" and "data value" are used interchangeably herein and may have the same meaning.The drawings herein follow a numbering convention in which the first one or more digits correspond to the drawing number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar numerals. For example, 104 may represent element "04" in FIG. 1, and a similar element may be represented as 204 in FIG. Generally, a single element number may be used herein to refer to one or more similar elements or components. For example, a number of reference elements such as elements 544 - 1 through 544 -N (or 544 - 1 , . . . , 544-N in the alternative) may be generally referred to as 544 . As will be appreciated, elements shown in various embodiments herein may be added, exchanged, and/or removed in order to provide additional embodiments of the present disclosure. 
Additionally, the proportions and/or relative dimensions of elements provided in the drawings are intended to illustrate certain embodiments of the present disclosure and should not be viewed in a limiting sense.1 is a functional block diagram in the form of a computing system 100 including an apparatus including a host 102 and a memory device 104, according to several embodiments of the present disclosure. In some embodiments, host computer 102 and/or memory system 104 may be part of electronic control unit (ECU) 101 (eg, an electronic control unit of an autonomous vehicle). As will be appreciated, ECU 101 is an electronic component resident on board an autonomous vehicle that controls the performance of one or more specific functions. An autonomous vehicle may contain a large number of such ECUs 101 , and it will be appreciated that the ECU 101 shown in FIG. 1 may represent a single ECU 101 or an aggregate of ECUs (or portions thereof) residing on board the autonomous vehicle. As used herein, "device" may refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, one or more dies, one or more modules, one or more device or one or more systems. In some embodiments, computing system 100 may be part of an autonomous vehicle (eg, autonomous vehicle 541 shown in FIG. 5 herein). For example, computing system 100 may reside on an autonomous vehicle. In such embodiments, computing system 100 may control the operation of the ego vehicle by controlling, for example, acceleration, braking, steering, parking, etc. of the ego vehicle.As used herein, the term "resides on" means that something is physically located on a particular component. For example, resident of computing device 100 on an autonomous vehicle refers to a condition in which computing system 100 is physically coupled to or physically located within the autonomous vehicle. The term "residing on" may be used interchangeably herein with other terms such as "deployed on" or "located on".Memory system 104 may include a number of different memory devices 123, 125 (and/or 227 shown herein in FIG. 2 ), which may include one or more different media types 123, 125 (and/or herein 227) shown in Figure 2). The different memory devices 123, 125, and/or 227 may include one or more memory modules (eg, single inline memory modules, dual inline memory modules, etc.).Memory system 104 may include volatile memory and/or nonvolatile memory. In several embodiments, memory device 104 may comprise a multi-chip device. A multi-chip device may include a number of different memory devices 123, 125, and/or 227, which may include a number of different memory types and/or memory modules. For example, a memory system may include non-volatile or volatile memory on any type of module. As shown in FIG. 1 , computing system 100 may include a controller 120 , which may include a processor 122 . Each of the components (eg, ECU 101, host 102, controller 120, processor 122, and/or memory devices 123, 125) may be individually referred to herein as a "device."Memory system 104 may provide main memory for computing system 100 or may serve as additional memory and/or storage throughout computing system 100 . The memory system 104 may include one or more memory devices 123, 125, which may include volatile and/or non-volatile memory cells. For example, at least one of the memory devices 123, 125 may be a flash array having a NAND architecture. 
Furthermore, at least one of the memory devices 123, 125 may be a dynamic random access array of memory cells. Embodiments are not limited to a particular type of memory device. For example, memory system 104 may include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, and/or flash memory (eg, NAND and/or NOR flash memory devices), among others.However, embodiments are not limited thereto, and the memory system 104 may include other non-volatile memory devices 123, 125, such as non-volatile random access memory devices (eg, NVRAM, ReRAM, FeRAM, MRAM, PCM), such as "emerging" memory devices such as variable resistance (e.g., 3D cross-point (3D XP)) memory devices, memory devices comprising arrays of self-selectable memory (SSM) cells, memory devices operating according to the Computational Interconnect Standard (CXL), and the like, or its combination.Resistance variable memory devices may incorporate stackable cross-grid data access arrays to perform bit storage based on changes in bulk resistance. In addition, in contrast to many flash-based memories, resistive variable nonvolatile memory can perform write-in-place operations, where nonvolatile memory cells can be written to without pre-erasing the nonvolatile memory cells. for programming. In contrast to flash-based memory and resistance variable memory, self-select memory cells may include memory cells with a single chalcogenide material that acts as both the switch and the storage element of the memory cell.In some embodiments, the memory system 104 may be a Compute Express Link (CXL) compliant memory system (eg, the memory system may include a PCIe/CXL interface). CXL is a high-speed central processing unit (CPU)-to-device and CPU-to-memory interconnect designed to accelerate next-generation data center execution. CXL technology maintains memory coherency between CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost.CXL was designed as an industry open-standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs to support emerging applications such as artificial intelligence and machine learning. CXL technology builds on the Peripheral Component Interconnect Express (PCIe) infrastructure, which utilizes the PCIe physical and electrical interface to communicate between, for example, input/output (I/O) protocols, memory protocols (e.g., initially allowing hosts to share memory with accelerators) and High-level protocols are provided in the field of coherent interfaces. In some embodiments, CXL technology may include multiple I/O lanes configured to transmit multiple commands to circuitry external to controller 120 at a rate of approximately thirty-two (32) gigatransfers per second. Or transmitted from the circuitry, such as memory devices 123 , 125 , 227 and/or host 102 . In another embodiment, the CXL technology may include a Peripheral Component Interconnect Express (PCIe) 5.0 interface coupled to multiple I/O lanes, and the controller 120 will receive information related to the PCIe 5.0 interface according to the Compute Express Link memory system via the PCIe 5.0 interface. commands from at least one of the first memory device 123 or the second memory device 125 or both.As shown in FIG. 1, memory devices 123, 125 include different types of memory devices. 
For example, memory device 125 may be a non-volatile memory device, such as a resistance variable memory device, a memory device operating according to the CXL protocol, a 3D XP memory device, or a NAND memory device, etc., and memory device 123 may be volatile memory devices, such as DRAM devices, or vice versa. That is, memory devices 123,125 may include different media types 124,126. Embodiments are not so limited, however, and the memory devices 123, 125 may comprise any type of memory device provided at least two of the memory devices 123, 125 comprise different media types 124, 126. As used herein, "media type" generally refers to the type corresponding to the memory cell architecture of the memory device 123,125. For example, one of the media types 124, 126 may correspond to an array of memory cells including at least one capacitor and at least one transistor, while the other of the media types 124, 126 may include floating gate MOSFETs array. In some embodiments, at least one of the media types 124, 126 may include an array of resistance variable memory cells configured to perform based on a change in the bulk resistance associated with the resistance variable memory cell. bit storage.As illustrated in FIG. 1 , host 102 may be coupled to memory system 104 . In several embodiments, memory system 104 may be coupled to host 102 via one or more signals (eg, signal 103). In FIG. 1 , memory system 104 is coupled to host 102 via channel 103 , which may additionally be coupled to controller 120 and/or processor 122 of memory system 104 . Controller 120 and/or processor 122 are coupled to memory devices 123 , 125 via channels 105 , 107 . In some embodiments, each of memory devices 123, 125 is coupled to controller 120 and/or processor 122 by one or more respective channels 105, 107 such that each of memory devices 123, 125 can receive a corresponding A message, command, request, protocol, or other signaling of the type of memory device 123, 125 coupled to the controller 120 (e.g., a message, command, request, protocol, or other signaling conforming to the media type 124, 126 of the memory device 123, 125 signaling).The ECU 101 may further include a Radio Frequency Integrated Circuit (RFIC) 111 . As used herein, the term "RFIC" generally refers to an electrical integrated circuit operating in a frequency range suitable for wireless transmission. In some embodiments, RFIC 111 may facilitate the operation of an autonomous vehicle (e.g., autonomous vehicle 541 shown in FIG. 5 herein), a base station (e.g., base station 543 shown in FIG. communication between other autonomous vehicles operating on roads or streets.As shown in FIG. 1 , system 100 (eg, an autonomous vehicle) further includes internal sensors 113 . Interior sensors may include sensors associated with various interior components of the vehicle. Non-limiting examples of such sensors include door sensors (e.g., sensors to determine whether the doors of the vehicle are open or closed), steering wheel sensors (e.g., sensors to detect whether the operator of the vehicle is touching the steering wheel), seat sensors (e.g., sensors to determine whether a person is seated in a vehicle seat and, if so, determine, for example, the person's weight), interior cameras (e.g., to provide facial detection and recognition of the operator and/or passengers of the vehicle) sensor) and so on.In addition, the system 100 and/or the ECU 101 may further have various sensors, which are not shown in order to avoid confusing the drawings. 
For example, ECU 101 may include inertial sensors, radar sensors, lidar sensors, etc. that may be used to aid in the navigation and operation of the autonomous vehicle.Host 102 may be a host system such as a personal notebook computer, desktop computer, digital camera, smartphone, memory card reader, and/or Internet of Things (IoT) enabled device, among various other types of hosts. However, in some embodiments, host computer 102 includes one or more central processors that execute instructions to control the operation of the autonomous vehicle.Those of ordinary skill in the art will understand that "processor" may mean one or more processors, such as a parallel processing system, several coprocessors, and the like. System 100 may comprise separate integrated circuits, or one or more of host 102, memory system 104, control circuitry 120, and/or memory devices 123, 125 may be on the same integrated circuit. Computing system 100 may be, for example, a server system and/or a high performance computing (HPC) system and/or a portion thereof. While the example shown in FIG. 1 shows a system with a Von Neumann architecture, embodiments of the present disclosure may be implemented in non-Von Neumann architectures, which may not include the usual One or more components (eg, CPU, ALU, etc.) associated with a von Neumann architecture.Memory system 104 may include a controller 120 , which may include a processor 122 . Processor 122 may be provided in the form of an integrated circuit, such as an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), Reduced Instruction Set Computing (RISC), Advanced RISC A machine, system-on-a-chip, or other combination of hardware and/or circuitry. In some embodiments, processor 122 may include one or more processors (eg, processing devices, co-processors, etc.).Processor 122 may perform operations to monitor and/or characterize workloads running on memory system 104 . The characteristics may include, for example, bandwidth consumption, memory resource consumption, access frequency (e.g., whether data associated with one or more of the workloads is hot or cold), and/or power consumption in execution of the workloads And so on. The processor 122 may control the writing of at least a portion of the data for the authentication operation of the autonomous vehicle to the different memory devices 123, 125 in order to optimize the execution of the workload corresponding to the authentication operation, and/or to balance the execution of the workload corresponding to the different memory devices. 123, 125 workload of authentication operations for media management purposes and so on.In a non-limiting example, an apparatus (eg, computing system 100 ) may include a processor 122 , a first memory device 123 including a first type of media 124 , and a second memory device 125 including a second type of media 126 . Processor 122, first memory device 123, second memory device 125, and processor 122 may in some embodiments reside on an autonomous vehicle (eg, autonomous vehicle 541 shown in FIG. 5 herein). 
The processor 122 may be coupled to a first memory device 123 and a second memory device 125 .Processor 122 may perform authentication operations using determined driving patterns associated with authorized operators of the vehicle in which the device is deployed, information received from global positioning satellites (GPS), and information received from base stations to determine the vehicle's current operating whether the operator is an authorized operator of the vehicle. As used herein, a "base station" generally refers to a system that generates and receives electromagnetic radiation within a specific frequency range and that facilitates communications between a base station and a computing device (e.g., a mobile computing device such as a smartphone) within the base station's network coverage area. A device for the transfer of data or other information. Several non-limiting examples of frequency ranges that a base station can generate and receive may include 700MHz-2500MHz (in the case of a 4G base station) or 28GHz-39GHz (in the case of a 5G base station).If it is determined that the current operator of the vehicle is not an authorized operator of the vehicle, the processor 122 may transmit a notification to the authorized operator of the vehicle and/or disable the vehicle, among other actions to ensure the safety of the vehicle. On the other hand, if it is determined that the current operator of the vehicle is an authorized operator of the vehicle, the processor 122 may allow the vehicle to continue operating.Processor 122 may determine whether the vehicle has experienced a vehicle event, and reallocate computing resources between the first memory device and the second memory device in response to the determination that the vehicle has experienced a vehicle event. In some embodiments, the vehicle event may include at least one of the vehicle being stationary for greater than a threshold period of time, starting, and/or detecting a change in captured internal vehicle sensor data.In some embodiments, processor 122 may use reallocated computing resources to perform subsequent authentication operations using the determined driving pattern and information received from global positioning satellites and base stations to determine whether the current operator of the vehicle is authorized to operate the vehicle By. However, embodiments are not so limited, and in some embodiments, in addition to GPS and/or station information using reallocated computing resources, processor 122 may use internal vehicle sensor data to perform authentication operations or in subsequent authentication operations. at least one or both of theProcessor 122 may reallocate computing resources between the first memory device and the second memory device based at least in part on the determined environmental characteristics in which the vehicle is operating. As used herein, the term "environmental characteristics" generally refers to various conditions that a vehicle encounters while the vehicle is operating. 
Non-limiting examples of environmental characteristics may include weather conditions, traffic conditions, presence (or absence) of road construction (e.g., presence or absence of traffic barriers, road construction signage, detours, etc.), and/or presence (or absence) of traffic signs presence) (eg, the vehicle is operating on a highway that contains relatively few traffic signs rather than in a city that contains relatively many traffic signs), and other conditions that the vehicle may encounter during operation.Continuing with this example embodiment, processor 122 may execute one or more sets of machine learning instructions to determine driving patterns associated with authorized operators of the vehicle over time. Execution of machine learning operations may include executing instructions to use training data to make predictions and/or learn behavior over time to perform operations. For example, instructions may be executed while an authorized driver is operating the vehicle to predict and/or learn driving patterns, such as the general speed at which the operator operates the vehicle, the operator's lane changing behavior, braking behavior, and/or operating the preferred or common route followed by the reader, and so on. In some embodiments, prediction and/or learning of the driving behavior of an authorized operator of a vehicle over time may allow deviations from a learned driving pattern to be detected and, in some embodiments, may result in an application initiated indication to cause an authentication operation to be performed to determine whether the operator of the vehicle is an authorized operator of the vehicle.In some embodiments, the processor 122 may determine the characteristics of the first memory device 123 and the second memory device 125 prior to the reallocation of computing resources between the first memory device 123 and the second memory device 125 . Non-limiting examples of processor-determinable characteristics may include bandwidth, memory access time, latency, and/or memory cell density characteristics of the first memory device 123 and/or the second memory device 125 . For example, the processor 122 may reallocate computing resources between the first memory device 123 and the second memory device 125 such that higher bandwidth, faster memory access times, and A greater than a threshold amount of computing resources of/or lower latency may be used to perform subsequent authentication operations and/or further subsequent authentication operations.In response to receiving the application initiation indicator, the processor 122 may reallocate between the first memory device 123 and the second memory device 125 based at least in part on the determined characteristics of the first memory device 123 and the second memory device 125. computing resources. In some embodiments, an application launch indicator may be generated in response to detecting the occurrence of a vehicle event. However, the embodiments are not so limited, and in some embodiments, the application may be generated in response to a determination that the driving pattern of the operator of the vehicle has deviated from the predicted and/or learned driving pattern of an authorized operator of the vehicle Initiate indicator. The processor 122 may determine the characteristics of the first memory device 123 and the second memory device 125 before or during execution of the application program. 
In some embodiments, the determined characteristics of the first memory device 123 and the second memory device 125 may include bandwidth, memory access time, latency, and/or memory cell density of the first memory device 123 and the second memory device 125 and other features.In some embodiments, processor 122 may process information received from a GPS (e.g., GPS 547 shown in FIG. 5 herein), a base station (e.g., base station 543 shown in FIG. 5 herein), and/or further internal sensors 113 . In some embodiments, operations for processing received information captured by imaging device 121 may involve applications having specific workloads corresponding thereto. When a workload is written to the first memory device 123 or the second memory device 125, the processor 122 may determine characteristics of the workload. In some embodiments, the characteristics of a workload may include the frequency of access to data associated with the workload, the latency associated with execution of the workload, and/or the amount of processing resources consumed while executing the workload. at least one of the . In some embodiments, applications and/or workloads may involve processing data received and/or captured by imaging device 121 .The processor 122 may determine whether to write at least a portion of the data associated with the workload to the other of the first memory device 123 or the second memory device 125 and control the writing to the first memory device based on the characteristics of the workload. 123 or the other of the second memory device 125 distribution of execution of the workload such that the work is subsequently performed after at least a portion of the workload has been written to the other of the first memory device 123 or the second memory device 125 at least part of the load. In some embodiments, subsequently performed workloads may involve processing data received from GPS, base stations, and/or internal sensors 113 .As mentioned above, either the first memory device 123 or the second memory device 125 may be a non-persistent (eg, volatile) memory device, and the other of the first memory device 123 or the second memory device 125 may be Persistent (eg, non-volatile) memory devices. Furthermore, as mentioned above, in some embodiments, either the first type of memory or the second type of memory, or both, include sets of memory cells that exhibit different memory characteristics. For example, a first memory device 123 may have a first media type 124 and a second memory device 125 may have a second media type 126 associated therewith.Continuing with the above non-limiting example, either the first memory device 123 or the second memory device 125 may be a NAND flash memory device comprising a set of single-level memory cells (SLC) and a set of multi-level memory cells (SLC) A memory cell (MLC), as shown herein in Figures 3 and 4. In such embodiments, processor 122 may write at least a portion of the data associated with the workload to the set of SLC memory cells or the set of MLC memory cells based at least in part on receiving the application initiation indicator. In some embodiments, the set of SLCs may be configured to store a look-up table to facilitate writing at least a portion of data to the other of the first memory device 123 or the second memory device 125 .As used herein, the term “look-up table” generally refers to a data structure containing index information that may correspond to a desired output format of data written to memory system 104 . 
For example, a lookup table may include prefetch information that may be used by memory system 104 to output various types of data processed by the memory system in a requested format. In some embodiments, the look-up table may be included in a flash memory device such as NAND memory device 333 , eg, in SLC portion 335 of NAND memory device 333 . A lookup table may store data corresponding to artificial intelligence and/or machine learning applications. In such embodiments, it may be beneficial to store the look-up table in the SLC portion of the memory device, since SLC memory generally provides high access speed and accurate storage. In some embodiments, such artificial intelligence and/or machine learning applications may be performed in conjunction with the performance of the operations described herein.The embodiment of FIG. 1 may include additional circuitry that is not illustrated to avoid obscuring embodiments of the present disclosure. For example, memory system 104 may include address circuitry that latches address signals provided on I/O connections by I/O circuitry. Address signals may be received and decoded by row and column decoders to access memory system 104 and/or memory devices 123,125. Those skilled in the art will appreciate that the number of address input connections may depend on the density and architecture of the memory system 104 and/or memory devices 123 , 125 .2 is another functional block diagram in the form of a computing system 200 including a device including a host 202 and a memory system 204 in accordance with several embodiments of the present disclosure. In some embodiments, computing system 200 may reside on an autonomous vehicle, such as autonomous vehicle 541 shown in FIG. 5 herein. The memory system 204 may include a number of different memory devices 223, 225, 227, which may include one or more different media types 223, 225, 227. The different memory devices 223, 225, and/or 227 may include one or more memory modules (eg, single inline memory modules, dual inline memory modules, etc.). The host 202, memory system 204, controller 220, processor 222, memory devices 223, 225, and/or media types 224, 226 may be similar to the host 102, memory system 104, controller shown herein in FIG. 120 , processor 122 , memory means 123 , 125 and/or media type 124 , 126 .In some embodiments, each of memory devices 223, 225, and 227 may be a different type of memory device. Accordingly, in some embodiments, each of memory devices 223, 225, and 227 may contain different media types 224, 226, and 228. In a non-limiting example, memory device 223 may be a volatile memory device, such as a DRAM device, and may include a media type 224 corresponding to a DRAM memory device (eg, an array of memory cells including at least one capacitor and at least one transistor). . Continuing with the example, memory device 225 may be a flash memory device, such as a NAND memory device, and may include a media type 226 corresponding to a NAND memory device (eg, including a floating gate MOSFET array). In this non-limiting example, memory device 227 may be an emerging memory device (eg, emerging memory device 439 shown in FIG. 
4 herein), such as the emerging memory device described above, and may include a device corresponding to Media for emerging memory devices (e.g., arrays of resistance variable memory cells configured to perform bit storage based on changes in bulk resistance associated with the resistance variable memory cells, arrays of self-selecting memory cells, arrays of memory cells operating according to the CXL protocol, etc.) Type 228.Memory devices 223 , 225 , and 227 may be configured to read, write, and/or store data, such as GPS data, base station data, and/or internal sensor data corresponding to one or more workloads performed by computing system 200 . An application corresponding to a workload (e.g., corresponding to the execution of an authentication operation) may be executed, for example, by processor 222, and in response to receiving an application initiation indicator, executes to cause data to be written to memory devices 223, 225, and 227. for use in the execution of applications and/or workloads. In some embodiments, the controller 220 may pre-allocate resources among the memory devices 223, 225, and/or 227 in response to receiving the application initiation indicator but prior to performance of the authentication operation. In such embodiments, the controller 220 may ensure that sufficient computing resources and/or memory resources are available for the memory devices 223, 225, and/or 227 that will be used to perform authentication operations. That is, in some embodiments, the controller 220 may control writing at least a portion of the data to a memory device that has been used based on the nature of the workload prior to receiving GPS data, base station data, and/or internal sensor data. At least a portion of computing resources associated with the memory device is reallocated (eg, in response to receiving an application initiation indicator due to the vehicle experiencing a vehicle event).However, embodiments are not so limited, and in some embodiments, controller 220 may control writing at least a portion of the data to a different memory device than the memory device in which the data was originally written based on characteristics of the workload. For example, if data corresponding to a particular workload (e.g., for authentication operations of an autonomous vehicle) is stored in memory device 223, controller 220 and/or processor 222 may use a different memory device in response to the workload. The determination is performed more efficiently (eg, optimized) such that at least a portion of data corresponding to a particular workload is written to memory device 225 and/or memory device 227 .In some embodiments, the controller 220 may control data transfer by copying and/or writing data from one of the memory devices 223, 225, and/or 227 to a different one of the memory devices 223, 225, and/or 227. Movement between memory devices 223, 225, and/or 227 to ensure that sufficient computing resources and/or memory resources are available for the memory devices 223, 225, and/or 227 that will be used to perform authentication operations. For example, the controller 220 may determine that cold data and/or data corresponding to lower priority applications are running using memory devices having characteristics that may be desirable for performing authentication operations. 
In such scenarios, the controller 220 may cause cold data and/or data corresponding to lower priority applications to different memory devices to ensure that there are sufficient memory resources available for applications having characteristics that may be desirable for performing authentication operations. The memory device receives data corresponding to authentication and performs authentication operations. In some embodiments, controller 220 may perform these operations in response to receiving the application initiation indicator.In a non-limiting example, a system (e.g., computing system 200 and/or autonomous vehicle 501 shown in FIG. device 223 , a second memory device 225 including media 226 of a second type, and a third memory device 227 including media 228 of a third type. In some embodiments, the first memory device 223 may be a dynamic random access memory device, the second memory device 225 may be a NAND flash memory device, and the third memory device 227 may be an emerging memory device, for example as described above CXL compliant memory devices, 3D XP memory devices, self-selected cell memory devices, etc.In at least one embodiment, media type 224 includes an array of memory cells including at least one capacitor and at least one transistor, media type 226 includes a floating gate MOSFET array, and media type 228 includes An array of resistance variable memory cells configured to perform bit storage based on a change in bulk resistance associated with the resistance variable memory cells.The processor 222 may respond to the generation of the application initiation indicator based at least in part on characteristics of the first memory device 223, the second memory device 225, and the third memory device 227 in the first memory device 223, the second memory device 225, or Reallocating computing resources among the third memory device 227, or any combination thereof, the application initiation indicator may be generated in response to determining that the ego vehicle has experienced a vehicle event. As described herein, the processor 222 may determine the characteristics of the first memory device 223, the second memory device 225, and the third memory device 227 to authenticate the operator of the autonomous vehicle before or during execution of the application program, while the first memory device 223. The determined characteristics of the second memory device 225 and the third memory device 227 may include bandwidth, memory access time, latency, memory cell density of the first memory device 223, the second memory device 225, and the third memory device 227 or any combination thereof. In some embodiments, processor 222 may reallocate computing resources such that greater than a threshold amount of computing resources are available for memory devices exhibiting characteristics consistent with processing and/or performing operations described herein.The processor 222 may write at least a portion of the data to be used for execution of the authentication operation to the first memory device 223 , the second memory device 225 and/or the third memory device 227 in response to the generation of the application initiation indicator. 
In some embodiments, when one or more images captured by the imaging device are written to the first memory device 223, the second memory device 225, or the third memory device 227, or any combination thereof, the processor 222 may execute a corresponding An application for detecting anomalies in at least a portion of an organism.In embodiments in which the memory system 204 is resident on the autonomous vehicle, the processor 222 may execute one or more sets of machine learning instructions to The characteristics of the first memory device 223 , the second memory device 225 , and the third memory device 227 are determined based on the monitored benchmark data associated with the devices 227 . As used herein, the term "benchmark data" generally refers to data that can be used to test characteristics of the memory device 204, such as read/write speed, throughput, bandwidth, accuracy, and/or data retention, and to indicate that the memory device 204 Other test data of the overall performance. In such embodiments, the processor 222 may, based at least in part on the determined characteristics of the first memory device 223 , the second memory device 225 , and the third memory device 227 , select between the first memory device 223 , the second memory device 225 , or Computing resources are reallocated among the third memory device 227 or any combination thereof.In some embodiments, processor 222 may determine the workload performed by monitoring at least one of A characteristic of: the frequency of access to data associated with a workload, the latency associated with execution of the workload, and/or the amount of processing resources consumed while executing the workload, and based at least in part on Writing at least a portion of the data associated with the workload to other memory based on the determined access frequency of the data, the latency associated with execution of the workload, and/or the amount of processing resources consumed while executing the workload At least one of device 223 , memory device 225 , or memory device 227 .In some embodiments, at least a portion of the data written to memory device 223 , memory device 225 , or memory device 227 to be used for performance of authentication operations is formatted according to a common number format or an assumed number format. In contrast to the IEEE 754 floating-point or fixed-point binary format, which includes a subset of sign bits, mantissa bits, and exponent bits, the general number format for, for example, hypothetical numbers includes a sign bit subset, a subset of status bits, a subset of mantissa bits, and a subset of exponent bits. This may allow the accuracy, precision, and/or dynamic range of assumed numbers to be greater than that of floating point numbers or other numeric formats. Additionally, assumptions may reduce or eliminate overflows, underflows, NaNs, and/or other corner cases associated with floating point and other number formats. Furthermore, using hypothetical numbers may allow fewer bits to be used to represent values (eg, numbers) than floating point numbers or other number formats.As used herein, "precision" refers to the amount of bits in a bit string used to perform calculations using the bit string. For example, a bit string may be said to have 16-bit precision if each bit in the 16-bit bit string is used when performing a calculation using the bit string. 
However, a bit string may be said to have 8-bit precision if only 8 bits of the bit string are used when performing calculations using the bit string (eg, if the first 8 bits of the bit string are zeros). As the precision of the bit string increases, calculations can be performed with higher accuracy. Conversely, as the precision of the bit string decreases, calculations can be performed with less precision. For example, an 8-bit string can correspond to a data range consisting of two hundred and fifty-five (256) precision steps, while a 16-bit string can correspond to a range consisting of sixty-three thousand five hundred and thirty-six (63,536) The accuracy step size corresponds to the data range composed.As used herein, "dynamic range" or "dynamic range of data" refers to the ratio between the maximum and minimum values available for a string of bits with a particular precision associated therewith. For example, the largest numerical value that can be represented by a bit string with a particular precision associated therewith can determine the dynamic range of the data format of the bit string. For general number (eg, hypothetical number) format bit strings, the dynamic range may be determined by the value of a subset of the exponent bits of the bit string.Dynamic range and/or precision may have a variable range threshold associated therewith. For example, the dynamic range of data may correspond to applications using the data and/or various computations using the data. This may be due to the fact that the dynamic range expected by one application may be different than that expected by another application, and/or because some calculations may require different data dynamic ranges. Accordingly, embodiments herein may allow changing the dynamic range of data to suit the requirements of different applications and/or computations. In contrast to methods that do not allow manipulation of the dynamic range of data to suit the requirements of different applications and/or computations, embodiments herein may improve resource usage by allowing the dynamic range of data to be changed based on the application and/or computation in which the data is to be used and/or data accuracy.FIG. 3 is a functional block diagram in the form of an apparatus including a memory system 304 in accordance with several embodiments of the present disclosure. FIG. 3 shows memory system 304 , which may be similar to memory system 104 shown in FIG. 1 and/or memory system 204 shown in FIG. 2 herein. As shown in FIG. 3 , memory system 304 includes controller 320 (which may be similar to controller 120 illustrated herein in FIG. 1 and/or controller 220 illustrated in FIG. 2 ), DRAM memory device 331 (which may be may be similar to one of memory devices 123, 125 illustrated herein in FIG. 1 and/or one of memory devices 223, 225, 227 illustrated in FIG. 2), and NAND memory device 333 (which may be similar to One of the memory devices 123, 125 illustrated herein in FIG. 1 and/or one of the memory devices 223, 225, 227 illustrated in FIG. 2).As shown in FIG. 3, a NAND memory device 333 may include various portions of memory cells, which may include a set of single-level memory cells (SLC) 335 and a set of multi-level memory cells (MLC), such as a set of three-level memory cells (SLC). Flat memory cell (TLC) 337, quad level cell (QLC), etc. 
In some embodiments, the controller may cause an application executing on memory system 304 to detect abnormalities in blood corresponding to At least part of the data of an image or sequence of images (eg images of blood cells within a blood vessel) is written to the SLC section 335 and/or the TLC section 337 .In some embodiments, as part of optimizing the performance of memory system 304 during execution of applications and corresponding workloads, data classified as hot data may be written to SLC portion 335 while data classified as cold data may be written to SLC portion 335. into TLC section 337, or vice versa. By selectively writing portions of the data involved in executing an application program to different memory portions of the NAND memory device 333 (e.g., to the SLC portion 335 and/or the TLC portion 337), the performance of the computing system (especially in During execution of an application to detect abnormalities in blood) described herein may be improved over some methods. However, embodiments are not so limited, and in some embodiments, hot data can be written to the DRAM memory device, cooler data can be written to the NAND memory device 333 , and cold data can be written to the emerging memory device 339 .For example, by selectively writing data portions to DRAM memory device 331 and/or SLC portion 335 that correspond to workloads that benefit from faster execution, such as the authentication operations described herein, while writing portions of data corresponding to workloads that cannot be accessed from Executed data portions of applications and workloads that benefit from fast execution are written to TLC portion 337 and/or emerging memory devices (e.g., emerging memory device 439 shown in FIG. 4 ), such as resulting from execution of the Those workloads of certified operations may be assigned to memory devices within memory system 304 that allow the workload to perform optimally within memory system 304 .In some embodiments, at least a portion of the SLC portion 335 of the NAND memory device 333 may be allocated for storing look-up tables. The lookup table may be a data structure containing index information that may correspond to a desired output format of data written to or from the memory system 304 . For example, a lookup table may include prefetch information that may be used by memory system 304 to output various types of data processed by memory system 304 in a requested format. In some embodiments, a lookup table may facilitate writing at least a portion of data involved in a workload to one of the memory devices described herein.FIG. 4 is another functional block diagram in the form of an apparatus including a memory system 404 in accordance with several embodiments of the present disclosure. FIG. 4 shows memory system 404 , which may be similar to memory system 104 shown in FIG. 1 , memory system 204 shown in FIG. 2 , and/or memory system 304 shown in FIG. 3 herein.As shown in FIG. 4, the memory system 404 includes a controller 420 (which may be similar to the controller 120 illustrated herein in FIG. 1, the controller 220 illustrated in FIG. 2, and/or the control illustrated in FIG. device 320), a DRAM memory device 431 (which may be similar to one of the memory devices 123, 125 illustrated in FIG. 1 herein, one of the memory devices 223, 225, 227 illustrated in FIG. 2, and/or in One of the DRAM memory devices 331 illustrated in FIG. 3 ), a NAND memory device 433 (which may be similar to one of the memory devices 123, 125 illustrated herein in FIG. 
225, 227, and/or the NAND memory device 333 illustrated in FIG. 3), and emerging memory device 439 (which may be similar to one of the memory devices 123, 125 and and/or one of the memory devices 223, 225, 227 illustrated in FIG. 2).DRAM memory device 431 may include an array of memory cells including at least one transistor and one capacitor configured to store charge corresponding to a single data bit. NAND memory device 433 may include various portions of memory cells, which may include a set of single-level memory cells (SLC) 435 and a set of multi-level memory cells (MLC), such as a set of three-level memory cells (TLC) 437 , may be similar to the SLC section 335 and the TLC section 337 shown and described herein in connection with FIG. 3 , respectively.Emerging memory device 439 may be an emerging memory device as described above. Emerging memory device 439 may be, for example, a resistance variable (e.g., 3D cross point (3D XP)) memory device, a memory device including an array of self-selectable memory (SSM) cells, a memory device operating according to the CXL protocol, etc., or any combination.FIG. 5 is a diagram illustrating an autonomous vehicle 541 including an electronic control unit (ECU) 501 , according to several embodiments of the present disclosure. As shown in FIG. 5 , ego vehicle 541 communicates with base station 543 via communication path 545 and with satellite (eg, global positioning satellite) 547 via communication path 549 . ECU 501 may be similar to ECU 101 illustrated herein in FIG. 1 . Although not explicitly shown in FIG. 5 so as not to obscure the drawings, the ECU 501 may include a memory system, such as the memory systems 104, 204, 304, 404 illustrated herein in FIGS. Host 102 is illustrated.In a non-limiting example, autonomous vehicle 541 may include a first memory device including a first type of media, a second memory device including a second type of media, and a memory device coupled to the first memory device and the second memory device. processor. The processor may be similar to the processor 122, 222 illustrated herein in FIGS. 1-2. Furthermore, the first memory device can be similar to the memory device 123, 223 illustrated in Figures 1 and 2 herein, and the first type of media can be similar to the media type 124, 224 illustrated in Figures 1-2 herein. Similarly, the second memory device may be similar to the memory device 125, 225 illustrated in Figures 1 and 2 herein, and the second type of media may be similar to the media type 126, 226 illustrated in Figures 1-2 herein. Embodiments are not so limited, however, and the first memory device or the second memory device may be similar to the various types of memory devices discussed herein in connection with FIGS. 3-4 .The processor is executable with instructions to determine a driving mode associated with an authorized operator of the autonomous vehicle. In some embodiments, a processor executes one or more sets of machine learning instructions to determine a driving pattern associated with an authorized operator of an autonomous vehicle over time.The processor may perform an authentication operation using the determined driving pattern and information received from global positioning satellites, base stations, and at least one internal sensor associated with the ego vehicle to determine whether the current operator of the vehicle is an authorized operator of the vehicle. In some embodiments, the internal sensor may be similar to internal sensor 113 illustrated herein in FIG. 
1 .A processor can determine whether the ego vehicle has experienced a vehicle event. As described herein, a vehicle event may include at least one of the vehicle being stationary for greater than a threshold period of time, starting, or detecting a change in captured internal vehicle sensor data, or any combination thereof.The processor may reallocate computing resources between the first memory device and the second memory device in response to a determination that the autonomous vehicle has experienced a vehicle event. For example, in some embodiments, the processor may, in response to a determination that the ego vehicle has experienced a vehicle event, transfer information stored in a first memory device to a second memory device to increase the number associated with the first memory device. The amount of memory resources available. However, embodiments are not so limited, and in some embodiments, the processor may respond to determining that the first memory device exhibits at least one of higher bandwidth or faster memory access time than the second memory device or both Both or the second memory device exhibits at least one or both of higher bandwidth or faster memory access time than the first memory device, while reallocating computation between the first memory device and the second memory device resource.In some embodiments, the processor may reallocate computing resources between the first memory device and the second memory device based at least in part on the determined environmental characteristics in which the vehicle is operating.The processor may use the reallocated computing resources to perform subsequent authentication operations using the determined driving pattern and information received from the global positioning satellites and base stations to determine whether the current operator of the autonomous vehicle is an authorized operator of the autonomous vehicle.As described above, in some embodiments, the processor may transmit a notification to an authorized operator of the autonomous vehicle, deactivate the autonomous vehicle, or otherwise respond to a determination that the current operator of the autonomous vehicle is not an authorized operator of the vehicle. both. In the alternative, in some embodiments, the processor may allow the autonomous vehicle to continue operating in response to a determination that the current operator of the autonomous vehicle is an authorized operator of the autonomous vehicle.FIG. 6 is a flowchart representing an example method corresponding to a vehicle operator authentication operation in accordance with several embodiments of the present disclosure. Method 650 may be performed by processing logic that may comprise hardware (e.g., a processor, a processing device, control circuitry, dedicated logic, programmable logic, microcode, hardware and/or integrated circuits of a device, etc.), software ( For example, instructions to run or execute on a processor) or a combination thereof. Although shown in a particular order or sequence, the order of the processes may be modified unless otherwise specified. Therefore, it should be understood that the illustrated embodiments are examples only, and that illustrated processes may be performed in a different order, and that some processes may be performed in parallel. Additionally, one or more procedures may be omitted in various embodiments. Therefore, not all procedures are required in every embodiment. 
Other process flows are also possible.At block 651 , method 650 may include executing one or more machine learning instructions to determine a driving mode associated with an authorized operator of the vehicle. For example, as described above, one or more machine learning instructions may be executed to predict and/or learn the driving behavior of an authorized operator of a vehicle over time. This may allow a departure from the learned driving pattern to be detected and, in some embodiments, may cause an application initiation indicator to be generated to cause an authentication operation to be performed to determine whether the operator of the vehicle is an authorized operator of the vehicle.At block 653, the method 650 may include performing a "routine" authentication operation using the determined driving pattern and information received from global positioning satellites, base stations, and internal vehicle sensors to determine whether the current operator of the vehicle is an authorized operator of the vehicle . As used herein, a "routine authentication operation" generally refers to an authentication operation that is performed upon start-up of the vehicle and again at regularly scheduled intervals during operation of the autonomous vehicle. A "routine authentication operation" may be performed independently of authentication operations that do not occur periodically or at scheduled intervals (eg, subsequent authentication operations described in connection with block 659). For example, data from GPS and base stations can be used as two-way authentication to verify the driving patterns of the operator of the vehicle to determine if they are consistent with those of an authorized operator, while internal vehicle sensors can be used to The biometric data detects the identity of the operator and determines if he is an authorized operator of the vehicle.At block 655 , method 650 may include determining whether the vehicle has experienced a vehicle event. As described above, the vehicle event may include at least one of the vehicle being stationary for greater than a threshold period of time, starting, and/or detecting a change in captured internal vehicle sensor data.At block 657 , the method 650 may include reallocating computing resources among memory devices associated with the vehicle in response to the determination that the vehicle has experienced the vehicle event. In some embodiments, method 650 may include reallocating computing resources among memory devices associated with the vehicle such that higher bandwidth, faster memory access times, and/or faster memory access times are exhibited than other computing resources available to the vehicle. A greater than threshold amount of computing resources of lower latency may be used to perform subsequent authentication operations.At block 659, the method 650 may include using the reallocated computing resources and using the determined driving pattern, information received from global positioning satellites, information received from a base station, and internal vehicle sensors in response to determining that the vehicle has experienced a subsequent vehicle event. Subsequent authentication operations are performed to determine whether the current operator of the vehicle is an authorized operator of the vehicle. 
In some embodiments, method 650 may include performing subsequent authentication operations for each subsequent determined vehicle event while the vehicle is operating using reallocated computing resources.In some embodiments, method 650 may include determining an environmental characteristic in which the vehicle is operating and/or reallocating computing resources among memory devices based at least in part on the determined environmental characteristic. As described above, non-limiting examples of environmental characteristics may include weather conditions, traffic conditions, the presence (or absence) of road construction (e.g., the presence or absence of traffic barriers, road construction signs, detours, etc.), and/or traffic signs The presence (or absence) of traffic signs (for example, the vehicle is operating on a highway with relatively few traffic signs rather than in a city with relatively more traffic signs), and other conditions that the vehicle may encounter during operation.In some embodiments, method 650 may include performing traffic sequence prediction operations and/or reallocating computing resources among memory devices based at least in part on traffic sequence prediction operations. As used herein, the term "traffic sequence prediction operation" generally refers to the amount of an operation performed to estimate, determine, or otherwise predict objects (known or unknown) that an autonomous vehicle will encounter in the future. Traffic sequence prediction operations may include executing deep learning algorithms, and/or receiving information from other autonomous vehicles on the road and/or from base stations in communication with the autonomous vehicle, among others. A traffic sequence prediction operation may be performed to determine the likelihood that an ego vehicle will experience a vehicle event within a given threshold time period. For example, a traffic sequence may be performed to determine whether the road is clear (e.g., minimal traffic, traffic signals, and/or miles or kilometers of roadwork on the road), bad (e.g., there is high traffic ahead) traffic, numerous traffic signals and/or miles and/or kilometers of roadwork on or near the roadway), or something in between.As described above, the memory devices may include a first memory device or a second memory device, which may be a non-persistent memory device, and the other of the first memory device or the second memory device may be a persistent memory device. Additionally, as described herein, the processor, first memory device, and second memory device may reside on an autonomous vehicle (eg, autonomous vehicle 541 shown in FIG. 5 herein). In such embodiments, method 650 may include carrying out, executing, determining, reassigning, and executing by a processor in the absence of control signals generated external to the autonomous vehicle.In some embodiments, method 650 may include reallocating computing resources between the first memory device and the second memory device in response to receiving the indication corresponding to the initiation of the application. For example, in embodiments where the indication corresponding to the launch of an application is an application launch indicator, computing resources may be reallocated among the memory devices to ensure that a particular characteristic is exhibited (e.g., the fastest memory among the memory devices access time, maximum bandwidth among memory devices, etc.) 
may be used to perform the authentication operations described herein.At block 655, the method 650 may include determining, by the processor, for the first memory device and the second memory device, characteristics of a workload corresponding to execution of an application to process data captured by the imaging device. The characteristics of a workload may include the amount of computing resources consumed in executing the workload, the amount of processing time involved in executing the workload, or the amount of power consumed in executing the workload, among others.Although specific embodiments have been shown and described herein, those of ordinary skill in the art will appreciate that arrangements calculated to achieve the same results may be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the disclosure. It should be understood that the foregoing description has been made by way of illustration and not limitation. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. The scope of one or more embodiments of the present disclosure should, therefore, be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the disclosure have to use more features than are expressly recited in each claim. Rather, inventive subject matter lies in less than all features of a single disclosed embodiment, as the following claims reflect. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Methods and apparatus to perform string matching for network packet inspection are disclosed. In some embodiments there is a set of string matching slice circuits, each slice circuit of the set being configured to perform string matching steps in parallel with other slice circuits. Each slice circuit may include an input window storing some number of bytes of data from an input data steam. The input window of data may be padded if necessary, and then multiplied by a polynomial modulo an irreducible Galois-field polynomial to generate a hash index. A storage location of a memory corresponding to the hash index may be accessed to generate a slice-hit signal of a set of H slice-hit signals. The slice-hit signal may be provided to an AND-OR logic array where the set of H slice-hit signals is logically combined into a match result. |
CLAIMS What is claimed is: 1. A method to perform string matching for network packet inspection, the method comprising: configuring a set of H slice circuits, each 1th slice circuit of the set of H slice circuits being configured to perform the steps of: storing an iΛ input window OfW1 bytes of data from an input data steam; padding the W1 bytes of data if necessary, and multiplying the W1 bytes of data by a polynomial modulo an irreducible Galois-field polynomial to generate an iΛ hash index; and accessing a storage location of a memory corresponding to the 1th hash index to generate an iΛ slice-hit signal of a set of H slice-hit signals; and providing the 1th slice-hit signal to an AND-OR logic array as one of the set of H slice-hit signals. 2. The method of Claim 1 wherein configuring each 1th slice circuit of the set of H slice circuits to perform the step of providing the 1th slice-hit signal to the AND-OR logic array comprises: storing the iΛ slice-hit signal in the storage location of the memory corresponding to the 1th hash index. 3. The method of Claim 2 wherein each 1th input window of W1 bytes of data from the input data steam comprises a complete data pattern. 4. The method of Claim 2 wherein providing the ith slice-hit signal to the AND-OR logic array comprises: reading out the i •th slice-hit signal, from the storage location of the memory corr ■eessppoonnddiinngg ttoo tthhe ith hash index, to the AND-OR logic array as the ith one of the set of H slice-hit signals. 5. The method of Claim 2 wherein providing the ith slice-hit signal to the AND-OR logic array comprises: mutiplexing the 1th slice-hit signal from the storage location of the memory corresponding to the ith hash index, to the AND-OR logic array as the ith one of the set of H slice-hit signals. 6. The method of Claim 1 further comprising: configuring the AND-OR logic array to receive the set of H slice-hit signals and to combine the set of H slice-hit signals into a match result. 7. The method of Claim 6 wherein the AND-OR logic array is configured to receive the set of H slice-hit signals and to logically AND the set of H slice-hit signals into a match result. 8. The method of Claim 6 wherein the AND-OR logic array is configured to receive the set of H slice-hit signals and to logically OR the set of H slice-hit signals into a match result. 9. The method of Claim 6 wherein the AND-OR logic array is configured to receive the set of H slice-hit signals and to logically AND subsets of the set of H slice-hit signals into temporary results, and to logically OR the temporary results into a match result. 10. An apparatus comprising: an AND-OR logic array configurable to receive a set of H slice-hit signals and to combine the set of H slice-hit signals into a match result; and a set of H slice circuits, each 1th slice circuit of the set comprising: an input window configurable to store W1 bytes of data from an input data steam; a Ghash unit coupled with the input window and configurable to receive the W1 bytes of data, pad the W1 bytes of data if necessary, and multiply the W1 bytes of data by a polynomial modulo an irreducible Galois-field polynomial to generate an index; and a memory coupled with the Ghash unit and configurable to access a storage location responsive to the index to generate a slice-hit signal and to provide the slice-hit signal to said AND-OR logic array as one of the set of H slice-hit signals. 11. 
The apparatus of Claim 10 wherein providing the slice-hit signal to the AND-OR logic array comprises: reading out the slice-hit signal, from the storage location of the memory corresponding to the index of the 1th slice circuit, to the AND-OR logic array as the 1th one of the set of H slice-hit signals. 12. The apparatus of Claim 10 wherein providing the slice-hit signal to the AND-OR logic array comprises: multiplexing the slice-hit signal, from the storage location of the memory corresponding to the index of the 1th slice circuit, to the AND-OR logic array as the 1th one of the set of H slice-hit signals. 13. The apparatus of Claim 10 wherein the AND-OR logic array is configurable to receive the set of H slice-hit signals and to logically AND the set of H slice-hit signals into a match result. 14. The apparatus of Claim 10 wherein the AND-OR logic array is configurable to receive the set of H slice-hit signals and to logically OR the set of H slice-hit signals into a match result. 15. The apparatus of Claim 10 wherein the AND-OR logic array is configurable to receive the set of H slice-hit signals and to logically AND subsets of the set of H slice-hit signals into temporary results, and to logically OR the temporary results into a match result. 16. The apparatus of Claim 10 wherein the same irreducible Galois-field polynomial is used in each 1th slice circuit of the set of H slice circuits. 17. The apparatus of Claim 16 wherein each the W1 bytes of data are multiplied by a different distinct polynomial in each ith slice circuit of the set of H slice circuits. 18. A packet processing system to perform string matching for network packet inspection, the system comprising: a system processor; an AND-OR logic array configurable to receive a set of H slice-hit signals and to combine the set of H slice-hit signals into a match result; and a set of H slice circuits, each 1th slice circuit of the set comprising: an input window configurable to store W1 bytes of data from an input data steam; a Ghash unit coupled with the input window and configurable to receive the W1 bytes of data, pad the W1 bytes of data if necessary, and multiply the W1 bytes of data by a polynomial modulo an irreducible Galois-field polynomial to generate an index; and a memory coupled with the Ghash unit and configurable to access a storage location responsive to the index to generate a slice-hit signal and to provide the slice-hit signal to said AND-OR logic array as one of the set of H slice-hit signals; and a machine readable medium to store executable instructions, such that when said executable instructions are executed by the system processor, the system processor is caused to: set a pointer to a first character of the input data steam to establish a starting point for the input window of each ith slice circuit, and increment the pointer until the match result is positive or until an end-of-file is reached in the input data steam. 19. The system of Claim 18 wherein the same irreducible Galois-field polynomial is used in each ith slice circuit of the set of H slice circuits. 20. The system of Claim 19 wherein each the W1 bytes of data are multiplied by a different distinct polynomial in each ith slice circuit of the set of H slice circuits. 21. The system of Claim 18 wherein the AND-OR logic array is configurable to receive the set of H slice-hit signals and to logically AND the set of H slice-hit signals into a match result. 22. 
The system of Claim 18 wherein the AND-OR logic array is configurable to receive the set of H slice-hit signals and to logically OR the set of H slice-hit signals into a match result. 23. The system of Claim 18 wherein the AND-OR logic array is configurable to receive the set of H slice-hit signals and to logically AND subsets of the set of H slice-hit signals into temporary results, and to logically OR the temporary results into a match result. 24. The system of Claim 18 wherein providing the slice-hit signal to the AND-OR logic array comprises: reading out the slice-hit signal, from the storage location of the memory corresponding to the index of the 1th slice circuit, to the AND-OR logic array as the 1th one of the set of H slice-hit signals. 25. The system of Claim 18 wherein providing the slice-hit signal to the AND-OR logic array comprises: multiplexing the slice-hit signal, from the storage location of the memory corresponding to the index of the 1th slice circuit, to the AND-OR logic array as the 1th one of the set of H slice-hit signals. |
FILTER FOR NETWORK INTRUSION AND VIRUS DETECTION FIELD OF THE DISCLOSURE [0001] This disclosure relates generally to the field of network processing. In particular, the disclosure relates to a novel filter architecture to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection. BACKGROUND OF THE DISCLOSURE [0002] In modern networks, applications such as intrusion detection/prevention and virus detection are important for protecting the networks and/or network users from attacks. In such applications network packets are often inspected to identify problematic packets by finding matches to a known set of data patterns. Matching every byte of an incoming data stream against a large database of patterns (e.g. up to hundreds of thousands) is very compute-intensive. Programs have used techniques such as finite-state machines and filters to find matches to known sets. [0003] A Bloom filter, conceived by Burton H. Bloom in 1970, is a probabilistic structure for determining whether an element is a member of a set. Hashing is performed on the element. Multiple different hash functions are used to generate multiple different hash indices into an array of bits. To add or insert an element into the set, these hash functions are used to index multiple bit locations in the array for the element and these bit locations are then set to one. To query the filter for an arbitrary element the hash functions are used to index multiple bit locations in the array for the element and these bit locations are then checked to see if they are all set to one. If they are not all set to one, the arbitrary element in question is not a member of the set. [0004] Whenever a filter generates a positive outcome for an element, which is not actually a member of the set, the outcome is called a false positive. The Bloom filter will not generate a false negative. It is a goal of any particular filter design, that the probability of false positives is "small." For Bloom filters, after inserting n elements into a set represented by an array of m bits using k different hash functions, the probability of a false positive is (1 - (1 - \lm)kn f. [0005] Designing a filter for a specific problem may be tedious, and at high data rates it is difficult or impossible for state-of-the art processors to implement the design at rates even close to line-rate. To achieve rates close to one or more gigabits per second, specialized field-programmable gate array solutions or custom circuits have been proposed.[0006] To date, more generalized reconfigurable architectures to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection have not been fully explored. BRIEF DESCRIPTION OF THE DRAWINGS [0007] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings. [0008] Figure 1 illustrates one embodiment of a filter apparatus to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection. [0009] Figure 2 illustrates a flow diagram for one embodiment of a process to initialize a filter apparatus for string matching in packet inspection. [0010] Figure 3 illustrates a flow diagram for one embodiment of a process to utilize a filter apparatus for string matching in packet inspection. 
[0011] Figure 4 illustrates one embodiment of a system employing a filter apparatus to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection. DETAILED DESCRIPTION [0012] Methods and apparatus to perform string matching for network packet inspection are disclosed below. In some embodiments, a filter apparatus may be configured as a set of string matching slice circuits, each slice circuit of the set being configured to perform string matching steps in parallel with other slice circuits. Each slice circuit may include an input window storing some number of bytes of data from an input data steam. The input window of data may be padded if necessary, and may be multiplied by a distinct Galois-field polynomial modulo an irreducible Galois-field polynomial to generate a hash index. A storage location of a memory slice corresponding to the hash index may be accessed to generate a slice-hit signal of a plurality of slice-hit signals. The slice-hit signal may be provided to an AND-OR logic array where the plurality of slice-hit signals is logically combined into a match result. [0013] Embodiments of such methods and apparatus represent reconfigurable architectures to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection. [0014] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specificdetails. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. These and other embodiments of the present invention may be realized in accordance with the following teachings and it should be evident that various modifications and changes may be made in the following teachings without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense and the invention measured only in terms of the claims and their equivalents. [0015] Figure 1 illustrates one embodiment of a filter apparatus 101 to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection. Filter apparatus 101 as shown includes an input data stream 120, which may be in a system memory or may comprise an optional data stream buffer of filter apparatus 101 for storing packed data for inspection and/or a pattern database to initialize filter apparatus 101. Filter apparatus 101 also includes a set of H (e.g. 1-8) slice circuits 110- 150, each /* slice circuit of the set is configurable for providing an z- th slice-hit signal to a configurable AND-OR logic array 140 as one of a set of H slice-hit signals. Slice circuits 110-150, respectively include input windows 111-151 each configurable to store W1 (e.g. 2-8) bytes of data from input data steam 120, and Ghash units 112-152 coupled with input windows 111-151 and configurable to receive the W1 bytes of data, to pad the W1 bytes of data if necessary, and to multiply their respective W1 bytes of data by a polynomial modulo an irreducible Galois-field polynomial to generate an index. 
[0016] It will be appreciated that some embodiments of filter apparatus 101 may use the same irreducible Galois-field polynomial in each of the Ghash units 112-152 with H distinct polynomial multipliers selected at random (each having a good mixture of 1 's and O's) to generate H distinct hash indices, thus simplifying the task of generating distinct hash indices for each Ghash unit. It will also be appreciated that in embodiments of filter apparatus 101 where, unlike the Bloom filter, input windows 111-151 are independently configurable to store W1 bytes of data from input data steam 120, the filter apparatus 101 may be used to solve multiple problems of different sizes (e.g. a 2-byte match, a 3-byte match, a 6-byte match, and an 8-byte match, etc.) at the same time in parallel. [0017] Slice circuits 110-150, respectively, also include memories 113-153 coupled with the Ghash units 112-152 and configurable to access respective storage locations responsive to their respective indices (e.g. at the addresses specified by some field of bits from respective indices) to each generate an ith slice-hit signal and to provide the an ith slice-hit signal to AND-OR logic array 140 as one of the set of H slice-hit signals 115-155. Someembodiments of memories 113-153 are configurable from a larger memory 130 to serve as individual memories 113-153 for slice circuits 110-150 respectively. Some alternative embodiments of memories 113-153 may be TV-entry (e.g. IK entries) read/write random- access memories (RAMs) of fixed width (e.g. 64-bits wide) and are configurable to be combined into larger memories (e.g. memory 130) as necessary (e.g. when a very large set of patterns is required). Slice circuits 110-150 may also include multiplexers 114-154, respectively, configurable to access respective bit storage locations responsive to portions of their respective indices to generate the i Λ slice-hit signal and to provide the 1th slice-hit signal to AND-OR logic array 140 as one of the set of H slice-hit signals 115-155. [0018] AND-OR logic array 140 is configurable to receive a set of H slice-hit signals 115-155 and to combine the set of H slice-hit signals 115-155 into a match result 145, a copy of which may be stored as a match result 185. Some embodiments of AND-OR logic array 140 may be configurable to perform a simple AND (e.g. as in a Bloom filter) or a simple OR (e.g. as in solving multiple problems of different sizes in parallel) of the set of H slice-hit signals 115-155 to get a match result 145. Alternative embodiments of AND-OR logic array 140 may be configurable to perform a complex AND-OR of the set of H slice-hit signals 115- 155 (e.g. temp^ = (AND slice-hit signal for all i in a set Sk) and then the final match result = (OR tempt for all k) ) to get a match result 145. The complex AND-OR of the set of H slice- hit signals 115-155 may be used, for example, in embodiments of filter apparatus 101 to provide multiple Bloom filters in parallel. [0019] It will be appreciated that when a final match result is positive, a verification process may be used to check against false positives. Such verification process may be relatively slower than using filter apparatus 101 and so the configuration of filter apparatus 101 should be carefully made to avoid frequent false positives. [0020] Figure 2 illustrates a flow diagram for one embodiment of a process 201 to initialize a filter apparatus for string matching in packet inspection. 
Process 201 and other processes herein disclosed are performed by processing blocks that may comprise dedicated hardware or software or firmware operation codes executable by general purpose machines or by special purpose machines or by a combination of both. [0021] In processing block 211 a set of H slice circuits are configured. In processing block 212, i is set to zero (0). In processing block 213, i is incremented. In processing block 214, z is checked to see if it has exceeded H. It will be appreciated that even though initialization of the H slice circuits is shown as an iterative process 201, in at least some preferred embodiment of process 201, the set of H slice circuits are configured toconcurrently perform initialization according to processing blocks 215-220 of process 201 for use in string matching during network packet inspections. Therefore, for each of the H slice circuits processing blocks 215-220 are executed as follows, before proceeding to processing block 222. [0022] In processing block 215 W1 bytes of data is stored from an input data steam in an / Λ input window. In processing block 216 the W1 bytes of data are padded if necessary. Then in processing block 217 the W1 bytes of data are multiplied by a Galois-field polynomial modulo an irreducible Galois-field polynomial to generate an ith hash index. In processing block 218 a storage location of a memory corresponding to the i th hash index is accessed, and in processing block 220 an /* slice-hit signal is stored (i.e. set) in the storage location of the memory corresponding to the / Λ hash index. When all of the H slice circuits have completed processing blocks 215-220 of process 201, processing proceeding to processing block 222 where a pointer in the input data stream is moved (e.g. to a new string in the database). Then from processing block 224, if the data stream is empty processing terminates. Otherwise processing repeats in processing block 212. [0023] It will be appreciated that the process 201 may be iterated for hundreds to hundreds of thousands of times in order to initialize a filter apparatus for string matching patterns in packet inspection. Thus when the set of H slice circuits are configured to concurrently perform initialization substantial performance improvements may be realized. It will also be appreciated that the process 201 of initializing a filter apparatus (by setting slice- hit signals) may be performed in a manner substantially similar to a process of utilizing a filter apparatus for string matching (by reading the slice-hit signals) in packet inspection. In some embodiments of processing block 222 a pointer into the input data stream may moved for each z Λ slice, in such a way as to provide each z Λ slice with a new compete pattern, whereas in utilizing a filter apparatus for string matching a pointer into the input data stream may be simply incremented. [0024] Figure 3 illustrates a flow diagram for one embodiment of a process 301 to utilize a filter apparatus for string matching in packet inspection. In processing block 311 a set of H slice circuits are configured. In processing block 312, / is set to zero (0). In processing block 313, / is incremented. In processing block 314, / is checked to see if it has exceeded H. 
Again, it will be appreciated that even though utilization of the H slice circuits is shown as an iterative process 301, in at least some preferred embodiment of process 301, the set of H slice circuits are configured to concurrently perform string matching according to processing blocks 315-321 of process 301 for use during network packet inspections. Therefore, foreach of the H slice circuits processing blocks 315-321 are executed as follows, before proceeding to processing block 323. [0025] In processing block 315 W1 bytes of data is stored from an input data steam in an i th input window. In processing block 316 the W1 bytes of data are padded if necessary. Then in processing block 317 the W1 bytes of data are multiplied by a Galois-field polynomial modulo an irreducible Galois-field polynomial to generate an z- th hash index. In processing block 319 a storage location of a memory corresponding to the i Λ hash index is accessed to generate an ith slice-hit signal of a set of H slice-hit signals. In processing block 321 the i th slice-hit signal is provided to an AND-OR logic array as one of the set of H slice-hit signals. When all of the H slice circuits have completed processing blocks 315-321 of process 301, processing proceeding to processing block 323 where the AND-OR logic array is configured to receive the set of H slice-hit signals and to combine the set of H slice-hit signals into a match result. Then from processing block 323 processing terminates. [0026] It will be appreciated that iterations of process 301 may be configured in accordance with embodiments of filter apparatus 101 to substantially accelerate string matching in packet inspection. [0027] Figure 4 illustrates one embodiment of a system 401 employing a filter 480 to accelerate string matching in packet inspection for network applications such as intrusion detection/prevention and virus detection. [0028] System 401 includes an input data stream 420, which may be in system memory 470 as shown, or may comprise an optional data stream buffer of filter 480 for storing packed data for inspection and/or a pattern database to initialize filter 480. [0029] Filter 480 includes a set of H slice circuits 410-450, each ith slice circuit of the set is configurable for providing an i Λ slice-hit signal to a configurable AND-OR logic array 440 as one of a set of H slice-hit signals. Slice circuits 410-450, respectively include input windows 411-451 each configurable to store W1 bytes of data from input data steam 420, and Ghash units 412-452 coupled with input windows 411-451 and configurable to receive the W1 bytes of data, to pad the W1 bytes of data if necessary, and to multiply their respective W1 bytes of data by a polynomial modulo an irreducible Galois-field polynomial to generate an index. [0030] Slice circuits 410-450, respectively, also include memories 413-453 coupled with the Ghash units 412-452 and configurable to access respective storage locations responsive to their respective indices to each generate an ith slice-hit signal and to provide the an ith slice- hit signal to AND-OR logic array 440 as one of the set of H slice-hit signals 415-455.Memories 413-453 may be JV-entry read/write RAMs of any fixed width and configurable to be combined into larger memories (e.g. memory 430) as necessary. Alternatively some embodiments of memories 413-453 may be configurable from a larger memory 430. 
Slice circuits 410-450 may also include multiplexers 414-454, respectively, configurable to access respective bit storage locations responsive to portions of their respective indices to generate the i th slice-hit signal and to provide the iΛ slice-hit signal to AND-OR logic array 440 as one of the set of H slice-hit signals 415-455. AND-OR logic array 440 may receive the set of H slice-hit signals 415-455 and combine the set of H slice-hit signals 415-455 into a match result 445. [0031] System 401 also includes system processor 460 to executed a program 471 in system memory 470 to accelerate string matching in packet inspection for network applications using filter 480, and to move or increment a pointer 461 into input data stream 420 until a match result 445 is positive (in the case of string matching for packet inspections) or until an end-of-file is reached in the input data steam 420. In some embodiments of system 401, processor 460 may check a copy of match result 445 stored in system memory 470 as a match result 485 when string matching for packet inspections to determine if match result 445 was positive. [0032] The above description is intended to illustrate preferred embodiments of the present invention. From the discussion above it should also be apparent that especially in such an area of technology, where growth is fast and further advancements are not easily foreseen, the invention may be modified in arrangement and detail by those skilled in the art without departing from the principles of the present invention within the scope of the accompanying claims and their equivalents. |
A digital system and method of operation is provided in which several processing resources (340) and processors (350) are connected to a shared translation lookaside buffer (TLB) (300, 310(n)) of a memory management unit (MMU) and thereby access memory and devices. These resources can be instruction processors, coprocessors, DMA devices, etc. Each entry location in the TLB is filled during the normal course of action by a set of translated address entries (308, 309) along with qualifier fields (301, 302, 303) that are incorporated with each entry. Operations can be performed on the TLB that are qualified by the various qualifier fields. A command (360) is sent by an MMU manager to the control circuitry of the TLB (320) during the course of operation. Commands are sent as needed to flush (invalidate), lock or unlock selected entries within the TLB. Each entry in the TLB is accessed (362, 368) and the qualifier field specified by the operation command is evaluated (364). This can be task ID field 302, resource ID field 301, shared indicator 303, or combinations of these. Operation commands can also specify a selected virtual address entry (305). Each TLB entry is modified in response to the command (366) only if its qualifier field(s) match the qualifier(s) specified by the operation command. |
A method of operating a digital system having a processor and associated translation lookaside buffer (TLB), comprising the steps of:executing a plurality of program tasks within the processor;initiating a plurality of memory access requests in response to the plurality of program tasks;caching a plurality of translated memory addresses in the TLB responsive to the plurality of memory access requests;incorporating a task identification value with each translated memory address to identify which of the plurality of program tasks requested the respective translated memory address; andperforming an operation on the TLB that is qualified by the task identification value.The method according to Claim 1, wherein the step of performing an operation comprises invalidating only a portion of the plurality of translated addresses that have the selected task identification value.The method of Claim 1 or 2, wherein the TLB has several levels, and wherein the step of performing an operation encompasses all of the several levels of the TLB.The method according to Claim 1, wherein each memory access request includes a virtual address and a task identification value and wherein the step of performing an operation comprises the steps of:selecting a translated memory address cached in the TLB in response to a memory access request; andcomparing the task identification value included with the memory access request to a task identification value incorporated with the selected translated memory address and indicating a TLB miss if they are not the same.The method according to any previous Claim, further comprising the step of incorporating a second qualifier value with each translated memory address; and wherein the step of performing an operation on the TLB is qualified by both the task identification value and the second qualifier value.The method of Claim 5, wherein the digital system has a plurality of processors and wherein the second qualifier value identifies which of the plurality of processor requested the respective translated memory address.A digital system having a translation lookaside buffer (TLB), the TLB comprising:storage circuitry with a plurality of entry locations for holding translated values, wherein each of the plurality of entry locations includes a first field for a translated value and a second field for an associated qualifier value;a set of inputs for receiving a translation request;a set of outputs for providing a translated value selected from the plurality of entry locations; andcontrol circuitry connected to the storage circuitry, wherein the control circuitry is responsive to an operation command to invalidate selected ones of the plurality of entry locations which have a first qualifier value in the second field.The digital system of Claim 7, wherein the digital system further comprises a second level TLB connected to the TLB, the second level TLB comprising:second level storage circuitry with a plurality of entry locations for holding translated values, wherein each of the plurality of entry locations includes a first field for a translated value and a second field for an associated qualifier value; and wherein the control circuitry is connected to the second level storage circuitry, the control circuitry being responsive to an operation command to invalidate selected ones of the plurality of entry locations in the second storage circuitry which have a first qualifier value in the second field, such that qualified entry locations in the TLB and in the second level TLB are 
invalidated in response to a single operation command.The digital system according to any of Claims 7-8, wherein each of the plurality of entry locations in the storage circuitry and the second storage circuitry contain a third field for a second associated qualifier value; and wherein the control circuitry is responsive to an operation command to invalidate selected ones of the plurality of entry locations which have both a specified task ID value in the second field and a specified resource ID value in the third field.The digital system according to any of Claims 7-9 being a personal digital assistant, further comprising:a processor (CPU) connected to the TLB and thereby connected to access a memory circuit;a display, connected to the CPU via a display adapter;radio frequency (RF) circuitry connected to the CPU; andan aerial connected to the RF circuitry. |
This application claims priority to European Application Serial No. 00402331.3, filed August 21, 2000.This invention generally relates to computer processors, and more specifically to improvements in translation lookaside buffers for address translation, systems, and methods of making.Microprocessors are general purpose processors which provide high instruction throughputs in order to execute software running thereon, and can have a wide range of processing requirements depending on the particular software applications involved.Many different types of processors are known, of which microprocessors are but one example. For example, Digital Signal Processors (DSPs) are widely used, in particular for specific applications, such as mobile processing applications. DSPs are typically configured to optimize the performance of the applications concerned and to achieve this they employ more specialized execution units and instruction sets. Particularly in applications such as mobile telecommunications, but not exclusively, it is desirable to provide ever increasing DSP performance while keeping power consumption as low as possible.To further improve performance of a digital system, two or more processors can be interconnected. For example, a DSP may be interconnected with a general purpose processor in a digital system. The DSP performs numeric intensive signal processing algorithms while the general purpose processor manages overall control flow. The two processors communicate and transfer data for signal processing via shared memory. A direct memory access (DMA) controller is often associated with a processor in order to take over the burden of transferring blocks of data from one memory or peripheral resource to another and to thereby improve the performance of the processor.Modular programming builds a computer program by combining independently executable units of computer code (known as modules), and by tying modules together with additional computer code. Features and functionality that may not be provided by a single module may be added to a computer program by using additional modules.The design of a computer programming unit known as a task (or function) is often accomplished through modular programming, where a specific task is comprised of one module and the additional computer code needed to complete the task (if any additional code is needed). However, a task may be defined as broadly as a grouping of modules and additional computer codes, or, as narrowly as a single assembly-type stepwise command. A computer program may be processed (also called "run" or "executed") in a variety of manners. One manner is to process the computer code sequentially, as the computer code appears on a written page or on a computer screen, one command at a time. An alternative manner of processing computer code is called task processing. In task processing, a computer may process computer code one task at a time, or may process multiple tasks simultaneously. In any event, when processing tasks, it is generally beneficial to process tasks in some optimal order.Unfortunately, different tasks take different amounts of time to process. In addition, the result, output, or end point of one task may be required before a second task may begin (or complete) processing. 
Furthermore, particularly in a multiple processor environment, several tasks may need access to a common resource that has a generally fixed capacity.In order to better manage program tasks and physical memory, a concept of virtual memory and physical memory has evolved. Program task modules are generally compiled and referenced to virtual address. When a task is executed in physical memory, address translation is performed using a cache of translated addresses, referred to as a translation lookaside buffer (TLB). TLBs must be managed to optimize system performance as various tasks are executed.Accordingly, there is needed a system and method for managing task processing and address translation that takes into account active tasks, active resources, and other task processing needs.Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims. In accordance with a first embodiment of the invention, a method is provided for operating a digital system having a processor and associated translation lookaside buffer (TLB). Several program tasks are executed within the processor that initiate a sequence of memory access requests in response to the program tasks. In response to the sequence of memory access requests, a set of translated memory addresses are cached in the TLB. A task identification value is incorporated with each translated memory address to identify which of the program tasks requested the respective translated memory address. An operation is performed on the TLB that is qualified by the task identification value.In a first embodiment, an operation is performed on the TLB that invalidates only a portion of the set of translated addresses that have the selected task identification value.In another embodiment, the TLB has several levels, and the step of performing an operation encompasses all of the levels of the TLB.In another embodiment, each memory access request includes a virtual address and a task identification value and the step of performing an operation includes selecting a translated memory address cached in the TLB in response to a memory access request; and comparing the task identification value included with the memory access request to a task identification value incorporated with the selected translated memory address and indicating a TLB miss if they are not the same.Another embodiment of the invention is a digital system that has a translation lookaside buffer (TLB). The TLB includes storage circuitry with a set of entry locations for holding translated values, wherein each of the set of entry locations includes a first field for a translated value and a second field for an associated qualifier value. There is a set of inputs for receiving a translation request, a set of outputs for providing a translated value selected from the set of entry locations; and control circuitry connected to the storage circuitry. 
The control circuitry is responsive to an operation command to invalidate selected ones of the set of entry locations which have a selected qualifier value in the second field.Particular embodiments in accordance with the invention will now be described, by way of example only, and with reference to the accompanying drawings in which like reference signs are used to denote like parts and in which the Figures relate to the digital system of Figure 1 and in which:Figure 1 is a block diagram of a digital system that includes an embodiment of the present invention in a megacell core having multiple processor cores;Figure 2A and 2B together is a more detailed block diagram of the megacell core of Figure 1;Figure 3A is a block diagram illustrating a shared translation lookaside buffer (TLB) and several associated micro-TLBs (µTLB) included in the megacell of Figure 2;Figure 3B is a flow chart illustrating a method of operating the TLB of Figure 3A;Figure 4 is a block diagram of a digital system similar to Figure 1 illustrating a cloud of tasks that are scheduled for execution on the various processors of the digital system;Figure 5 illustrates a TLB control format used to operate on the TLB and µTLBs of Figure 3A;Figure 6 illustrates operation of the TLB of Figure 3A for selective flushing of an entry for a given task or resource;Figure 7 illustrates control circuitry for adaptive replacement of TLB entries in the TLB of Figure 3A; andFigure 8 is a representation of a telecommunications device incorporating an embodiment of the present invention.Corresponding numerals and symbols in the different figures and tables refer to corresponding parts unless otherwise indicated.Detailed Description of Embodiments of the InventionAlthough the invention finds particular application to Digital Signal Processors (DSPs), implemented, for example, in an Application Specific Integrated Circuit (ASIC), it also finds application to other forms of processors. An ASIC may contain one or more megacells which each include custom designed functional circuits combined with pre-designed functional circuits provided by a design library.Figure 1 is a block diagram of a digital system that includes an embodiment of the present invention in a megacell core 100 having multiple processor cores. In the interest of clarity, Figure 1 only shows those portions of megacell 100 that are relevant to an understanding of an embodiment of the present invention. Details of general construction for DSPs are well known, and may be found readily elsewhere. For example, U.S. Patent 5,072,418 issued to Frederick Boutaud, et al, describes a DSP in detail. U.S. Patent 5,329,471 issued to Gary Swoboda, et al, describes in detail how to test and emulate a DSP. Details of portions of megacell 100 relevant to an embodiment of the present invention are explained in sufficient detail herein below, so as to enable one of ordinary skill in the microprocessor art to make and use the invention.Referring again to Figure 1, megacell 100 includes a control processor (MPU) 102 with a 32-bit core 103 and a digital signal processor (DSP) 104 with a DSP core 105 that share a block of memory 113 and a cache 114, that are referred to as a level two (L2) memory subsystem 112. A traffic control block 110 receives transfer requests from a host processor connected to host interface 120b, requests from control processor 102, and transfer requests from a memory access node in DSP 104. 
The traffic control block interleaves these requests and presents them to the shared memory and cache. Shared peripherals 116 are also accessed via the traffic control block. A direct memory access controller 106 can transfer data between an external source such as off-chip memory 132 or on-chip memory 134 and the shared memory. Various application specific processors or hardware accelerators 108 can also be included within the megacell as required for various applications and interact with the DSP and MPU via the traffic control block.External to the megacell, a level three (L3) control block 130 is connected to receive memory requests from internal traffic control block 110 in response to explicit requests from the DSP or MPU, or from misses in shared cache 114. Off chip external memory 132 and/or on-chip memory 134 is connected to system traffic controller 130; these are referred to as L3 memory subsystems. A frame buffer 136 and a display device 138 are connected to the system traffic controller to receive data for displaying graphical images. A host processor 120a interacts with the external resources through system traffic controller 130. A host interface connected to traffic controller 130 allows access by host 120a to external memories and other devices connected to traffic controller 130. Thus, a host processor can be connected at level three or at level two in various embodiments. A set of private.peripherals 140 are connected to the DSP, while another set of private peripherals 142 are connected to the MPU.Figure 2, comprised of Figure 2A Figure 2B together, is a more detailed block diagram of the megacell core of Figure 1. DSP 104 includes a configurable cache 203 that is configured as a local memory 200 and data cache 202, and a configurable cache 204 that is configured as instruction cache 206 and a RAM-set 208, which are referred to as level one (L1) memory subsystems. The DSP is connected to the traffic controller via an L2 interface 210 that also includes a translation lookaside buffer (TLB) 212. A DMA circuit 214 is also included within the DSP. Individual micro TLBs (µTLB) 216-218 are associated with the DMA circuit, data cache and instruction cache, respectively.Similarly, MPU 102 includes a configurable cache 223 that is configured as a local memory 220 and data cache 222, and a configurable cache 224 that is configured as instruction cache 226 and a RAM-set 228, again referred to as L1 memory subsystems. The MPU is connected to traffic controller 110 via an L2 interface 230 that also includes a TLB 232. A DMA circuit 234 is also included within the MPU. Individual micro TLBs (µTLB) 236-238 are associated with the DMA circuit, data cache and instruction cache, respectively.L2 traffic controller 110 includes a TLB 240 and one or more micro-TLB (µTLB) 242 that are associated with system DMA block 106, host processor interface 120b for a host connected at level two, and other application specific hardware accelerator blocks. Similarly, L3 traffic controller 130 includes a □TLB controllably connected to TLB 240 that is associated with system host 120a at level three. This □TLB is likewise controlled by one of the megacell 100 processors.Memory Management UnitAt the megacell traffic controller level, all addresses are physical. They have been translated from virtual to physical at the processor sub-system level by a memory management unit (MMU) associated with each core, such as DSP core 105 and MPU core 103. 
At the processor level, access permission, supplied through MMU page descriptors, is also checked, while at the megacell level protection between processors is enforced by others means, which will be described in more detail later. Each MMU includes a TLB and its associated □TLBs.The translation lookaside buffer (TLB) caches contain entries for virtual-to-physical address translation and page descriptor information such as access permission checking, cache policy for various levels, etc. If the TLB contains a translated entry for the virtual address, the access control 'logic determines whether the access is permitted. If access is permitted, the MMU generates the appropriate physical address corresponding to the virtual address. If access is not permitted, the MMU sends an abort signal via signal group 244 to the master CPU 102. The master CPU is identified by the value of the R-ID field. On a slave processor such as a hardware accelerator the R-ID is equal to the R-ID of the master CPU.Upon a TLB miss, i.e., the TLB does not contain an entry corresponding to the virtual address requested, an exception is generated that initiates a translation table walk software routine. The TLB miss software handler retrieves the translation and access permission information from a translation table in physical memory. Once retrieved, the page or section descriptor is stored into the TLB at a selected victim location. Victim location selection is done by software or with hardware support, as will be described later.Translation TableTo provide maximum flexibility, the MMU is implemented as a software table walk, backed up by TLB caches both at the processor sub-system and megacell level. This allows easy addition of new page size support or new page descriptor information if required. A TLB miss initiates a TLB handler routine to load the missing reference into the TLB. At the Megacell 100 level, a TLB miss asserts a miss signal in signal group 244 and is routed via system interrupt router 250 to the processor having generated the missing reference or to the processor in charge of the global memory management, via interrupt signals 251, 252.Translation tables and TLB cache contents must be kept consistent. A flush operation is provided for this reason and will be described in more detail later.An address reference is generally located within the □TLB or main TLB of each processor sub-system; however, certain' references, such as those used by system DMA 106 or host processor 120, for example, to access megacell memories can be distributed within L2 traffic controller 110 and cached into L2 system shared TLB 240. Because system performance is very sensitive to the TLB architecture and size, it is important to implement efficient TLB control commands to lock entries for critical tasks or unlock and flush those entries when a task is deleted without degrading the execution of other tasks. Therefore, each TLB and L2 cache entry holds a task-ID. Commands are supplied to flush locked or unlocked entries of a TLB/□TLB corresponding to a selected task.As part of the page descriptor information, the MMU provides cacheability and bufferability attributes for all levels of memory. The MMU also provides a "Shared" bit for each entry to indicate that a page is shared among multiple processors (or tasks). This bit, as standalone or combined with the task-ID, allows specific cache and TLB operation on data shared between processors or/and tasks. 
The MMU may also provide additional information, such as memory access permission and access priority as described later.All megacell memory accesses are protected by a TLB. As they all have different requirements in term of access frequencies and memory size, a shared TLB with individual □TLB backup approach has been chosen to reduce the system cost at the megacell level. This shared TLB is programmable by each processor. The architecture provides enough flexibility to let the platform work with either independent operating systems (OS) on each processors or a distributed OS with a unified memory management, for example.The present embodiment has a distributed operating system (OS) with several domains corresponding to each processor but only a single table manager for all processors. Slave processors do not manage the tables. In a first embodiment slave processors R-ID are equal to the R-ID of the master CPU. In another embodiment, they could, however, have a different R-ID to control their TLB entries lock/unlock entries corresponding to some of their own tasks or flush all their entries, when putting themselves in sleep mode to free entries for the others processors. Having different R-ID provides a means to increase security in a concurrent multi-processor environment, processor X can not access memory allocated to processor Y.In another embodiment with several independent OS(s), for example, there will be independent tables. These tables can be located in a memory space only viewed by the OS that they are associated with in order to provide protection from inadvertent modification by another OS. As they manage the virtual memory and task independently, the R-ID provides the necessary interprocessor security. R-Ids are managed by a single master CPU. This CPU can make TLB operations on all TLB entries. TLB operation or memory accesses from slave processor are restricted by their own R-ID. The CPU master will have rights to flush out entries belonging to another processor in a different OS domain.The organization of the data structures supporting the memory management descriptor is flexible since each TLB miss is resolved by a software TLB-miss handler. These data structures include the virtual-to-physical address translation and all additional descriptors to manage the memory hierarchy. A list of these descriptors and their function is described inTable 2. Table 1 includes a set of memory access permission attributes, as an example. In other embodiments, a processor may have other modes that enable access to memory without permission checks.Table 1 -No accessNo accessRead onlyNo accessRead onlyRead onlyRead/WriteNo accessRead/WriteRead onlyRead/WriteRead/WriteTable 2 -Execute Neverprovides access permission to protect data memory area from being executed. This information can be combined with the access permission described above or kept separate.Sharedindicates that this page may be shared by multiple tasks across multiple processor.CacheabilityVarious memory entities such as individual processor's cache and write buffer, and shared cache and write buffer are managed through the MMU descriptor. The options included in the present embodiment are as follows: Inner cacheable, Outer cacheable, Inner Write through/write back, Outer write through/write back, and Outer write allocate. The terms Inner and outer refer to levels of caches that are be built in the system. The boundary between inner and outer is defined in specific embodiment, but inner will always include L1 cache. 
In a system with 3 levels of caches, the inner correspond to L1 and L2 cache and the outer correspond to L3 due to existing processor systems. In the present embodiment, inner is L1 and outer is L2 cache.Endianismdetermines on a page basis the endianness of the transfer.priorityIndicates a priority level for the associatedmemory address region. Memory access can be prioritized based on this priority value.MMU/TLB Control OperationFigure 3A is a block diagram illustrating a shared translation look-aside buffer (TLB) 300 and several associated micro-TLBs (µTLB) 310(0)-310(m) included in megacell 100 of Figure 2. On a µTLB miss, the shared TLB is first searched. TLB controller 320 is alerted by asserting a µTLB miss signal 324. In case of a hit on the shared TLB, the □TLB that missed is loaded with the entry content of the shared TLB 300. In the case of a miss in shared TLB 300, the shared TLB alerts TLB controller 320 by asserting a TLB miss signal 326. Controller 320 then asserts an interrupt request signal 328 to system interrupt controller 250. Interrupt controller 250 asserts an interrupt to the processor whose OS supervises the resource which caused the miss. A TLB entry register 330 associated with TLB controller 320 is loaded by a software TLB handler in response to the interrupt. Once loaded, the contents of TLB entry register 330 are transferred to both shared TLB 300 and the requesting µTLB at a selected victim location as indicated by arcs 332 and 334.A separate TLB entry register 330 is only one possible implementation and is not necessarily required. The separate register TLB entry register is a memory mapped register that allows buffering of a complete TLB entry (more than 32 bits). A TLB value is not written directly in the TLB cache but is written to the TLB entry register first. Because of the size of an entry, several writes are required to load the TLB entry register. Loading of a TLB cache entry is then done in a single operation "Write TLB entry". Advantageously, other uTLBs associated with other modules can continue to access the shared TLB while the TLB entry register is being loaded, until a second miss occurs. Advantageously, by controlling access to the TLB via the TLB entry register, CPUs have no direct access to TLB cache internal structure and thus the risk of partial modifications inconsistent with the MMU tables is avoided.The sequence of operations to update a TLB cache entry after a miss is:1 - the software TLB handler writes to the TLB entry register,2- the software TLB handler sends a command to write the TLB entry, which transfers a value from TLB entry register to a preselected victim TLB cache entry; and3- control circuitry checks and preselects a next victim TLB entry, in preparation for the next miss. In this embodiment, this step is generally performed in background prior to the occurrence of a miss.Advantageously, TLB cache entries can be preemptively updated under OS software control to prevent TLB miss by preloading a new entry, using the following sequence of operation:1- control circuitry checks and selects a TLB entry, referred to as a victim TLB cache entry.2- the software TLB handler writes to the TLB entry register, and3- the software TLB handler sends a command to write the TLB entry, which transfers a value from TLB entry register to the selected victim TLB cache entry.The priority on the shared TLB is managed in the same way as priority on a memory access. One or more resources can be using the shared TLB. 
One or more resources can program the shared TLB. The replacement algorithm for selecting the next victim location in the shared TLB is under hardware control. A victim pointer register 322 is maintained for each TLB and µTLB to provide a victim separate pointer for each. A typical embodiment will use a round robin scheme. Different TLBs within a single megacell can use different replacement schemes. However, in an embodiment in which the system has a master CPU with a distributed OS, this master CPU could also bypass the hardware replacement algorithm by selecting a victim entry, reading and then writing directly to the Shared TLB, for example.In this embodiment, each shared TLB has 256 entries. Each µTLB is generally much smaller, i.e., has fewer entries, than the shared TLB. In various embodiments, each shared TLB has 64-256 or more entries while µTLBs generally have 4-16 entries. The penalty for a miss in a µTLB is small since a correct entry is generally available from the shared TLB. Therefore, the present embodiment does not provide direct control of the victim pointers of the various µTLBs; however, direct control of the victim pointer of shared TLBs, such as 212, 232, and 240, is provided.Each entry in a TLB has a resource identifier 301 along with task-ID 302. Resource-IDs and task IDs are not extension fields of the virtual address (VA) but simply address qualifiers. Resource IDs are provided by a resource-ID register associated with each resource; such as R-ID register 342a associated with resource 340 and R-ID register 342n associated with resource 350. Resource 340 is representative of various DMA engines, coprocessor, etc within megacell 100 and/or an external host connected to megacell 100. Resource 350 is representative of various processors within megacell 100. Each resource 340, 350 typically has its own associated R-ID register; however, various embodiments may choose to provide resource ID registers for only a selected portion of the resources. A task ID is provided by a task-ID register, such as task-ID register 344a associated with resource 340 and task-ID register 344n associated with resource 350. A task register associated with a non-processor resource, such as DMA, a coprocessor, etc, is loaded with a task value to indicate the task that it is supporting.In another embodiment, only processor resources 340, 350 that execute program modules have an associated programmable task-ID register. In this case, a system wide default value may be provided for access requests initiated by non-processor resources such as DMA. The default value may be provided by a programmable register or hardwired bus keepers, for example.Advantageously, with the task-ID, all entries in a TLB belonging to a specific task can be identified. They can, for instance, be invalidated altogether through a single operation without affecting the other tasks. Advantageously, the resource ID permits discrimination of different tasks being executed on different resources when they have the same task number. Task-ID number on the different processors might not be related; therefore, task related operations must be, in some cases, qualified by a resource-ID.In another embodiment, the R-ID and Task_ID registers are not necessarily part of the resource core and can be located elsewhere in the system, such as a memory mapped register for example, and associated to a resource bus. 
The only constraint is that a task_ID register related to a CPU must be under the associated OS control and updated during context switch. R-ID must be set during the system initialization. In some embodiments at system initialization, all R-ID and Task-ID registers distributed across the system are set to zero, which is a default value that causes the field to be ignored. In other embodiments, a different default value may be used. In other embodiments, R-ID "registers" provide hardwired values.Referring still to Figure 3A, each TLB entry includes a virtual address field 305 and a corresponding physical address field 308 and address attributes 309. Various address attributes are described in Table 1 andTable 2. Address attributes define conditions or states that apply to an entire section or page of the address space that is represented by a given TLB entry. An S/P field 306 specifies a page size such as 64kB and 4kB for example. Naturally, the page size determines how many most significant (ms) address bits are included in a check for an entry.Each TLB entry also includes "shared" bit 303 and a lock bit 304. All entries marked as shared can be flushed in one cycle globally. A V field 307 indicates if an associated TLB cache entry is valid. V field 307 includes several V-bits that are respectively associated with R-ID field 301 to indicate if a valid R-ID entry is present, task-ID field 302 to indicate if a valid task-ID entry is present, and virtual address field 305 to indicate if a valid address entry is present. These valid bits enable the compare logic with their associated field.As mentioned earlier, the resource ID field and task ID field in each entry of the TLB/µTLB can be used to improve security. During program task execution, each transaction request is checked by the miss control circuitry of the TLB/µTLB to determine if the entry is allowed for a specific resource or for all resources and for a specific task or for all tasks. For example, if a request is received and a valid entry is present for the proffered virtual address but a task ID or R-ID which accompany the request does not match the corresponding valid task ID and R-ID fields of the entry, then a miss is declared. If the task ID and/or R-ID fields of the entry are marked as invalid, then they are ignored.Figure 3B is a flow chart illustrating a method of operating the TLB of Figure 3A. As discussed above, the TLB is filled during the normal course of action by a set of translated address entries along with qualifier fields that are incorporated with each entry. As will be described in more detail below, operations can now be performed on the TLB that are qualified by the various qualifier fields.In step 360, an operation command is received by the control circuitry of the TLB. This command is sent by the MMU manager during the course of operation. Commands are sent as needed to flush (invalidate), lock or unlock selected entries within the TLB. These operations will be described in detail later.Step 362 accesses a first entry in the TLB and reads the qualifier field specified by the operation command. This can be task ID field 302, resource ID field 301, shared indicator 303, or combinations of these. Operation commands can also specify a selected virtual address entry.Step 364 compares the qualifier specified by the operation command with the qualifier field read from the TLB entry. If they match, then the operation is performed on that entry in step 366. 
If they do not match, then the next entry is accessed in step 368 and compare step 364 is repeated for the next entry.Step 366 performs the operation specified in the operation command on each entry whose qualifier field(s) match the operation command. In this embodiment, the operation can invalidate an entry by resetting valid bit field 307, and lock or unlock an entry by appropriate setting of lock bit 304.Step 368 access each next TLB entry until all entries have been accessed. In this embodiment, all µTLBs associated with a shared TLB are also accessed as part of the same operation command.Other embodiments may provide additional or different operations that are qualified by the qualifier fields of the present embodiment or by additional or other types of qualifier fields. For example, resource type, power consumption, processor speed, instruction set family, and the like may be incorporated in the TLB and used to qualify operations on the TLB.Figure 4 is a block diagram of a digital system similar to that of Figure 1 illustrating cloud of tasks that are scheduled for execution on the various processors of the digital system. Typically, each software task includes a task priority value that is commonly used by an operating system to schedule an order of execution for a set of pending tasks 1440.In this illustration, a circle such as 1442 represents a task, with a task name "c" and a task priority of 12, for example. Likewise, task 1443 has a task name "r" and a priority of 15, where a lower number indicates a higher priority. If the set of tasks 1440 are assigned to three processors, then an operating system on each processor forms a ready to execute queue, such as ready queue 1446 in which task "c" is scheduled for first execution, then task "a" and finally task "b" according to priority values of 12, 15, and 50 respectively. The Task ID register in each processor is loaded when a task is invoked.Table 3 illustrates several portions of instruction code sequences in which a task is spawned. From line 1 to line 5, task "c" is active and spawns a new task, "audio" on line 5. The kernel is then invoked to instantiate the new task and create the associated TCB. An eight bit (numbers of bits can be more or less) task-ID field is memorised in the TCB at line 11. During the context switch (reschedule in line 13) before launching the "audio" task, the kernel loads task-ID register 1412 with the task-ID value held in the TCB (Table 4) or in another table. At line 14, the new task is now active.Table 4 is an example task control block that is used to define a task-ID. Typically, the OS uses a 32-bit task-ID that is in fact an address that enables the OS to locate task information (TCB). At line 4, an execution priority value is defined that is used by the operating system to schedule execution of the task. At line 5, a task-ID value is defined that is used to set the task ID register when the task is instantiated.In other embodiments, other means than a TCB may be provided for storing the task ID.Referring again to Figure 3A, task-ID field 302 can be set in response to information provided at line 5 of the TCB illustrated in Table 4. This information can be used directly by the MMU manager when loading a new entry in TLBs. 
This information could also be part of the page table descriptor in the MMU page table and loaded as part of the MMU software table walk.In the present embodiment, task-ID information is not maintained in page tables but is inserted by the TLB miss handler at the time of a TLB fault by using the task_ID value of the transaction request that caused the TLB fault. Other embodiments may use other means for setting the task-ID field in the TLB entry, such as by storing this information in a separate table or in the MMU page tables, for example. In the present embodiment the Valid bit associated with the task-ID field is loaded through the MMU table walk and is part of the MMU tables. Thus, when the TLB miss handler accesses a page table in response to a TLB miss, it queries the task-ID valid bit field of the MMU page table; if this bit field is asserted, then the TLB miss handler asserts the task-ID valid bit in the TLB entry and loads the task-ID value from the task-ID register of the requester that caused the TLB miss into task ID field 302. If the task-ID valid bit field of the MMU page table is not asserted, then the TLB miss handler deasserts the task-ID valid bit in the TLB entry and the task-ID value from the task-ID register of the requester that caused the TLB miss is ignored.In the present embodiment, the shared bit field 303 is loaded through the MMU table walk and is part of the MMU tables. Typically, shared pages are defined by the OS in response to semaphore commands, for example.In another embodiment, shared bit information is not maintained in page tables but is inserted by the TLB-miss handler at the time of a TLB fault by accessing the TCB directly based on the task ID of the request that caused the fault. The TCB is located by the TLB-miss handler via a look-up table keyed to the task ID value. Other embodiments may use other means for setting the shared bit in the TLB entry by storing this information in a separate table, for example.R-ID field 301 is set by using the R-ID of the request that caused the fault. A Master CPU could also load value in this field during the programming of a TLB entry by taking this information from the MMU tables or separate tables, for example.Figure 5 illustrates a TLB control word format used to operate on the TLB and µTLBs of Figure 3A in response to control operations as defined in Table 5. TLB control word format 400 includes a task-ID field 402, resource-ID field 404 and virtual address field 406. Note that the virtual address field refers to a page address, therefore lsb address bits that refer within a page are not needed. In some embodiments, certain of the processors might not be allowed to invalidate entries other than their own.As described previously, during execution of a program, the R-ID and Task-ID field comes from a register associated with a requester during each memory system access request. In a system embodiment with multi-processors with multiple independent Operating Systems (OS), the R-ID is static and indicates which of the resources is accessing a given location (address). The Task-ID indicates which of the tasks (or processes) of this resource is doing the access. The task ID is dynamic and changes on each context switch. For these systems, restricting operations on a system TLB to the associated resource is important to optimize the main system TLB usage. Each OS controls the TLB entries it uses.However, another system embodiment might be controlled by middleware that supports a unified task and memory management. 
For those, the notion of R-ID might disappear and be treated as part of the task_ID. Restriction of TLB command based on R-ID would not be necessary in those systems and the field R-ID could be re-used to extend the task-ID field. In that case, TLB control format 410 may be used in which the R_Id field is not needed. Recall that the R-ID of the requestor is provided with each transaction request, therefore control operations specified using format 410 can be confined to entries associated with the requestor.A processor can initiate various control operations on a TLB by writing a control word conforming to appropriate format to a specific memory mapped address associated with TLB controller 320. This control word can specify a target virtual address entry and an associated task ID or an associated resource ID. Depending on the operation, unneeded fields are ignored. For example, the operation "invalidate all entries related to an R-ID" will only use the R-ID field 404. The format and type of operation can be distinguished by using different memory mapped addresses, for example. Each address corresponds to a different TLB operation. Another embodiment would be to use a different processor instruction opcode for each of the TLB operation that would drive the appropriate control signal connected to TLB controller 2232. A state machine in TLB controller 320 then executes the requested control operation. These TLB control operations are listed in Table 5. These operations are described in more detail below. For many of the operations, certain processors in an embodiment will be restricted to only affecting their own entries. This restriction is enforced by using the resource-ID signals 2106 provided with each write to TLB controller 320 as part of each memory access request.In another embodiment, the control operations can be invoked by executing an instruction that invokes a hardware or software trap response. As part of this trap response, a sequence of instructions can be executed or a control word can be written to a selected address, for example. In another embodiment, one of the processors may include instruction decoding and an internal state machine(s) to perform a TLB or Cache control operation in response to executing certain instructions which may include parameters to specify the requested operation, for example.For an "invalidate entry" operation, a Virtual page address (VA) is provided in VA field 406 of the control word and the other fields of the control word are ignored. This generates an entry invalidate operation on the corresponding virtual address entry. Note that all processors of a given megacell embodiment might not be allowed to invalidate entries others than their own. In that case, the R-ID value from the R-ID register of the requestor is used to qualify the operation.For an "invalidate all entries related to a task" operation, all entries corresponding to the provided task identifier are invalidated. This allows a master-processor to free space from the shared TLB by invalidating all entries of a task belonging to another processor. In this case, the control word provides a task-ID value and an R_ID value. Processors other than the master-processor can free space from the shared TLB by invalidating all entries of one of its own tasks. This operation invalidates all the entries corresponding to the provided task and resource identifier or to a task of the resource requesting the operation. 
The R-ID value from the R-ID register of the requestor is used to qualify the operation.For an "invalidate all entry related to a Resource" operation, all entries corresponding to RID field 404 of the control word are invalidated. Note that all processors of a given megacell embodiment might not be allowed to invalidate entries other than their own. This provides, however, the capability to a master processor to free space from the shared TLB by invalidating all entries of another processor. The R-ID value from the R-ID register of the requestor is used to qualify the operation.For an "invalidate all shared entries" operation, all entries in the TLB marked as shared for the requester are invalidated. The R-ID register value limits the effect of this operation, as discussed above.For an "invalidate all entries of a task except shared entries" operation, all entries in the TLB for a task specified in the control word not marked as shared for the requester are invalidated. The R-ID value from the R-ID register of the requestor limits the effect of this operation, as discussed above.For an "invalidate all entries" operation, all entries in the TLB matching the R-ID of the requester are invalidated. For the master CPU, the operation invalidate all entry regardless of the R-ID. If all of the R-ID registers distributed in the system have the same value, then this operation invalidates all entries.For a "lock/unlock entry" operation, a control word is written providing the VA which needs to be locked/unlocked. This operation sets or resets lock field 304 in the selected entry. Restriction on R-ID applies as above.For a "lock/unlock all entry related to a task" operation, a control word is written providing the task identifier which needs to be locked/unlocked. Restriction on R-ID applies as above.In the case in which an independent OS is running on each processor, each OS can initiate the above operations. In that case, these operations must be restricted to entries with a resource identifier (R-Id) belonging to the requester.In the case of a single master OS, task and memory management can be viewed as unified, removing the need for an R-Id. The R-ID can be an extension of the task-ID. In an embodiment, in which the R-ID is hard-coded, the field R-ID in the TLB simply needs to be disabled (associated Valid bit is cleared) via a configuration control register. Disabling the R-ID is equivalent to having a single R-ID for all the system or for part of the system.As mentioned above, a global control bit can be used in an embodiment to determine if all the above functions must be limited to the entry corresponding to the resource ID requesting the operation.Although it is preferable to have the same page size for memory management on all processors, it is not mandatory. In a shared system, the TLB supports all page sizes of the system, in response to S/P field 306. Therefore, in a different embodiment, a TLB may support a different set of page sizes.Table 5 also lists some additional operations that are provided which allow a software TLB handler to access the shared system TLB: Read TLB entry, Write TLB entry, Check and select victim TLB entry, and Set victim TLB entry. These are described in more detail below.For a "Read TLB entry" operation, an entry in the TLB pointed to by the victim pointer is transferred into TLB entry register 330. The TLB entry register can then be read and analyzed by the software TLB handler. 
Again this operation might be restricted to the master CPU for security.For a "write TLB entry" operation, the contents of the TLB entry register is transferred to a selected victim entry of the TLB.The "check and select victim TLB entry" operation has multiple functions. Its first purpose is to determine an index value for the replacement of an entry. However, it can also be used to find out if an entry is already in the TLB. The R_ID & Task_ID & VA fields of a corresponding entry are checked for a match against a proffered virtual address entry. If there is no match, then the victim pointer is positioned according to the chosen replacement algorithm. This replacement can be random, cyclic, etc. The second usage is to verify if a given page is present in the TLB. If a matching entry is found, the victim entry points to this matching entry, and a flag bit in the status register is set to indicate this condition.The "Set victim TLB entry" operation allows the software TLB handler to select a particular entry as the next victim. This is useful to support certain lock mechanisms software replacement algorithms.As indicated earlier, each control operation is performed by a state machine within TLB control circuitry 320 in response to writing to a selected memory mapped address. For example, for the operation "invalidate all entries related to a task", all entries with a matching task-id TAG are invalidated in response to a single command, including the shared TLB and the associated µTLBs. In the present embodiment in which the TLB is a fully associative memory, the operation can be done in one cycle or as a loop as most appropriate.As mentioned above, control operation affect the shared TLB and the associated µTLBs for the various operations based on task-ID, resource-ID and shared bits. In an embodiment in which both uTLBs and TLB are fully associative, the flush and/or Lock/unlock can be done by the same command in the same cycle. But if the uTLB is fully associative and TLB is set associative, for example, a single command is still used, but the operation into the set associative TLB will be executed entry by entry by a HW loop. This will take longer time. If both the uTLB and TLB are fully associative there will typically be a single control block. If the uTLB is fully associative and TLB set associative, there may be separate control blocks 320, but the same command effects all of the control blocks. Alternatively, an embodiment may require sending copies of the operation command separately to each of the separate control blocks.Figure 6 is a simplified block diagram of the TLB of Figure 3A and will now be referred to explain selective invalidation of an entry for a given task or resource, as listed in Table 5. Processor 2100(m) is representative of one or more requestors that access TLB 2130. A physical address bus 2104(m), resource ID signals 2106(m), and task ID signals 2108(n) are provided by each processor 2100(n) for each TLB request. Traffic controller 2110 provides request priority selection and sends the highest priority request to TLB 2130 using physical address bus 2104, resource ID signals 2106, and task ID signals 2108 to completely identify each request.A task-ID field 302 and/or a resource ID field 301 stored as independent fields in the TLB TAG array is used to selectively invalidate (flush) all entries of a given task or a given resource (requester). 
A state machine within control circuitry 2132 receives a directive from a processor to perform an invalidation operation, as described above. The operation directive specifies which task-ID is to be flushed using format 400 or 410 (see Figure 5).For operations which use task ID field 402 in the control word, state machine 2132 accesses each entry in TLB 2130, examines the task-ID field, and if there is a match that entry is flushed by marking the valid bits in its valid field 307 as not valid. Thus, a single operation is provided to flush all entries of a given task located in a TLB. As discussed above, in this embodiment, the TLB cache is made of several levels of set associative TLB and µTLB, and all levels are flushed simultaneously in response to a single operation directive command by accessing each entry sequentially in a hardware controlled loop.For operations which use both task ID field 402 and R-ID field 404 in the control word, state machine 2132 accesses each entry in TLB 2130, examines the task-ID field and the resource ID field, and if there is a match in both the task ID and R-ID fields that entry is flushed by marking all valid bits in its valid field 307 as not valid. Advantageously, this allows discrimination of entries belonging to tasks from different resources that have the same task ID number. When the R-ID valid bit is set, an entry is not flushed if its R-ID field 301 does not match the value provided on R-ID signals 2106. This operation only invalidates entries with a valid task-ID.In a similar manner, the selective invalidation operation "Invalidate all entries related to a R-ID" is performed by examining the R-ID 301 field of each entry and if there is a match in the R-ID field that entry is flushed by marking its valid field 307 as not valid. This operation only invalidates entries with a valid R-ID.Likewise, the selective invalidation operation "Invalidate all shared entries" is performed by examining the share field 303 of each entry and if there is a match in the shared field that entry is flushed by marking its valid field 307 as not valid. All entries marked as shared can be flushed in one cycle.In the present embodiment, when shared entries are flushed, state machine 2132 ignores the task ID field since shared page entries may be used by different tasks having different task IDs. In an alternative embodiment, shared entry flushing could also be qualified by the task ID field. Alternatively, shared entry flushing could also be qualified by the task ID field, but only if the task ID valid bit in valid field 307 is asserted indicating a valid task ID value is in field 302.Figure 7 is a simplified block diagram of the TLB of Figure 3A and will now be referred to explain selective lock/unlocking of an entry for a given task or resource, as listed in Table 5. Advantageously, in this multi-processor system with system shared TLB, an innovative scheme of adaptive replacement is provided for controlling the TLB on a task basis, as discussed above. In order to support such a function in the most optimized way, an adaptive replacement algorithm taking into account locked entries and empty entries is provided. TLB full signal 2240 is asserted when one or more valid bits in field 307 is asserted for each TLB entry location. 
TLB miss signal 2242 is asserted to indicate a miss occurred in response to a transaction request from processor 2100(m), which invokes a TLB handler as described earlier.When the TLB is full with no locked entries, pseudorandom replacement based on a simple counter (Victim CNT) 2234 is used to select the victim entry. Another embodiment would be to keep a pseudo random replacement and to check the lock bit on a miss. If it is locked, signal 2244 is asserted and the victim counter is incremented further until a non-locked entry is found. This is done automatically by the control circuitry connected to victim counter 2234 so that response time of the TLB handler routine is not impacted.When the TLB is not full, the victim counter is incremented until an empty entry is found. This is done automatically by the control circuitry connected to victim counter 2234 so that response time of the TLB handler routine is not impacted.After a flush entry operation is performed, the victim "counter" is updated with the location value of the flushed entry and stays unchanged until a new line is loaded in order to avoid unnecessary searching.An alternative implementation provides the capability to do the victim entry search instantaneously by providing in an external logic the lock and valid bit or by using a CAM, for example. In another alternative embodiment, a shift register and associated circuitry is used to point to the next location in the TLB that is either not valid or valid and not locked.Digital System EmbodimentFigure 8 illustrates an exemplary implementation of an example of such an integrated circuit in a mobile telecommunications device, such as a mobile personal digital assistant (PDA) 10 with display 14 and integrated input sensors 12a, 12b located in the periphery of display 14. As shown in Figure 8, digital system 10 includes a megacell 100 according to Figure 1 that is connected to the input sensors 12a,b via an adapter (not shown), as an MPU private peripheral 142. A stylus or finger can be used to input information to the PDA via input sensors 12a,b. Display 14 is connected to megacell 100 via local frame buffer similar to frame buffer 136. Display 14 provides graphical and video output in overlapping windows, such as MPEG video window 14a, shared text document window 14b and three dimensional game window 14c, for example.Radio frequency (RF) circuitry (not shown) is connected to an aerial 18 and is driven by megacell 100 as a DSP private peripheral 140 and provides a wireless network link. Connector 20 is connected to a cable adaptor-modem (not shown) and thence to megacell 100 as a DSP private peripheral 140 provides a wired network link for use during stationary usage in an office environment, for example. A short distance wireless link 23 is also "connected" to ear piece 22 and is driven by a low power transmitter (not shown) connected to megacell 100 as a DSP private peripheral 140. Microphone 24 is similarly connected to megacell 100 such that two-way audio information can be exchanged with other users on the wireless or wired network using microphone 24 and wireless ear piece 22.Megacell 100 provides all encoding and decoding for audio and video/graphical information being sent and received via the wireless network link and/or the wire-based network link.It is contemplated, of course, that many other types of communications systems and computer systems may also benefit from the present invention, particularly those relying on battery power. 
Examples of such other computer systems include portable computers, smart phones, web phones, and the like. As power dissipation and processing performance is also of concern in desktop and line-powered computer systems and microcontroller applications, particularly from a reliability standpoint, it is also contemplated that the present invention may also provide benefits to such line-powered systems.Fabrication of the digital systems disclosed herein involves multiple steps of implanting various amounts of impurities into a semiconductor substrate and diffusing the impurities to selected depths within the substrate to form transistor devices. Masks are formed to control the placement of the impurities. Multiple layers of conductive material and insulative material are deposited and etched to interconnect the various devices. These steps are performed in a clean room environment.A significant portion of the cost of producing the data processing device involves testing. While in wafer form, individual devices are biased to an operational state and probe tested for basic operational functionality. The wafer is then separated into individual dice which may be sold as bare die or packaged. After packaging, finished parts are biased into an operational state and tested for operational functionality.The digital systems disclosed herein contain hardware extensions for advanced debugging features. These assist in the development of an application system. Since these capabilities are part of the megacell itself, they are available utilizing only a JTAG interface with extended operating mode extensions. They provide simple, inexpensive, and speed independent access to the core for sophisticated debugging and economical system development, without requiring the costly cabling and access to processor pins required by traditional emulator systems or intruding on system resources.As used herein, the terms "applied," "connected," and "connection" mean electrically connected, including where additional elements may be in the electrical connection path. "Associated" means a controlling relationship, such as a memory resource that is controlled by an associated port. The terms assert, assertion, de-assert, de-assertion, negate and negation are used to avoid confusion when dealing with a mixture of active high and active low signals. Assert and assertion are used to indicate that a signal is rendered active, or logically true. De-assert, de-assertion, negate, and negation are used to indicate that a signal is rendered inactive, or logically false.While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various other embodiments of the invention will be apparent to persons skilled in the art upon reference to this description. For example, in another embodiment, the TLB may be limited to a single processor and not shared, or it may include only a single level without µTLBs.In another embodiment, the TLB may be controlled by other means than a state machine controller, such as directly by an associated processor, for example.In another embodiment, there may be several distinct MMUs with associated TLBs, wherein certain of the TLBs may include aspects of the invention and certain others may not.It is therefore contemplated that the appended claims will cover any such modifications of the embodiments as fall within the true scope and spirit of the invention. |
An apparatus and method for predicting quantities of data required by requesting devices capable of requesting unspecified quantities of data from storage device, in which prediction of quantities that will be required are made based on past patterns of quantities of data required in past transfers. |
What is claimed is: 1. A method, comprising:monitoring patterns of quantities of data transferred in a plurality of earlier transfers of data from a storage device to a requesting device capable of initiating a request for data from the storage device; predicting, using at least one pattern, a quantity of data that will be required in a future transfer of data from the storage device to a requesting device that occurs after a future reoccurrence of a pattern of quantities of data transferred, and associating that prediction with that pattern; receiving a request from a requesting device for an unspecified quantity of data from the storage device; reading data, using a prediction, from the storage device in response to the request; and selectively modifying a prediction after two or more occasions in which the prediction proves to be inaccurate. 2. The method of claim 1, wherein the association of patterns and predictions is stored in a manner using patterns to select a prediction from among a plurality of predictions.3. The method of claim 1, wherein an indicator of past occasions in which the prediction proved to be inaccurate is associated with patterns and stored in a manner using patterns to select an indicator from among a plurality of indicators.4. The method of claim 1, wherein separate associations of patterns and predictions are maintained for each requesting device among a plurality of requesting devices.5. The method of claim 1, wherein separate associations of patterns and predictions are maintained for each bus by which a requesting device receives the data requested.6. An apparatus for predicting quantities of data that will be required in a transfer of data from a storage device, comprising:logic to monitor patterns of quantities of data transferred in at least two earlier transfers of data from the storage device to a requesting device capable of initiating a request for data from the storage device; logic to use at least one pattern to make a prediction of a quantity of data that will be required in a future transfer of data from the storage device to a requesting device that occurs after a future reoccurrence of a pattern of quantities of data transferred, and associating that prediction with that pattern; logic to use a prediction to read a quantity of data from the storage device in response to a request from a requesting device for an unspecified quantity of data from the storage device; and logic to selectively modify a prediction after two or more occasions in which the prediction proves to be inaccurate. 7. The apparatus of claim 6, wherein the association of patterns and predictions is stored in a manner using patterns to select a prediction from among a plurality of predictions.8. The apparatus of claim 6, wherein an indicator of past occasions in which the prediction proved to be inaccurate is associated with patterns and stored in a manner using patterns to select an indicator from among a plurality of indicators.9. The apparatus of claim 6, wherein separate associations of patterns and predictions are maintained for each requesting device among a plurality of requesting devices.10. 
Art apparatus for predicting quantities of data that will be required in a transfer of data from a storage device, wherein the apparatus is incorporated into a bus interface for a first bus that supports at least one type of transfer in which the quantity of data to be transferred cannot be specified, the apparatus comprising:logic to monitor patterns of quantities of data transferred in at least two earlier transfers of data from the storage device to a requesting device capable of initiating a request for data from the storage device; logic to use at least one pattern to make a prediction of a quantity of data that will be required in a future transfer of data from the storage device to a requesting device that occurs after a future reoccurrence of a pattern of quantities of data transferred, and associating that prediction with that pattern; logic to use a prediction to read a quantity of data from the storage device in response to a request from a requesting device for an unspecified quantity of data from the storage device; and logic to selectively modify a prediction if the prediction proves to be inaccurate. 11. The apparatus of claim 10, wherein separate associations of patterns and predictions are maintained for each bus by which a requesting device receives the data requested.12. The apparatus of claim 10, wherein the bus interface is incorporated into a bus bridge device that provides an interface between the first bus and a second bus.13. The apparatus of claim 10, wherein association of patterns and predictions is stored in a storage device that is also used to temporarily store the data requested by the requesting device.14. A computer system, comprising:at least one CPU; at least one storage device; at least one bus supporting a type of transfer of data in which the quantity of data cannot be specified when a request for data is initiated; at least one requesting device coupled to the bus and capable of initiating a request for data from the storage device; and prediction logic to predict the quantity of data that will be required to satisfy a request for data based on patterns of quantities of data transferred from the storage device to one or more requesting devices in response to earlier requests for data from the storage device wherein a prediction is modified after two or more occasions in which the prediction proves to be inaccurate. 15. The computer system of claim 14, wherein the association of patterns and predictions is stored in a manner using patterns to select a prediction from among a plurality of predictions.16. The computer system of claim 14, wherein an indicator of past occasions in which the prediction proved to be inaccurate is associated with patterns and stored in a manner using patterns to select an indicator from among a plurality of indicators.17. The computer system of claim 14, wherein separate associations of patterns and predictions are maintained for each requesting device among a plurality of requesting devices.18. 
A computer system, comprising:at least one CPU; at least one storage device; at least one bus supporting a type of transfer of data in which the quantity of data cannot be specified when a request for data is initiated; at least one requesting device coupled to the bus and capable of initiating a request for data from the storage device; and prediction logic incorporated into a bus interface coupled to the at least one bus to predict a quantity of data that will be required to satisfy a request for data based on patterns of quantities of data transferred in at least two earlier transfers from the storage device to the at least one requesting device in response to earlier requests for data from the storage device, and to use a prediction to read a quantity of data from the storage device to the at least one requesting device. 19. The computer system of claim 18, wherein separate associations of patterns and predictions are maintained for each bus by which a requesting device receives the data requested.20. The computer system of claim 18, wherein the prediction logic is incorporated into a bus bridge device that provides an interface between the at least one bus and another bus.21. The computer system of claim 18, wherein the association of patterns and predictions is stored in a storage device that is also used to temporarily store the data requested by the requesting device. |
FIELD OF THE INVENTIONThe present invention is related to a method and apparatus for adaptively predicting quantities of data to be prefetched in response to bus master read requests.ART BACKGROUNDComputer systems commonly have one or more storage devices, such as random access memory, and busses that can used to transfer data from such storage device, to other devices within the computer system in a memory read transfer. These busses are often designed to support memory read transfers of small quantities of data, and these transfers are sufficient to support commonly occurring random reads from storage locations within a storage device. In the case of many busses, such memory read transfers begin with a distinct address phase in which the address of the storage location from which data is to be retrieved is transmitted to the storage device. This is then followed by a distinct data phase in which a single transfer of data takes place. If more data is to be retrieved, then additional memory read transfers must be performed, each having both address and data phases. However, such memory read transfers are inefficient for use in retrieving a larger quantity of data from a group of adjacent storage locations within a storage device.Various higher performance busses support a form of transfer commonly referred to as a "burst" transfer in which there is one address phase, followed by multiple data Phases. In this way, efficiency can be increased for memory transfers from a larger number of adjacent storage locations by transmitting only the address of the first storage location. The first data phase entails the transfer of data from the storage location within the storage device that was specified by the transmitted address, and the subsequent data phases entail the transfer of data from adjacent storage locations. This is commonly referred to as a "burst read transfer." In some higher performance busses, burst read transfers can be interrupted or suspended between the address phase and the first data phase, or between data phases, in order to allow the bus to be made available for other uses.A common drawback in the implementation of various busses supporting burst read transfers, however, is that the bus does not provide a way for a device requesting a burst read transfer, such as a bus master device, to specify the exact quantity of data desired. Some bus implementations allow a bus master device to begin a burst read transfer, and then simply continue to perform data phases in that burst read transfer until the bus master device has received the desired quantity of data. This means that the exact quantity of data to be transferred cannot be known until the transfer has been completed.The fact that the quantity of data to be transferred in a burst read transfer cannot be known until the transfer has ended impairs efforts to optimize the reading of data for such transfers from a storage device. This can be especially true in the case of burst read transfers from such storage devices as random access memory (RAM). Some forms of RAM impose considerable latencies on accesses to storage locations to read data, resulting in adverse effects on the performance of burst read transfers. 
Were it possible to know the exact quantity of data to be read for a burst transfer, it might be possible to use various well known techniques to counteract the effects of such latencies, and the performance of such burst transfers could be improved.SUMMARY OF THE INVENTIONAn apparatus and method for predicting quantities of data required by requesting devices capable of requesting unspecified quantities of data from storage devices, in which patterns of quantities required in past transfers are monitored. Predictions are made based on those patterns, associated with a pattern, used to make requests for data from storage devices, and selectively modified if they prove to be inaccurate.BRIEF DESCRIPTION OF THE DRAWINGSThe objects, features, and advantages of the present invention will be apparent to one skilled in the art in view of the following detailed description in which:FIG. 1 is a simplified block diagram of one embodiment of a bus interface.FIG. 2 is a simplified block diagram of one embodiment of a method for predicting quantities of data required in burst read transfers.FIG. 3 is a simplified block diagram of another embodiment of a method for predicting quantities of data required in burst read transfers.FIG. 4 is a simplified block diagram of a computer system.DETAILED DESCRIPTIONIn the following description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the present invention. In other instances, well known electrical structures and circuits are shown in block diagram form in order not to obscure the present invention unnecessarily.The example embodiments of the present invention are described in the context of a buts master device retrieving data in a burst read transfer from a random access memory (RAM) storage device. However, the present invention is applicable to a variety of types of data transfers between a variety of devices. The present invention could be used with cache line fill transfers, and situations in which there are types of transfers in which a pre-defined quantity of data is to be transferred, but uncertainty exists as to which of the possible pre-defined quantities is to be transferred. The present invention could be used with bus bridge devices taking the role of a bus master device, as hereinafter described. Furthermore, storage devices other than RAM may be used with the present invention.FIG. 1 depicts one embodiment of a bus interface. Bus interface 10 provides access to storage device 40 from bus 20, and includes buffer 12 and control logic 14. Coupled to bus 20 are bus master device 30, and optionally, one or more additional devices such as device 32. Bus master device 30 reads data from storage locations within storage device 40 via bus 20 and bus interface 10. Furthermore, bus interface 10 and bus 20 support burst read transfers, allowing bus master device 30 to read data stored in multiple adjacent storage locations within storage device 40 with a single transfer.Bus master device 30 begins a burst read transfer on bus 20 to read from adjacent storage locations within storage device 40 by signaling bus interface 10 with a request to perform a read operation and by transmitting the address of the first storage location from which data is to be read to bus interface 10. 
Control logic 14 responds with a signal for bus master device 30 to temporarily suspend the requested burst read transfer while data is read from storage device 40. While this request is suspended, bus 20 may be used to carry out other function,.The quantities of data actually transferred in the last few burst read transfers are retained by control logic 14 and used together as a pattern to select and make predictions of how much data will be transferred in a current burst read transfer. These predictions are based on the assumption that if a bus master device retrieved a given quantity of data after a given pattern of quantities retrieved in previous burst transfer, then a bus master device will likely retrieve the same given quantity, again, after the same given pattern has occurred, again. Different storage locations within buffer 12 correspond to different patterns of quantities of data for previous burst read transfers. When control logic 14 receives a request for a burst read transfer, control logic 14 uses the pattern of the quantities of data transferred in the last few transfers to select the storage location within buffer 12 that corresponds to that pattern to retrieve the prediction value to be used in the current burst read transfer. From the same storage location within buffer 12, control logic 14 also retrieves an accuracy value providing a measure of the accuracy of the prediction.Control logic 14 uses the retrieved prediction to read a quantity of data from storage device 40, starting at the storage location within storage device 40 that was specified in the address transmitted by bus master device 30. As the data is read from storage device 40, it is stored in another part of buffer 12. Control logic 14 then signals bus master device 30 to proceed with the requested burst read transfer, and the burst read transfer then takes place with buffer 12 supplying data across bus 20 to bus master device 30. As the burst read transfer takes place, control logic 14 monitors the quantity of data actually transferred to evaluate the prediction made for the current transfer to determine if the prediction should be modified. If a prediction proves inaccurate such that too little data is initially requested from storage device 40 to supply the quantity required by bus master device 30 for a current transfer, then control logic 14 will read additional data from storage device 40 until bus master device 30 has received the quantity desired. The current transfer may be suspended one or more times between data phases to accommodate latencies incurred in reading data from storage device 40, especially if additional data must be read. At the end of the current transfer, if the prediction should be modified, control logic 14 stores the modified prediction into the same storage location in buffer 12 from which the prediction value was read for the current transfer. Control logic 14 also stores a 2-bit value indicating a measure of accuracy of prediction.In one Embodiment, if the prediction used in the current transfer proved to be inaccurate, the prediction would be immediately modified and stored in buffer 12, as just described. In another embodiment, the 2-bit value specifying a measure of accuracy would also be used in making the prediction for the next burst read transfer following the same pattern. In this other embodiment, this accuracy value is used to insert a delay or "hysteresis" in changes made to the prediction value. 
This is intended to counteract the effects of infrequent or "one-time" occurrences of burst read transfers in which anomalous quantities of data are transferred. In this other embodiment, a prediction is modified only after it has proven to be inaccurate in at least two successive transfers. Table 1, below, illustrates one set of "rules" by which prediction values may be modified, and what kind of accuracy value would be stored with a prediction.<tb> <sep>TABLE 1<tb> <sep>current accuracy<sep>accurate of<sep>how the<sep>modified<tb> <sep>value retrieved<sep>prediction for<sep>prediction will<sep>accuracy value<tb> <sep>from storage<sep>current transfer<sep>be modified<sep>to be stored<tb> <sep>too low<sep>too low<sep>increased by 1<sep>just right<tb> <sep>too low<sep>just right<sep>unmodified<sep>too low<tb> <sep>just right<sep>too low<sep>unmodified<sep>too low<tb> <sep>just right<sep>just right<sep>unmodified<sep>just right<tb> <sep>too low<sep>too high<sep>unmodified<sep>just right<tb> <sep>too high<sep>too low<sep>unmodified<sep>just right<tb> <sep>just right<sep>too high<sep>unmodified<sep>too high<tb> <sep>too high<sep>just right<sep>unmodified<sep>too high<tb> <sep>too high<sep>too high<sep>decreased by 1<sep>just rightIt will be understood by those skilled in the art that the implementation of a delay could entail storing and using more than one accuracy value. It will also be understood that the delay could be such that more than 2 occurrences of an inaccuracy would be required before a prediction would be modified.Although, in this embodiment, the data read from storage device 40 is temporarily stored in buffer 12, along with prediction and accuracy values, it will be understood that the data read from storage device 40 for the current transfer could be stored in a separate buffer, not shown. Also, although in this embodiment, new predictions for a future burst read transfer after a given pattern are made at the end of a current burst read transfer, it will be understood that the making of future predictions can be delayed until the request for the future burst read transfer is actually received. In such an embodiment, the prediction values stored in buffer 12 would reflect the predictions made for current transfers, and would not be prediction values made for future transfers.In one embodiment, control logic 14 would store separate sets of prediction and accuracy values for each bus master device coupled to bus 20, and would make sure separate predictions for each bus master device coupled to bus 20. Thus, if device 32 were another bus master device, control logic 14 would treat bus master device 30 and device 32, separately. However, in another embodiment, control logic 14 would store one set of predictions and results, and would make predictions for all bus master devices coupled to bus 20. In one embodiment, control logic 14 would maintain and use separate pattern for each bus master device coupled to bus 20, though a pattern could be maintained for all bus master devices coupled to bus 20.FIG. 2 depicts an embodiment of a method of predicting quantities of data required from a storage device. In this embodiment, 2-bit binary values are used to specify quantities of data actually transferred or predicted to be transferred. As will understood by those skilled in the art, varying numbers of bits corresponding to any of a number of values or ranges of values may be used. 
In this embodiment, binary values 0 through 3 correspond to ranges of quantities of bytes depicted in Table 2, below.<tb> <sep> <sep>TABLE 2<tb> <sep> <sep>00<sep>128 bytes or less<tb> <sep> <sep>01<sep>129 to 256 bytes<tb> <sep> <sep>10<sep>257 to 512 bytes<tb> <sep> <sep>11<sep>513 bytes or greaterIt will be understood by those skilled in the art that the number of bits used to specify quantities of data, as well as the quantities or ranges of quantities that each combination of bits specifies, can be made programmable. In this embodiment, 2-bit values are also used to specify a measure of accuracy of a prediction. Again, it will be understood by those skilled in the art that varying numbers of bits corresponding to various accuracy values may be used. In this embodiment, binary values 0 through 2 correspond to various measures of accuracy, as depicted in Table 3, below.<tb> <sep> <sep>TABLE 3<tb> <sep> <sep>00<sep>prediction too low<tb> <sep> <sep>01<sep>within range<tb> <sep> <sep>10<sep>prediction too high<tb> <sep> <sep>11<sep>no meaning assignedIt will be understood that the number of bits used to provide a measure of accuracy, as well as the measures that each combination of bits specifies, can be made programmable.Queue 200 is divided into 5 storage locations, each of which is 2 bits in size. Position N stores 2 bits indicating the actual quantity of bytes of data required by a current burst read transfer when the transfer has been completed and the quantity is then known. Positions N-1 through N-4 each store 2 bits indicating the actual quantity of data required in the last 4 burst read transfers, with position N-1 being the most recent of those transfers, and position N-4 being the least recent. Taken together, positions N-1 through N-4 describe the pattern of quantities of data transferred over the last 4 burst read transfers. Queue 200 is used in a manner analogous to a FIFO (first-in-first-out) buffer, although the physical implementation of queue 200 could take many forms, as will be readily understood by those skilled in the art. As a current burst read transfer is completed and the quantity of data transferred is then known, the 2 bits representing the quantity of bytes transferred in the current transfer are placed in position N, and the data that was in position N is shifted to position N-1, and so or through the positions of queue 200, with the data that was in position N-4 being discarded. In this way, the quantity of data transferred in the current transfer becomes part of the pattern that will be used when a request is received for a future burst read transfer.Pattern buffer 220 is used to store predictions of quantities of data for a burst read transfer that occurs after each possible pattern, along with measures of accuracy of prediction. Pattern buffer 220 includes 1024 storage locations, each of which is 4 bits, or 1 nibble in size. Storage location 230 is representative of these storage locations, with 2-bit prediction value and a 2-bit accuracy value.Address 210 is used in selecting storage locations within pattern buffer 220 to access predictions and accuracy values. As shown, the 2-bit values indicating the quantities of data transferred in the last 4 transfers are used to form part of address 210. In the depicted embodiment, 2 additional bits that identify 1 of up to 4 bus master devices also form part of address 210. 
The use of quantities of data transferred in creating addresses for selecting storage locations within pattern buffer 220 is a way of matching the pattern of quantities transferred in the last 4 burst read transfers to a prediction and an accuracy value. In this embodiment, a separate queue 200 is maintained for each bus master device. However, in another embodiment, a single set of data for predictions and accuracy values are maintained for all bus master devices on a given bus. In such an embodiment, pattern buffer 220 would also be smaller, requiring 256 storage locations to store data representing predictions and accuracy values, instead of 1024 storage locations. Furthermore, in still other embodiments, there may be only one queue 200 used to maintain a pattern for all bus master devices on a given bus, regardless of whether or not separate sets of prediction and accuracy values are maintained. It will also be understood that the number of positions in queue 200 may be implemented so as to be programmable, allowing larger or smaller patterns to be maintained.When a request is received from a bus master device to read data in a burst read transfer from a storage device, the 2-bit values in positions N-1 through N-4 are used, along with the 2-bit value identifying the bus master device, to create address 210. Address 210 is used to select the storage location within pattern buffer 220 that corresponds to the pattern of quantities transferred in the last 4 burst read transfers for a specific bus master device. As previously discussed, the prediction value from the selected storage location within buffer 220 is used in requesting data from a storage device. If, at the end of the current transfer, either the prediction or accuracy value is to be modified, then the modified values are written to the same storage location within pattern buffer 220 selected by address 210. After the prediction and/or accuracy values have been modified, if necessary, the bits in the positions of queue 200 are shifted such that the bits in position N are moved to position N-1, the bits in position N-1 are moved to position N-2, and so on throughout queue 200, with the bits in position N-4 being discarded. In this way, the pattern of quantities transferred now includes a quantity value at N-1 for the burst read transfer just completed.FIG. 3 depicts another embodiment of a method of predicting quantities of data required from a storage device. The numbered objects ofFIG. 3 correspond, generally, to the components of FIG. 2, with numbered objects sharing the last two digits of their numbers performing similar functions. Queue 300 is similarly divided into a set of five 2-bit positions storing a pattern of quantities transferred in the last 4 burst read transfers. The storage locations, such as storage location 330a, within buffer 320 are 64 bits in width. Each storage location holds 16 nibbles, such as nibble 330b, which are used to store a 2-bit prediction value and a 2-bit accuracy value.Buffer 320 is used to store more than the prediction and accuracy values used to practice the present invention. Other storage locations within buffer 320 are used to temporarily store data in the process of being transferred between devices. Buffer 320 may also be used to hold information needed to configure components of a computer system. 
The Address 310 is made up of the bits of queue 300 specifying the quantities transferred in the last 4 burst read transfers, 2 bits identifying the bus master device making the request to carry out a current burst read transfer, 1 bit identifying which 1 of 2 busses the requesting bus master device is connected to, and 1 or more bits forming a base address that specifies where within buffer 320 the prediction and accuracy values are located. Bits a through g specify the storage location to be accessed within buffer 320, while bits h through k specify the nibble to be accessed within that storage location.FIG. 4 depicts an embodiment of a computer system. The depicted computer system includes CPU 400, coupled to support logic 410, which is also coupled to storage device 420 and bus 430. Bus 430 further couples support logic 410 with bus bridge 440. Bus bridge 440 includes control logic 442, buffer 444 and control logic 446. Coupled to bus bridge 440, and associated with control logic 442, is bus 450 which is further coupled bus master device 460 and device 462. Coupled to bus bridge 440, and associated with control logic 446, is bus 470 which is further coupled bus master device 480 and device 482. Storage device 420 is typically, but not necessarily, a random access memory or RAM used by the computer system for the storage of programs and/or data. Busses 450 and 470 are peer busses, emanating from bus bridge 440.The manner in which prediction and accuracy values are stored in buffer 444 is as was depicted in buffer 320 of FIG. 3. The storage locations within buffer 444 are also 64 bits in width, and the storage locations within buffer 444 that are used to store prediction and accuracy values hold 16 nibbles, each of which carry a 2-bit prediction value and a 2-bit accuracy value. The manner in which these storage locations are selected employs the same addressing as was exemplified by address 310. A base address is used to differentiate the storage locations within buffer 444 that hold prediction and accuracy values from storage locations within buffer 444 that perform other functions. As in the case of address 310, the address includes of 8 bits made up of the pattern of quantities transferred in previous burst read accesses, 2 bits identifying the bus master device making the current request for a burst read transfer, and 1 bit differentiating between requests for burst transfers for bus master devices on bus 450 and from bus master devices on bus 470.Control logic 442 receives a request from bus master device 460 to perform a burst read transfer of data from storage locations within storage device 420, starting at an address supplied by bus master device 460. Control logic 442 responds by signaling bus master device 460 to temporarily suspend the request. Control logic 442, accesses buffer 444 to obtain a prediction, and then requests the predicted quantity of data from storage device 420, starting at the address supplied by bus master device 460. The data received from storage device 420, in response to the request made by control logic 442, is stored in buffer 444 at storage locations different from those used to store prediction and accuracy values.Control logic 442 signals bus master device 460 to proceed with the burst read transfer. The data collected from storage device 420 in buffer 444 is transmitted across bus 450 to bus master device 460. 
If the predicted quantity was either accurate or too large, then bus master 460 terminates the burst read transfer when the desired quantity of data has been received. If the predicted quantity was too small, then control logic 442 makes further requests for data from storage device 420, as needed, until bus master device 460 terminates the burst read transfer, thereby signaling that it has received all data needed in the current burst read transfer.Control logic 442 is configured to implement a delay in refining predictions to counteract the effects of occasional anomalous quantities of data being required by a given bus master device. If the prediction was found to be inaccurate for the current burst read transfer, but was marked by the accuracy value such that this same prediction was not previously inaccurate, then control logic 442 will modify the accuracy value and store the modified accuracy value in buffer 444. However, if the prediction was found to be inaccurate for the current burst read transfer, and was marked by the accuracy value to indicate that it was also previously inaccurate in the same way (i.e., it was too small or too large on both occasions), then control logic 442 will modify the prediction to refine it and modify the accuracy value to mark the new prediction as accurate, and store both in buffer 444.It will be understood by those skilled in the art that the accuracy and prediction values could be initialized to values chosen to be optimal for specific bus master devices, or to other values. Such initialization could be carried out by software such as device drivers. In one embodiment, device drivers specific to each bus master device installed in a computer system would initialize accuracy or initialization values, as desired, as each device driver is initialized. Furthermore, it will be understood that the number of quantities used to make up a pattern may also be made programmable, and may be similarly initialized by software.It is evident that numerous alternatives, modifications, variations and uses will be apparent to those skilled in the art in light of the foregoing description.It will be understood by those skilled in the art, that the present invention may be practiced in support of other combinations of functions in a display system in addition to or in lieu of texture mapping and/or motion compensation where the pre-fetching of pixel data is effective to differing degrees. |
Predicting literal load values using a literal load prediction table, and related circuits, methods, and computer-readable media are disclosed. In one aspect, an instruction processing circuit provides a literal load prediction table containing one or more entries, each comprising an address and a literal load value. Upon detecting a literal load instruction in an instruction stream, the instruction processing circuit determines whether the literal load prediction table contains an entry having an address of the literal load instruction. If so, the instruction processing circuit provides the predicted literal load value stored in the entry to at least one dependent instruction. The instruction processing circuit subsequently determines whether the predicted literal load value matches the actual literal load value loaded by the literal load instruction. If a mismatch exists, the instruction processing circuit initiates a misprediction recovery. The at least one dependent instruction is re-executed using the actual literal load value. |
1.An instruction processing circuit configured to:Detecting a first occurrence of a word load instruction in the instruction stream;Determine whether an address of the text load instruction exists in an entry of the text load prediction table; andIn response to determining that the address of the word load instruction exists in the entry:Provide a predictive text load value stored in the entry for execution of at least one dependent instruction related to the literal load instruction;Determine whether the predicted text load value matches the actual text load value loaded by the text load instruction immediately after execution of the text load instruction; andIn response to determining that the predicted text load value does not match the actual text load value:Start misprediction recovery; andRe-execute the at least one dependent instruction using the actual text load value.2.The instruction processing circuit of claim 1, further configured to:In response to determining that the address of the text load instruction does not exist in the entry of the literal load prediction table, generating the table in the literal load prediction table immediately after executing the word load instruction The entry including the address of the word load instruction and the actual word load value stored as the predicted word load value.3.The instruction processing circuit of claim 1 configured to initiate the false prediction recovery by updating the entry with the actual text load value stored as the predicted word load value.4.The instruction processing circuit of claim 1 configured to initiate the false prediction recovery by clearing the entry from the text load prediction table.5.The instruction processing circuit of claim 1, configured to initiate the false prediction recovery by setting an unpredicted indicator in the entry.6.The instruction processing circuit of claim 5, further configured to:Detect a second occurrence of the word load instruction in the instruction stream;Determine whether the address of the text load instruction exists in the entry of the text load prediction table; andIn response to determining that the address of the word load instruction exists in the entry:Determine whether the unpredicted indicator in the entry is set; andIn response to determining that the unpredicted indicator in the entry has been set, executing the word load instruction without providing the predicted word load value stored in the entry for the at least one dependent carried out.7.The instruction processing circuit of claim 1, integrated into the integrated circuit IC.8.The instruction processing circuit according to claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communication device; a fixed location data unit; a mobile location data unit; a mobile phone ; Cellular telephone; computer; portable computer; desktop computer; personal digital assistant PDA; monitor; computer monitor; television; tuner; radio; satellite radio; music player; digital music player; portable music player Digital video player, video player, digital video disc DVD player, and portable digital video player.9.An instruction processing circuit, comprising:Means for detecting a first occurrence of a word load instruction in an instruction stream;Means for determining whether an address of the text load instruction exists in an entry of the text load prediction table;In response to determining that the address of the word load instruction exists in 
the entry, providing a predictive text load value stored in the entry for execution of at least one dependent instruction related to the word load instruction s installation;For further responsive to determining that the address of the word load instruction exists in the entry, determining whether the predicted word load value matches the actual value loaded by the word load instruction immediately after execution of the word load instruction Text loading device;Means for initiating a false prediction recovery in response to determining that the predicted word load value does not match the actual word load value; andMeans for re-executing the at least one dependent instruction using the actual text load value in response to further determining that the predicted text load value does not match the actual text load value.10.A method for predicting text load values, comprising:Detecting a first occurrence of a word load instruction in the instruction stream;Determine whether an address of the text load instruction exists in an entry of the text load prediction table; andIn response to determining that the address of the word load instruction exists in the entry:Provide a predictive text load value stored in the entry for execution of at least one dependent instruction related to the literal load instruction;Determine whether the predicted text load value matches the actual text load value loaded by the text load instruction immediately after execution of the text load instruction; andIn response to determining that the predicted text load value does not match the actual text load value:Start misprediction recovery; andRe-execute the at least one dependent instruction using the actual text load value.11.The method of claim 10, further comprising:In response to determining that the address of the text load instruction does not exist in the entry of the literal load prediction table, generating the table in the literal load prediction table immediately after executing the word load instruction The entry including the address of the word load instruction and the actual word load value stored as the predicted word load value.12.The method of claim 10, wherein initiating the false prediction recovery comprises updating the entry with the actual literal load value stored as the predicted literal load value.13.The method of claim 10, wherein initiating the false prediction recovery comprises clearing the entry from the text load prediction table.14.The method of claim 10, wherein initiating the false prediction recovery comprises setting an unpredicted indicator in the entry.15.The method of claim 14, further comprising:Detect a second occurrence of the word load instruction in the instruction stream;Determine whether the address of the text load instruction exists in the entry of the text load prediction table; andIn response to determining that the address of the word load instruction exists in the entry:Determine whether the unpredicted indicator in the entry is set; andIn response to determining that the unpredicted indicator in the entry has been set, executing the word load instruction without providing the predicted word load value stored in the entry for the at least one dependent carried out.16.A non-transitory computer-readable storage medium having stored thereon computer-executable instructions that cause a processor to:Detecting a first occurrence of a word load instruction in the instruction stream;Determine whether an address of the text load instruction exists in an entry of the text 
load prediction table; andIn response to determining that the address of the word load instruction exists in the entry:Provide a predictive text load value stored in the entry for execution of at least one dependent instruction related to the literal load instruction;Determine whether the predicted text load value matches the actual text load value loaded by the text load instruction immediately after execution of the text load instruction; andIn response to determining that the predicted text load value does not match the actual text load value:Start misprediction recovery; andRe-execute the at least one dependent instruction using the actual text load value.17.The non-transitory computer-readable medium of claim 16 having computer-executable instructions stored thereon that further cause the processor to:In response to determining that the address of the text load instruction does not exist in the entry of the literal load prediction table, generating the table in the literal load prediction table immediately after executing the word load instruction The entry including the address of the word load instruction and the actual word load value stored as the predicted word load value.18.The non-transitory computer-readable medium of claim 16 having computer-executable instructions stored thereon that cause the processor to determine, by using the information stored as the predicted literal load value, The actual word load value is updated to update the entry to initiate the misprediction recovery.19.The non-transitory computer-readable medium of claim 16 having computer-executable instructions stored thereon that cause the processor to clear the table by loading the predictive table from the text And initiate the misprediction recovery.20.The non-transitory computer-readable medium of claim 16 having computer-executable instructions stored thereon that cause the processor to determine a non-predictive indicator And start the misprediction recovery.21.The non-transitory computer-readable medium of claim 20 having computer-executable instructions stored thereon that further cause the processor to:Detect a second occurrence of the word load instruction in the instruction stream;Determine whether the address of the text load instruction exists in the entry of the text load prediction table; andIn response to determining that the address of the word load instruction exists in the entry:Determine whether the unpredicted indicator in the entry is set; andIn response to determining that the unpredicted indicator in the entry has been set, executing the word load instruction without providing the predicted word load value stored in the entry for the at least one dependent carried out. |
Predict text load values using text load prediction tables, as well as related circuits, methods, and computer-readable mediaPriority applicationThis application claims the benefit of U.S. Provisional Patent Application No. HEADING PREDICTION TABLE, filed on September 12, 2014, entitled "Predicting Text Loads Using Text Load Prediction Tables, and Related Circuits, Methods, and Computer- CIRCUITS, METHODS, AND COMPUTER-READABLE MEDIA, "the contents of which are incorporated herein by reference in their entirety.Technical fieldThe techniques of this disclosure generally relate to word load instructions provided by a computer processor.Background techniqueComputer programs executed by modern computer processors can frequently use literal values. As used herein, "literal value" is a value that is expressed as self in the source code of a computer program (eg, a number 25 or the string "Hello World"). A literal value may provide a convenient way for a computer program to represent and utilize a value that does not change or only rarely changes during the execution of the computer program. A plurality of literal values to be accessed during execution of the computer program may be stored together in memory as data blocks called "constant repositories."The load instruction may be used by a computer program to access a literal value (ie, a "literal load value") at a specified address and place the literal load value in a register for processing in the processing pipeline following the load instruction One or more subsequent instructions are used. Such load instructions are referred to herein as "text load instructions," and subsequent instructions that use the literal load values as inputs are referred to as "dependent instructions." In some computer architectures, the word load instruction may specify the location of the literal load value in the constant store as an address related to the address of the literal load instruction itself. For example, the following instructions illustrate the text-loading instructions and subsequent dependencies that can be used by the ARM architecture:LDR R 0, [PC, # 0x40]; Retrieve the text load value stored at program counter (PC) + 0x40 + 8 into register R 0ADD R 1, R 0, R 0; The literal load value is used by adding the value in register R 0 to itself and storing the result in register R 1.However, due to the data cache latency inherent in many conventional processors, load instructions may throw a "load: use penalty" when loading a literal load value into a register. "Load: Usage Loss" refers to the minimum number of processor cycles that can occur due to the data cache latency between the scheduled load instruction and the scheduled dependent instruction. For example, in the example code above, the ADD instruction can not be scheduled until "Load: Loss of Use" caused by the LDR instruction occurs. Because dependent instructions do not schedule until the load instruction returns data, Load: Loss of Use may result in an underutilized processor cycle "bubble" that occurs within the processing pipeline.Content of the inventionThe aspects disclosed in the detailed description include using a text load prediction table to predict text load values. Also disclosed are related circuits, methods, and computer-readable media. In this regard, in one aspect, an instruction processing circuit provides a text load prediction table for generating a prediction of a text load value and a false prediction for detecting a text load value. 
The text load prediction table includes one or more table entries, and each table entry includes an address and a predicted text load value. After detecting a text load instruction in the instruction stream, the instruction processing circuit immediately determines whether the text load prediction table contains an entry with an address corresponding to the text load instruction. If included, the instruction processing circuitry provides the predicted word load value stored in the entry to at least one dependent instruction. Wherein the instruction processing circuit determines whether the predicted word load value previously provided to the at least one dependent instruction matches the actual word load value loaded by the word load instruction when the word load instruction is actually executed. If the predicted word load value and the actual word load value do not match, the instruction processing circuit initiates a false prediction recovery. In some aspects, the false prediction recovery may include updating the entry with an actual literal load value, emptying the entry from the literal load prediction table, and / or setting an unpredicted indicator in the entry. The at least one dependent instruction may then be re-executed using the actual text load value. In this manner, the instruction processing circuitry can enable dependent instructions to access literal load values without inducing "load: loss of use," thus providing improved processor utilization.In another aspect, an instruction processing circuit is provided. The instruction processing circuitry is configured to detect a first occurrence of a word load instruction in the instruction stream. The instruction processing circuitry is further configured to determine whether the address of the text load instruction is present in the entry of the text load prediction table. The instruction processing circuit is also configured to provide a predictive text load value stored in the table entry for at least the instruction associated with the text load instruction in response to determining that an address of the text load instruction exists in the table entry Execution of a dependent instruction. The instruction processing circuitry is further configured to determine, immediately after executing the word load instruction, in response to determining that an address of the word load instruction exists in the entry, whether the predicted word load value matches the word loaded by the word The actual text load value loaded by the load instruction. The instruction processing circuitry is further configured to initiate a false prediction recovery in response to determining that the predicted word load value does not match the actual word load value and to re-execute the at least one dependent instruction using the actual word load value.In another aspect, an instruction processing circuit is provided. The instruction processing circuitry includes means for detecting a first occurrence of a word load instruction in the instruction stream. The instruction processing circuit further includes means for determining whether an address of the text load instruction exists in an entry of the text load prediction table. The instruction processing circuitry further comprises means for providing a predictive text load value stored in the entry for at least the instruction associated with the text load instruction in response to determining that an address of the text load instruction exists in the entry A dependent instruction execution device. 
The instruction processing circuit further comprises means for determining, in response to determining that the address of the word load instruction is further present in the entry, after the execution of the word load instruction, determining whether the predicted word load value matches the word represented by the word Device to load the actual text load value loaded by the instruction. The instruction processing circuitry further includes means for initiating a false prediction recovery in response to determining that the predicted word load value does not match the actual word load value. The instruction processing circuitry further comprises means for re-executing the at least one dependent instruction using the actual text load value in response to determining that the predicted text load value does not match the actual text load value.In another aspect, a method for predicting text load values is provided. The method includes detecting a first occurrence of a text load instruction in an instruction stream. The method further includes determining whether the address of the text load instruction is present in the entry of the text load prediction table. The method further comprises providing a predictive text load value stored in the entry for at least one dependent instruction associated with the text load instruction in response to determining that an address of the text load instruction exists in the entry carried out. The method further comprises determining whether the predicted word load value matches the word loaded by the word load instruction immediately after executing the word load instruction in response to determining that the address of the word load instruction exists in the entry The actual text load value. The method further includes initiating a false prediction recovery in response to determining that the predicted text load value does not match the actual text load value and re-executing the at least one dependent instruction using the actual text load value.In another aspect, a non-transitory computer-readable medium is provided having computer-executable instructions stored thereon for causing a processor to detect a first occurrence of a text load instruction in an instruction stream. The computer-executable instructions stored thereon further cause the processor to determine whether an address of the text-loading instruction exists in an entry of the text-on-load prediction table. The computer-executable instructions stored thereon further cause the processor to provide a predictive text load value stored in the table entry for interaction with the text in response to determining that an address of the text load instruction exists in the table entry Execution of at least one dependent instruction related to loading an instruction. The computer executable instructions stored thereon additionally cause the processor to further determine whether the predicted text load value is responsive to determining that an address of the text load instruction exists in the list item immediately after executing the word load instruction Matches the actual text load value loaded by the text load instruction. 
The computer executable instructions stored thereon further cause a processor to initiate a false prediction recovery in response to determining that the predicted text load value does not match the actual text load value and reperforming the at least the actual text load value A dependent instruction.BRIEF DESCRIPTION OF THE DRAWINGS FIGFigure 1 is a block diagram of an exemplary computer processor including an instruction processing circuit for predicting text load values using text-loading prediction tables and detecting false prediction of text-loading values;2A through 2C illustrate an exemplary communication flow for establishing the entry of the literal load prediction table of FIG. 1, providing predicted word load values of the entry to dependent instructions, and handling false prediction of the literal load of the instruction processing circuit of FIG. 1;Figure 3 is a flow chart illustrating an exemplary operation for predicting word load values and detecting mispredictions using a literal load prediction table of the instruction processing circuit of Figure 1;Figure 4 illustrates a diagram of an exemplary operation for initiating a false prediction recovery in some aspects of the instruction processing circuitry of Figure 1;Figure 5 is a flow chart illustrating an operation for using a non-predictive indicator of a text load prediction table in some aspects of the instruction processing circuitry of Figure 1; and6 is a block diagram of an example processor-based system that can include the instruction processing circuitry of FIG. 1.detailed descriptionReferring now to the drawings, several exemplary aspects of the invention are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.The aspects disclosed in the detailed description include using a text load prediction table to predict text load values. Also disclosed are related circuits, methods, and computer-readable media. In this regard, in one aspect, an instruction processing circuit provides a text load prediction table for generating a prediction of a text load value and a false prediction for detecting a text load value. The text load prediction table includes one or more table entries, and each table entry includes an address and a predicted text load value. After detecting a text load instruction in the instruction stream, the instruction processing circuit immediately determines whether the text load prediction table contains an entry with an address corresponding to the text load instruction. If included, the instruction processing circuitry provides the predicted word load value stored in the entry to at least one dependent instruction. Wherein the instruction processing circuit determines whether the predicted word load value previously provided to the at least one dependent instruction matches the actual word load value loaded by the word load instruction when the word load instruction is actually executed. If the predicted word load value and the actual word load value do not match, the instruction processing circuit initiates a false prediction recovery. In some aspects, the false prediction recovery may include updating the entry with an actual literal load value, emptying the entry from the literal load prediction table, and / or setting an unpredicted indicator in the entry. 
The at least one dependent instruction may then be re-executed using the actual text load value. In this manner, the instruction processing circuitry can enable dependent instructions to access literal load values without inducing "load: loss of use," thus providing improved processor utilization.In this regard, FIG. 1 is a block diagram of an exemplary computer processor 100. Computer processor 100 includes an instruction processing circuit 102 that provides a word load prediction table 104 for predicting word load values and detecting erroneous word load values as disclosed herein. Computer processor 100 may encompass any of the known digital logic elements, semiconductor circuits, processing cores and / or memory structures, and other elements, or combinations thereof. Aspects described herein are not limited to any particular arrangement of elements, and the disclosed techniques can be readily extended to various configurations and layouts on semiconductor die or package.The computer processor 100 includes an input / output circuit 106, an instruction cache 108, and a data cache 110. The computer processor 100 further includes an execution pipeline 112 including a front-end circuit 114, an execution unit 116, and a completion unit 118. The computer processor 100 additionally includes a register 120 that includes one or more general purpose registers (GPRs) 122, a program counter 124, and a link register 126. In some aspects, for example, using the _ENTO_ARM7 (TM) architecture, the link register 126 is one of the GPRs 122, as shown in FIG. Alternatively, for example, some aspects of thearchitecture may be used to provide the following scenario: The link register 126 is separate from the GPR 122 (not shown).In an exemplary operation, the front-end circuitry 114 of the execution pipeline 112 fetches instructions (not shown) from the instruction cache 108; in some aspects, the instruction cache may be an on-chip level 1 (L1) cache Non-limiting examples). The fetched instruction is decoded by the front end circuit 114 and issued to the execution unit 116. Execution unit 116 executes the issued instruction, and completion unit 118 reclaims the executed instruction. In some aspects, the completion unit 118 may include a writeback mechanism (not shown) that stores the execution result in one or more of the registers 120. It should be understood that execution unit 116 and / or completion unit 118 may each include one or more sequential pipeline stages. In the example of FIG. 1, the front end circuitry 114 includes one or more fetch / decode pipeline stages 128 that enable multiple instructions to be fetched and decoded simultaneously. An instruction queue 130 for holding the fetched instructions waiting to be dispatched to execution unit 116 is communicatively coupled to one or more of fetch / decode pipeline stages 128.The computer processor 100 of FIG. 1 further provides a constant cache 132 communicatively coupled to one or more elements of the execution pipeline 112. The constant cache 132 provides a fast access mechanism by which the value previously stored in one of the registers 120 may be provided to an instruction that uses the value as an input operand. 
The constant cache 132 may therefore improve the performance of the computer processor 100 by providing access to stored values faster than the register 120.Although instructions are processed in execution pipeline 112, instruction processing circuitry 102 may fetch and execute word load instructions (not shown) for loading word load values into one of registers 120. Processing the literal load instruction may thus include retrieving the literal load value from the data cache 110. However, in doing so, the word load instruction may cause "load: loss of use" caused by the inherent latency of accessing the data cache 110. For example, in some computer architectures, accessing data cache 110 may require two to three processor cycles to complete. Thus, the instruction processing circuit 102 may not be able to dispatch subsequent dependent instructions (not shown) until "Load: Loss of Use" caused by the word load instruction occurs. This may result in underutilized computer processor 100 within execution pipeline 112.At this point, the instruction processing circuit 102 of FIG. 1 provides the word load prediction table 104 for minimizing "load: loss of use" by predicting the word load value of the word load instruction, providing the predicted word load value to the dependent instruction And detect the text load value mispredict. The instruction processing circuit 102 is configured to detect a word load instruction (not shown) in the instruction stream (not shown) processed within the execution pipeline 112. In some aspects, the instruction processing circuitry 102 may be configured to detect the word load instruction based on the usual form of load instructions used by the computer processor 100. As a non-limiting example, in a computer processor utilizing an ARM architecture, a word load instruction may be detected by determining that the word load instruction uses a program counter related addressing mode in which a program counter offset is specified by a constant.As the text loading instruction is extracted by the front-end circuit 114 of the instruction processing circuit 102, the instruction processing circuit 102 checks the text-loading prediction table 104. Text load prediction table 104 contains one or more entry (not shown). Each entry may include an address of a previously detected text load instruction and a predicted text load value corresponding to the address that was previously loaded by the text load instruction.The instruction processing circuit 102 determines whether the extracted address of the word load instruction exists in the entry of the word load prediction table 104. If an address (ie, "hit") of the word load instruction is found, the instruction processing circuit 102 provides the word load value from the entry to at least one dependent instruction as a predicted literal load value. In some aspects, the predictive word load value may be provided to the at least one dependent instruction via the constant cache 132. In this manner, the at least one dependent instruction may obtain the predictive word load value for the text load instruction without causing a corresponding "Load: Loss of Use."After "hit", the word load instruction may be ultimately executed by the execution unit 116 of the instruction processing circuit 102. 
When the text load instruction is executed, the instruction processing circuit 102 compares the predicted text load value provided to the at least one dependent instruction with the actual text load value loaded when the text load instruction is executed. If the predicted text load value does not match the actual text load value, then a word load value misprediction has occurred. In response, the instruction processing circuit 102 initiates a false prediction recovery. Some aspects may provide that the operation of the misprediction recovery includes updating the entry in the word load prediction table 104, clearing the entry from the word load prediction table 104, and / or in the entry of the word load prediction table 104 Set no prediction flag (not shown). The at least one dependent instruction may then be re-executed using the actual text load value.According to some aspects disclosed herein, a "miss" occurs if the instruction processing circuit 102 detects a word load instruction but does not find the address of the word load instruction in the entry of the word load prediction table 104. In this case, the instruction processing circuit 102 may generate an entry corresponding to the text load instruction in the text load prediction table 104 immediately after executing the text load instruction. The generated entry includes an address of the text load instruction and stores the actual text load value loaded by the text load instruction as a predicted text load value of the entry. Therefore, a "hit" in the word load prediction table 104 may occur and if the word load instruction is detected again by the instruction processing circuit 102, and the predicted word load value may be provided to the dependent instruction.As noted above, in some aspects, the instruction processing circuitry 102 may set the unpredicted indicator (not shown) as part of a false prediction recovery in the entry of the text load prediction table 104. The unpredicted indicator may be used by the instruction processing circuitry 102 to identify a load instruction appearing as a literal load instruction, but known to or determining to load different values at different points during the execution of the computer program. Therefore, upon detecting a significant word load instruction and determining that the address of the word load instruction exists in the entry of the word load prediction table 104, the instruction processing circuit 102 may check the unpredicted indicator of the entry. If the non-predictive indicator has been set, the instruction processing circuit 102 may continue executing the word load instruction without providing the dependent literal load value to the dependent instruction. This ensures that the dependent instruction always receives the actual literal load value loaded by the word load instruction and may avoid the possibility of repeated false prediction and associated computer processor 100 degradation.To better illustrate an exemplary communication flow between instruction processing circuit 102, data cache 110, and constant cache 132 of FIG. 1, FIGS. 2A-2C are provided. 2A illustrates an example communication flow for establishing an entry in a word load prediction table 104 and FIG. 2B shows an example communication flow for providing a predictive word load value for the entry to dependent instructions. Figure 2C illustrates an example communication flow for handling misprediction of text load values.In FIGS. 
2A-2C, the instruction processing circuit 102 processes an instruction stream 200 that includes two instructions: a word load instruction 202 and a dependency instruction 204. The word load instruction 202 is associated with an address 206, which in this example is a hexadecimal value of 0x400. It will be appreciated that in some aspects, address 206 may be retrieved from, for example, program counter 124 of FIG. 1. It will be further understood that while the instruction stream 200 of FIGS. 2A-2C includes only one dependent instruction 204, in some aspects, the dependent instruction 204 may include a plurality of dependent instructions.The word load instruction 202 in this example is an LDR instruction that directs the computer processor 100 to load a literal load value from the address specified by the program counter 124 (PC) plus the hexadecimal value 0x40. The literal load value is then stored in a register R 0, which may be one of the registers 120 of FIG. 1 (as a non-limiting example). The dependent instruction 204 follows the word load instruction 202 in the instruction stream 200, which in this example is an ADD instruction. Dependent instruction 204 receives the literal load value stored in register R 0 as input and sums it with the value of register R 1 (eg, the other of registers 120 of FIG. 1). The result is then stored in register R 1.The word load prediction table 104 illustrated in FIGS. 2A to 2C includes a plurality of entry 208 (0) to 208 (X). To facilitate the prediction of the word load value, each entry 208 (0) through 208 (X) in the word load prediction table 104 contains a program counter (PC) field 210, a value field 212 and an optional unpredictable field 214. The program counter field 210 of each entry 208 (0) through 208 (X) may be used to store the address 206 of the word load instruction 202 detected by the instruction processing circuit 102. The value field 212 may store a predictive word load value based on a literal load value loaded by the word load instruction 202 associated with the address 206 in the program counter field 210. In some aspects, each of the entries 208 (0) through 208 (X) may also include an unpredicted field 214.As seen in FIGS. 2A-2C, data cache 110 consists of entries 216 (0) through 216 (Z), each entry including an address field 218 and a value field 220. Each of the entries 216 (0) through 216 (Z) corresponds to the value retrieved during the previous execution of the load instruction. At this point, the address field 218 stores the address of the previously retrieved value and the value field 220 stores a copy of the value.Constant caches 132 shown in FIGS. 2A-2C include entries 222 (0) through 222 (Y). Each of entries 222 (0) through 222 (Y) contains a register field 224 and a value field 226. The register field 224 of each entry 222 (0) through 222 (Y) indicates one of the registers 120 in FIG. 1 associated with entries 222 (0) through 222 (Y), while the value field 226 indicates the most recently stored The value in the corresponding register 120. As discussed above, constant cache 132 may provide a fast access mechanism that provides cache value access faster than loading values directly from registers 120.Referring now to FIG. 2A, a communication flow for establishing entry 208 (X) in a word load prediction table 104 in some aspects is illustrated. As the instruction processing circuit 102 first processes the instruction stream 200, the first example of the word load instruction 202 is detected. 
As indicated by the arrow 228, the instruction processing circuit 102 checks the literal load prediction table 104 to determine whether the address 206 (ie, the hexadecimal value 0x400) of the literal load instruction 202 is available in the entries 208 (0) through 208 (X) Found in any one. The instruction processing circuit 102 does not find the address 206 in the entries 208 (0) through 208 (X), and thus continues the normal processing of the text load instruction 202 in response to the "miss."Upon execution of the word load instruction 202, the entry 216 (0) of the data cache 110 fills in the actual literal load value 230 (here, hexadecimal value 0x1234) loaded by the word load instruction 202. As indicated by arrow 232, the instruction processing circuit 102 accesses the entry 216 (0) of the data cache 110 and obtains the actual text load value 230. The instruction processing circuit 102 next generates an entry 208 (X) in the word load prediction table 104 based on the actual word load value 230, as indicated by the arrow 234. The address 206 of the word load instruction 202 will be stored in the program counter field 210 of the entry 208 (X) and the actual literal load value 230 will be stored as the predicted literal load value in the value field 212 of the entry 208 (X). The actual word load value 230 in the load register R 0 by the word load instruction 202 is then forwarded to the dependent instruction 204 by using the conventional mechanism as indicated by the arrow 236.2B illustrates the entry 208 (X) of using the word load prediction table 104 for providing the predicted word load value 238 to the dependent instruction 204. As seen in FIG. 2B, the address 206 of the word load instruction 202 has been stored in the program counter field 210 of the entry 208 (X), whereas the actual literal load value 230 in FIG. 2A has been stored as the predicted word load value 238 in the table In the value field 212 of item 208 (X). In the example of FIG. 2B, the unpredicted indicator 239 is also stored in the entry 208 (X) and the unpredictable indicator 239 is not set (thus indicating that the entry 208 (X) is available to predict the literal load value). The instruction processing circuit 102 now processes the instruction stream 200 again and detects the second example of the word load instruction 202. As indicated by the arrow 240, the instruction processing circuit 102 checks the word load prediction table 104 to determine whether the address 206 is found in any of the entries 208 (0) through 208 (X), and this time the entry 208 (X ) To locate.In response, the instruction processing circuitry 102 assigns the predictive text load value 238 provided by the entry 208 (X) to the entry 222 (0) in the constant cache 132 that corresponds to the register R 0, as indicated by the arrow 242. Predictive text load value 238 is then provided to dependent instruction 204 via constant cache 132, as indicated by arrow 244. In this manner, dependent instructions 204 can receive predicted word load values 238 without inducing "load: loss of use."To verify that no false predictions have occurred, the instruction processing circuit 102 accesses the entry 216 (0) of the data cache 110 immediately after executing the word load instruction 202 and obtains the actual text load value 230 as indicated by arrow 246. 
The instruction processing circuit 102 may then determine whether the predicted word load value 238 provided by the word load prediction table 104 matches the actual word load value 230 loaded by the word load instruction 202. In the example of FIG. 2B, the actual word load value 230 and the predicted word load value 238 match, and thus the prediction is successful.To illustrate the misprediction in some aspects of the handling instruction processing circuit 102, FIG. 2C is provided. In FIG. 2C, it is assumed that the entry 216 (0) in the data cache 110 has been updated to reflect the new actual literal load value 230 of 0x5678. As the instruction processing circuit 102 processes the instruction stream 200 again, the text load instruction 202 is detected. The instruction processing circuit 102 examines the literal load prediction table 104 to determine whether the address 206 is found in any of the entries 208 (0) through 208 (X) and locates the entry 208 (X) as indicated by arrow 248 Instructions. 2B, the instruction processing circuit 102 allocates the predictive word load value 238 provided by the entry 208 (X) to the entry 222 (0) in the constant cache 132 that corresponds to the register R 0, as indicated by the arrow 250. Predictive text load value 238 is then provided to dependent instruction 204 via constant cache 132, as indicated by arrow 252.After executing the word load instruction 202, the instruction processing circuit 102 immediately accesses the entry 216 (0) of the data cache 110 and obtains the actual text load value 230 as indicated by the arrow 254. The instruction processing circuit 102 then determines that the predicted text load value 238 provided by the text load prediction table 104 does not match the actual text load value 230 loaded by the text load instruction 202. Therefore, false prediction was detected.In response to the misprediction, the instruction processing circuit 102 starts erroneous prediction recovery. In the example of FIG. 2C, the operation for initiating the misprediction recovery includes updating the predictive word load value 238 in the entry 208 (X) of the literal load prediction table 104 so as to store the predicate load values generated by the execution literal load instruction 202 The actual text load value 230 (as indicated by arrow 256). In this manner, the actual word load value 230 may be provided to a future multi-type word load instruction 202 detected by the instruction processing circuit 102. It should be noted that in some aspects, different and / or additional operations may be performed as part of the mispredicted recovery, discussed below in more detail with respect to FIG. 4.3 is a flowchart illustrating an exemplary operation for predicting a word load value using the word load prediction table 104 of FIG. 1 and detecting a false prediction. For the sake of clarity, reference is made to the elements of FIG. 1 and FIGS. 2A to 2C when describing FIG. 3. The operation in FIG. 3 begins with the instruction processing circuit 102 of FIG. 1 detecting a first occurrence of a text load instruction 202 in the instruction stream 200 (block 300). Detecting the text load instruction 202 may be accomplished by, for example, identifying a customary form of load instruction in the instruction stream 200.The instruction processing circuit 102 next determines whether the address 206 of the word load instruction 202 exists in the entry 208 (X) of the word load prediction table 104 (block 302). 
If present, the instruction processing circuitry 102 provides the predicted word load value 238 stored in the entry 208 (X) for execution of at least one dependent instruction 204 (block 304) in relation to the word load instruction 202. Dependent instruction 204 may thus receive predicted word load value 238 without inducing "Load: Loss of Use."In order to check the mispredicted text load value, the instruction processing circuit 102 then determines whether the predictive text load value 238 matches the actual text load value 230 loaded by the text load instruction 202 (block 306) immediately after execution of the text load instruction 202. If the predicted word load value 238 matches the actual word load value 230, the instruction processing circuit 102 continues to process the instruction stream 200 (block 308). However, if a mismatch between the predicted word load value 238 and the actual word load value 230 is detected, the instruction processing circuit 102 initiates a false prediction recovery (block 310). The at least one dependent instruction 204 may then be re-executed using the actual text load value 230 (block 312), and the process continues at block 308.At decision block 302, if the instruction processing circuit 102 determines that the address 206 of the word load instruction 202 does not exist in the entry 208 (X) of the word load prediction table 104, the instruction processing circuit 102, immediately after executing the word load instruction 202 The entry 208 (X) is generated in the literal load prediction table 104 (block 314). The entry 208 (X) includes the address 206 of the text load instruction 202 and the actual text load value 230 stored as the predicted text load value 238. Processing then continues at block 308.To illustrate an example operation for initiating false prediction recovery in some aspects of the instruction processing circuit 102 of FIG. 1, FIG. 4 is provided. For the sake of clarity, reference is made to the elements of FIG. 1 and FIGS. 2A to 2C when describing FIG. 4. As seen in FIG. 3, the instruction processing circuit 102 may initiate a false prediction recovery (block 310 of FIG. 3) in response to detecting a mispredicted text load value. In some aspects, initiating the false prediction recovery may include updating the entry 208 (X) (block 400) by storing the actual text load value 230 as the predicted text load value 238. This may enable the instruction processing circuit 102 to provide the corrected predicted word load value 238 in response to detecting subsequent instances of the word load instruction 202.Some aspects may provide that initiating a misprediction recovery includes clearing entry 208 (X) from text load prediction table 104 (block 402). As a non-limiting example, emptying the entry 208 (X) may include deleting or deallocating the entry 208 (X) from the text load prediction table 104 or otherwise indicating that the entry 208 (X) is available for writing. Clearing the entry 208 (X) may thus form free space in the text load prediction table 104 for the text load instruction 202 that is encountered more frequently.According to some aspects of the instruction processing circuitry 102, initiating a false prediction recovery may include setting the unpredicted indicator 239 in the entry 208 (X) (block 404). In such aspects, the unpredictable indicator 239 is set to indicate that the literal load value prediction should not be performed on subsequent instances of the literal load instruction 202. 
This may apply to situations where, for example, a particular load instruction may be repeatedly detected as a literal load instruction 202, but it is known to load different values at different points during execution of the computer program. By using the unpredicted indicator 239, the instruction processing circuit 102 can avoid unnecessary processing cycle consumption when less likely correct word load value predictions are made.In this regard, FIG. 5 illustrates the operation of loading the unpredicted indicator 239 using the word load prediction table 104 of FIG. 1. For the sake of clarity, reference is made to the elements of FIG. 1 and FIGS. 2A to 2C when describing FIG. 5. In FIG. 5, operation begins with the instruction processing circuitry 102 of FIG. 1 detecting a second occurrence of a text load instruction 202 in the instruction stream 200 (block 500). In response, the instruction processing circuit 102 determines whether the address 206 of the word load instruction 202 exists in the entry 208 (X) of the word load prediction table 104 (block 502). If address 206 is not found, processing continues at block 314 of FIG. 3.If the instruction processing circuit 102 determines at block 502 that the address 206 was found in the entry 208 (X), the instruction processing circuit 102 next determines whether the unpredicted indicator 239 in the entry 208 (X) has been set (block 504 ). If not set, processing continues at block 304 of FIG. 3. However, if it is not predicted that the indicator 239 has been set, the instruction processing circuit 102 executes the word load instruction 202 without providing the predictive word load value 238 stored in the entry 208 (X) for execution of the at least one dependent instruction 204 506). Processing then continues at block 308 of FIG. 3.Predicting word load values using word load prediction tables according to aspects disclosed herein may be provided or integrated into any processor-based device. Examples include, but are not limited to, set top boxes, entertainment units, navigation devices, communication devices, fixed location data units, mobile location data units, mobile phones, cellular phones, computers, laptops, desktops, personal digital assistants ), Monitor, computer monitor, television, tuner, radio, satellite radio, music player, digital music player, portable music player, digital video player, video player, digital video disc (DVD) player Device and portable digital video player.In this regard, FIG. 6 illustrates an example of a processor-based system 600 that may use instruction processing circuitry 102 illustrated in FIGS. 1 and 2A-2C. In this example, the processor-based system 600 includes one or more central processing units (CPUs) 602, each including one or more processors 604. One or more processors 604 may include an instruction processing circuit (IPC) 102 of FIGS. 1 and 2A-2C. The CPU 602 may be a master device. The CPU 602 may have a cache memory 606 coupled to the processor 604 for quickly accessing temporarily stored data. The CPU 602 is coupled to the system bus 608 and may couple the master device and the controlled device included in the processor-based system 600 to each other. As is well known, the CPU 602 communicates with these other devices by exchanging address, control and data information on the system bus 608. 
For example, CPU 602 may transmit a bus transaction request to memory controller 610, which is an example of a controlled device.Other master and controlled devices may be connected to the system bus 608. As illustrated in FIG. 6, these devices may include, as an example, a memory system 612, one or more input devices 614, one or more output devices 616, one or more network interface devices 618, and one or more display controllers 620. Input device 614 may include any type of input device, including but not limited to input keys, switches, voice processors, and the like. Output device 616 may include any type of output device, including but not limited to audio, video, other visual indicators, and the like. The network interface device 618 may be any device that is configured to allow data to be exchanged to the network 622 and to exchange data from the network. Network 622 may be any type of network including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wide area network (WLAN), and the Internet. Network interface device 618 may be configured to support any type of communication protocol desired. Memory system 612 may include one or more memory cells 624 (0-N).The CPU 602 may also be configured to access the display controller 620 via the system bus 608 to control the information sent to the one or more displays 626. The display controller 620 sends the information to the display 626 via one or more video processors 628 for display, the one or more video processors process the information to be displayed into a format suitable for the display 626. Display 626 may include any type of display including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, and the like.Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, in a memory or in another computer-readable medium and represented by Instructions executed by a processor or other processing device, or a combination of both. As an example, the masters and controlled devices described herein can be used in any circuit, hardware component, integrated circuit (IC), or IC chip. The memory disclosed herein may be any type and size of memory and may be configured to store any type of information as desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How to implement such functionality depends on the particular application, design options, and / or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or implemented with a processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array A programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
The processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller or state machine. The processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.Aspects disclosed herein may be embodied in hardware and in instructions stored in hardware and may reside, for example, in a random access memory (RAM), flash memory, read only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of computer-readable media known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from the storage medium and write information to the storage medium. In the alternative, the storage medium may be integral with the processor. The processor and storage medium may reside in an ASIC. The ASIC can reside in a remote station. In the alternative, the processor and storage media may reside as discrete components in remote stations, base stations or servers.It should also be noted that the operating steps described in any of the exemplary aspects herein are described to provide examples and discussions. The described operations may be performed in a number of different sequences in addition to the ones illustrated. In addition, the operations described in a single operation step can actually be performed in a number of different steps. Additionally, one or more of the operational steps discussed in the illustrative aspects may be combined. It should be appreciated that those skilled in the art will readily appreciate that the operational steps illustrated in the flowcharts may be subject to numerous different modifications. Those of skill in the art should also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and codes that may be referred to in the above description may be represented by voltage, current, electromagnetic waves, magnetic fields or magnetic particles, light fields or particles of light or any combination thereof sheet.The previous description of the present invention is provided to enable any person skilled in the art to make or use the present invention. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Therefore, it is intended that the invention not be limited to the examples and designs described herein, but rather be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
The invention includes methods of forming field effect transistors, methods of forming field effect transistor gates, methods of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array, and methods of forming integrated circuitry comprising a transistor gate array including first gates and second grounded isolation gates. In one implementation, a method of forming a field effect transistor includes forming masking material over semiconductive material of a substrate. A trench is formed through the masking material and into the semiconductive material. Gate dielectric material is formed within the trench in the semiconductive material. Gate material is deposited within the trench in the masking material and within the trench in the semiconductive material over the gate dielectric material. Source/drain regions are formed. Other aspects and implementations are contemplated. |
CLAIMS What is claimed is: 1. A method of forming a field effect transistor, comprising: forming masking material over semiconductive material of a substrate; forming a trench through the masking material and into the semiconductive material; forming gate dielectric material within the trench in the semiconductive material; depositing gate material within the trench in the masking material and within the trench in the semiconductive material over the gate dielectric material; and forming source/drain regions. 2. The method of claim 1 wherein the masking material comprises silicon dioxide received over silicon nitride. 3. The method of claim 1 wherein forming at least a majority of the gate dielectric material comprises thermal oxidation of the semiconductive material within the trench. 4. The method of claim 1 wherein the depositing of the gate material at least fills the trench in the masking material and the trench in the semiconductive material with the gate material. 5. The method of claim 1 wherein the depositing of the gate material overfills the trench in the masking material and the trench in the semiconductive material with the gate material. 6. The method of claim 1 wherein the source/drain regions are formed within the semiconductive material of the substrate. 7. The method of claim 1 comprising removing at least a majority of the masking material after depositing the gate material. 8. The method of claim 1 being void of photolithographic patterning of the gate material after its deposition. 9. The method of claim 1 wherein depositing the gate material covers the masking material with the gate material, and comprising removing the gate material selectively relative to and exposing the masking material effective to isolate the gate material within the trench in the masking material and the trench in the semiconductive material. 10. A method of forming a field effect transistor, comprising: forming masking material over semiconductive material of a substrate, the masking material comprising an outer insulative material layer and an inner insulative material layer, the outer insulative material layer being selectively etchable relative to the inner insulative material layer; forming a trench through the masking material and into the semiconductive material; forming gate dielectric material within the trench in the semiconductive material; depositing gate material within the trench in the masking material and within the trench in the semiconductive material over the gate dielectric material; recessing the gate material within the trench in the masking material; capping the recessed gate material within the trench within the masking material with insulative material of common composition to that of the inner insulative material layer; etching the outer insulative material layer selectively relative to the inner insulative material layer and to the capping insulative material received over the recessed gate material; after etching the outer insulative material layer, depositing insulative material of common composition to that of the inner insulative material layer; anisotropically etching the insulative material of common composition to that of the inner insulative material layer effective to form insulative sidewall spacers about the gate material; and forming source/drain regions. 11. The method of claim 10 wherein the outer insulative material layer is thicker than the inner insulative material layer. 12. 
The method of claim 10 wherein the outer insulative material layer contacts the inner insulative material layer. 13. The method of claim 10 wherein the outer insulative material layer is the outermost material of the masking material. 14. The method of claim 10 further comprising another insulative material layer received inwardly of the inner insulative material layer. 15. The method of claim 10 wherein the outer insulative material layer comprises silicon dioxide and the inner insulative material layer comprises silicon nitride. 16. The method of claim 10 wherein the outer insulative material layer comprises silicon nitride and the inner insulative material layer comprises silicon dioxide. 17. The method of claim 10 wherein the depositing of the gate material at least fills the trench in the masking material and the trench in the semiconductive material with the gate material. 18. The method of claim 10 wherein the source/drain regions are formed within the semiconductive material of the substrate. 19. The method of claim 10 being void of photolithographic patterning of the gate material after its deposition. 20. A method of forming a field effect transistor gate, comprising: forming a silicon nitride-comprising masking material over semiconductive material of a substrate; forming a trench through the silicon nitride-comprising masking material and into the semiconductive material; removing silicon nitride of the masking material after forming the trench into the semiconductive material; prior to removing silicon nitride of the masking material, forming gate dielectric material within the trench in the semiconductive material; and depositing gate material within the trench in the semiconductive material over the gate dielectric material. 21. The method of claim 20 wherein forming at least a majority of the gate dielectric material comprises thermal oxidation of the semiconductive material within the trench. . . <.> 22. The method of claim 20 wherein the silicon nitride-comprising masking material comprises a silicon dioxide-comprising layer received over a silicon nitride-comprising layer. 23. The method of claim 22 wherein the silicon dioxide-comprising layer is thicker than silicon nitride of the silicon nitride-comprising masking material. 24. The method of claim 20 wherein the silicon nitride-comprising masking material comprises a first silicon dioxide-comprising layer, a silicon nitride-comprising layer over the first silicon dioxide-comprising layer, and a second silicon dioxide-comprising layer received over the silicon nitride- comprising layer. 25. The method of claim 24 wherein the second silicon dioxide- comprising layer is thicker than the silicon nitride-comprising layer. 26. The method of claim 25 wherein the first silicon dioxide-comprising layer is thinner than the silicon nitride-comprising layer. 27. The method of claim 20 wherein the depositing of the gate material at least fills the trench in the masking material and the trench in the semiconductive material with the gate material. 28. The method of claim 20 wherein the source/drain regions are formed within the semiconductive material of the substrate. 29. The method of claim 20 being void of photolithographic patterning of the gate material after its deposition. 30. 
A method of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array, comprising: forming masking material over semiconductive material of a substrate; forming array circuitry trenches through the masking material and into the semiconductive material; depositing array gate material within the array circuitry trenches in the masking material and within the array circuitry trenches in the semiconductive material; after depositing the array gate material, forming peripheral circuitry trenches through the masking material; and depositing peripheral circuitry gate material within the peripheral circuitry trenches within the masking material. 31. The method of claim 30 wherein forming the peripheral circuitry trenches exposes the semiconductive material of the substrate, and further comprising forming a gate dielectric layer over the exposed semiconductive material of the substrate prior to depositing the peripheral circuitry gate material, the gate dielectric layer also forming over the array gate material. 32. The method of claim 31 wherein the gate dielectric layer is formed on the array gate material. 33. The method of claim 32 wherein forming at least a majority of the gate dielectric layer comprises thermal oxidation of the exposed semiconductive material and of the array gate material. 34. The method of claim 30 wherein the array circuitry trenches are formed using a masking step, and further comprising forming grounded gate trenches through the masking material in the array in the same masking step in which the array circuitry trenches are formed. 35. The method of claim 30 wherein the peripheral circuitry trenches are formed using a masking step, and further comprising forming grounded gate trenches through the masking material in the array in the same masking step in which the peripheral circuitry trenches are formed. 36. The method of claim 30 wherein the masking material comprises silicon dioxide received over silicon nitride. 37. The method of claim 30 wherein the depositing of the array gate material at least fills the array circuitry trenches in the masking material and the array circuitry trenches in the semiconductive material with the array gate material. 38. The method of claim 30 wherein the depositing of the array gate material overfills the array circuitry trenches in the masking material and the array circuitry trenches in the semiconductive material with the array gate material. 39. The method of claim 30 wherein the depositing of the peripheral circuitry gate material at least fills the peripheral circuitry trenches in the masking material and the peripheral circuitry trenches in the semiconductive material with the peripheral circuitry gate material. 40. The method of claim 30 wherein the depositing of the peripheral circuitry gate material overfills the peripheral circuitry trenches in the masking material and the peripheral circuitry trenches in the semiconductive material with the peripheral circuitry gate material. 41. The method of claim 30 comprising removing at least a majority of the masking material after depositing the peripheral circuitry gate material. 42. The method of claim 30 being void of photolithographic patterning of the array gate material after its deposition. i 43. The method of claim 30 being void of photolithographic patterning of the peripheral circuitry gate material after its deposition. 44. 
The method of claim 30 being void of photolithographic patterning of the array gate material after its deposition, and being void of photolithographic patterning of the peripheral circuitry gate material after its deposition. 45. The method of claim 30 comprising forming some peripheral circuitry trenches and some array circuitry trenches in the same masking step. 46. A method of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array, comprising: forming masking material over semiconductive material of a substrate; forming array circuitry trenches through the masking material and into the semiconductive material; depositing array gate material within the array circuitry trenches in the masking material and within the array circuitry trenches in the semiconductive material; forming peripheral circuitry trenches through the array gate material and through the masking material; and depositing peripheral circuitry gate material within the peripheral circuitry trenches within the array gate material and within the masking material. 47. The method of claim 46 wherein forming the peripheral circuitry trenches exposes the semiconductive material of the substrate, and further comprising forming a gate dielectric layer over the exposed semiconductive material of the substrate prior to depositing the peripheral circuitry gate material, the gate dielectric layer also depositing over the array gate material. 48. The method of claim 47 wherein the gate dielectric layer forms on the array gate material. 49. The method of claim 48 wherein forming at least a majority of the gate dielectric layer comprises thermal oxidation of the exposed semiconductive material and of the array gate material. 50. The method of claim 46 wherein the array circuitry trenches are formed using a masking step, and further comprising forming grounded gate trenches through the masking material in the array in the same masking step in which the array circuitry trenches are formed. 51. The method of claim 46 wherein the peripheral circuitry trenches are formed using a masking step, and further comprising forming grounded gate trenches through the masking material in the array in the same masking step in which the peripheral circuitry trenches are formed. 52. The method of claim 46 wherein the masking material comprises silicon dioxide received over silicon nitride. 53. The method of claim 46 wherein the depositing of the array gate material at least fills the array circuitry trenches in the masking material and the array circuitry trenches in the semiconductive material with the array gate material. 54. The method of claim 46 wherein the depositing of the array gate material overfills the array circuitry trenches in the masking material and the array circuitry trenches in the semiconductive material with the array gate material. 55. The method of claim 46 wherein the depositing of the peripheral circuitry gate material at least fills the peripheral circuitry trenches in the masking material and the peripheral circuitry trenches in the semiconductive material with the peripheral circuitry gate material. 56. The method of claim 46 wherein the depositing of the peripheral circuitry gate material overfills the peripheral circuitry trenches in the masking material and the peripheral circuitry trenches in the semiconductive material with the peripheral circuitry gate material. 57. 
The method of claim 46 comprising removing at least a majority of the masking material after depositing the peripheral circuitry gate material. 58. The method of claim 46 being void of photolithographic patterning of the array gate material after its deposition. 59.. The method of claim 46 being void of photolithographic patterning of the peripheral circuitry gate material after its deposition. 60. The method of claim 46 being void of photolithographic patterning of the array gate material after its deposition, and being void of photolithographic patterning of the peripheral circuitry gate material after its deposition. 61. A method of forming field effect transistor gates, comprising: forming masking material over semiconductive material of a substrate, the substrate comprising a trench isolation region; in a common masking step, forming a first trench through the masking material and into the semiconductive material and forming a second grounded isolation gate trench through the masking material over the trench isolation region; and in a common deposition step, depositing gate material within the first trench and second trench. 62. The method of claim 61 comprising forming the second grounded isolation gate trench to within the trench isolation region during the common masking step. 63. The method of claim 61 wherein the depositing of the gate material at least fills the first and second trenches with the gate material. 64. The method of claim 61 wherein the depositing of the gate material overfills the first and second trenches with the gate material. 65. The method of claim 61 wherein the masking material comprises silicon dioxide received over silicon nitride. 66. The method of claim 61 comprising removing at least a majority of the masking material after depositing the gate material. 67. The method of claim 61 being void of photolithographic patterning of the gate material after its deposition. 68. The method of claim 61 wherein depositing the gate material covers the masking material with gate material, and comprising removing the gate material selectively relative to and exposing the masking material effective to isolate the gate material within the first and second trenches. 5 69. The method of claim 61 wherein the masking step comprises photolithography. 70. A method of forming integrated circuitry comprising a transistor gate 0 array including first gates and second grounded isolation gates, comprising: forming masking material over semiconductive material of a substrate, the substrate comprising trench isolation regions; forming first trenches through the masking material and into the semiconductive material for the first gates; 5 forming second grounded isolation gate trenches through the masking = material over the trench isolation regions for the second grounded isolation gates; and depositing gate material within the first and second trenches. 0 71. The method of claim 70 comprising forming the first and second trenches at the same time. 72. The method of claim 70 comprising forming the second trenches after forming the first trenches. 5 73. The method of claim 70 comprising forming the second trenches to within the trench isolation regions. 74. The method of claim 73 comprising forming the first and second 0 trenches at the same time. 75. The method of claim 73 comprising forming the second trenches after forming the first trenches. 76. The method of claim 70 wherein the depositing of gate material within the first and second trenches occurs in the same deposition step. 
77. The method of claim 70 wherein the depositing of gate material within the first and second trenches occurs in different deposition steps. 78. The method of claim 70 wherein some of the depositing of gate material within the first and second trenches occurs in the same deposition step, and another some of the depositing of gate material within the first and second trenches occurs in different deposition steps. ' 79. The method of claim 70 wherein the depositing of the gate material at least fills the first and second trenches with the gate material. 80. The method of claim 70 wherein the depositing of the gate material overfills the first and second trenches with the gate material. 81.<'> The method of claim 70 wherein the masking material comprises silicon dioxide received over silicon nitride. 82. The method of claim 70 comprising removing at least a majority of the masking material after depositing the gate material. 83. The method of claim 70 being void of photolithographic patterning of the gate material after its deposition. 84. The method of claim 70 wherein depositing the gate material covers the masking material with gate material, and comprising removing the gate material selectively relative to and exposing the masking material effective to isolate the gate material within the first and second trenches. |
DESCRIPTIONMETHODS OF FORMING FIELD EFFECT TRANSISTORS, METHODS OF FORMING FIELD EFFECT TRANSISTOR GATES, METHODS OF FORMING INTEGRATEDCIRCUITRY COMPRISING A TRANSISTOR GATE ARRAY AND CIRCUITRYPERIPHERAL TO THE GATE ARRAY, AND METHODS OF FORMING INTEGRATEDCIRCUITRY COMPRISING A TRANSISTOR GATE ARRAY INCLUDING FIRSTGATES AND SECOND GROUNDED ISOLATION GATESTECHNICAL FIELDThis invention relates to fabrication of field effect transistors and components thereof.BACKGROUND ARTField effect transistors are common devices utilized in integrated circuitry, for example in logic circuitry, memory circuitry and control circuitry for memory circuitry. Such devices typically comprise a pair of source/drain regions having a channel region received therebetween. A conductive gate is provided operably proximate the channel region, and is spaced therefrom by a gate dielectric region. Application of a suitable voltage to the conductive gate causes current flow between the source/drain regions through the channel region.By way of example only, the conductive material of the gate might be formed above or over semiconductive material or within openings formed in the semiconductive material, and for example whether within bulk monocrystalline substrate material or within semiconductor-on-insulator material. When formed within trenches or other openings in semiconductive material, some of such are referred to as recessed access devices. Here, masking material is provided over the semiconductive material of the substrate and patterned to form gate line trenches within the substrate. With the trenches so formed, the masking material is removed, and then a gate dielectric is formed within the trench openings, for example by thermal oxidation of exposed semiconductive material within the trench. Gate material is then deposited to overfill the trenches. The gate material received outwardly of the trenches is then patterned, typically using photolithography and etch, to form desired gate outlines over the trenches within which the gate material is also received. Typically, the gate material patterning forms the gate lines over the trenches to be very close to or of the same width as the underlying trenches. Photomask misalignment can undesirably place an edge of the desired gate line pattern within the lateral confines of the previously etched trench. This is highly undesirable, as the gate pattern etch can etch gate material within the trench, ultimately leading to circuitry failure or at least unacceptable device configuration and performance.While the invention was motivated in addressing the above identified issues, it is in no way so limited. The invention is only limited by the accompanying claims as literally worded, without interpretative or other limiting reference to the specification, and in accordance with the doctrine of equivalents.SUMMARYThe invention includes methods of forming field effect transistors, methods of forming field effect transistor gates, methods of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array, and methods of forming integrated circuitry comprising a transistor gate array including first gates and second grounded isolation gates. In one implementation, a method of forming a field effect transistor includes forming masking material over semiconductive material of a substrate. A trench is formed through the masking material and into the semiconductive material. 
Gate dielectric material is formed within the trench in the semiconductive material. Gate material is deposited within the trench in the masking material and within the trench in the semiconductive material over the gate dielectric material. Source/drain regions are formed.In one implementation, a method of forming a field effect transistor gate includes forming a silicon nitride-comprising masking material over semiconductive material of a substrate. A trench is formed through the silicon nitride-comprising masking material and into the semiconductive material. Silicon nitride of the masking material is removed after forming the trench into the semiconductive material. Prior to removing silicon nitride of the masking material, gate dielectric material is formed within the trench in the semiconductive material. Gate material is deposited within the trench in the semiconductive material over the gate dielectric material.In one implementation, a method of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array includes forming masking material over semiconductive material of a substrate. Array circuitry trenches are formed through the masking material and into the semiconductive material. Array gate material is deposited within the array circuitry trenches in the masking material and within the array circuitry trenches in the semiconductive material. After depositing the array gate material, peripheral circuitry trenches are formed through the masking material. Peripheral circuitry gate material is deposited within the peripheral circuitry trenches within the masking material.In one implementation, a method of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array includes forming masking material over semiconductive material of a substrate. Array circuitry trenches are formed through the masking material and into the semiconductive material. Array gate material is deposited within the array circuitry trenches in the masking material and within the array circuitry trenches in the semiconductive material. Peripheral circuitry trenches are formed through the array gate material and through the masking material. Peripheral circuitry gate material is deposited within the peripheral circuitry trenches within the array gate material and within the masking material.In one implementation, a method of forming field effect transistor gates includes forming masking material over semiconductive material of a substrate. The substrate comprises a trench isolation region. In a common masking step, a first trench is formed through the masking material and into the semiconductive material and a second grounded isolation gate trench is formed through the masking material over the trench isolation region. In a common deposition step, gate material is deposited within the first trench and second trench.In one implementation, a method of forming integrated circuitry comprising a transistor gate array including first gates and second grounded isolation gates comprises forming masking material over semiconductive material of a substrate. The substrate comprises trench isolation regions. First trenches are formed through the masking material and into the semiconduct[iota]ve material for the first gates. Second grounded isolation gate trenches are formed through the masking material over the trench isolation regions. Gate material is deposited within the first and second trenches. 
Other aspects and implementations are contemplated.BRIEF DESCRIPTION OF THE DRAWINGSPreferred embodiments of the invention are described below with reference to the following accompanying drawings. Fig. 1 is a diagrammatic sectional view of a semiconductor substrate fragment in process in accordance with an aspect of the invention.Fig. 2 is a view of the Fig. 1 substrate fragment at a processing step subsequent to that shown by Fig. 1.Fig. 3 is a view of the Fig. 2 substrate fragment at a processing step subsequent to that shown by Fig. 2.Fig. 4 is a view of the Fig. 3 substrate fragment at a processing step subsequent to that shown by Fig. 3.Fig. 5 is a view of the Fig. 4 substrate fragment at a processing step subsequent to that shown by Fig. 4. Fig. 6 is a view of the Fig. 5 substrate fragment at a processing step subsequent to that shown by Fig. 5.Fig. 7 is a view of the Fig. 6 substrate fragment at a processing step subsequent to that shown by Fig. 6.Fig. 8 is a view of the Fig. 7 substrate fragment at a processing step subsequent to that shown by Fig. 7.Fig. 9 is a view of the Fig. 8 substrate fragment at a processing step subsequent to that shown by Fig. 8.Fig. 10 is a view of the Fig. 9 substrate fragment at a processing step subsequent to that shown by Fig. 9. Fig. 11 is a view of the Fig. 10 substrate fragment at a processing step subsequent to that shown by Fig. 10.Fig. 12 is a view of the Fig. 11 substrate fragment at a processing step subsequent to that shown by Fig. 11. Fig. 13 is a view of the Fig. 12 substrate fragment at a processing step subsequent to that shown by Fig. 12.Fig. 14 is a view of the Fig. 13 substrate fragment at a processing step subsequent to that shown by Fig. 13. Fig. 15 is a diagrammatic sectional view of an alternate embodiment semiconductor substrate fragment in process in accordance with an aspect of the invention.Fig. 16 is a view of the Fig. 15 substrate fragment at a processing step subsequent to that shown by Fig.. 15. Fig. 17 is a view of the Fig. 16 substrate fragment at a processing step subsequent to that shown by Fig. 16.Fig. 18 is a diagrammatic sectional view of another alternate embodiment semiconductor substrate fragment in process in accordance with an aspect of the invention. Fig. 19 is a view of the Fig. 18 substrate fragment at a processing step subsequent to that shown by Fig. 18.Fig. 20 is a diagrammatic sectional view of still another alternate embodiment semiconductor substrate fragment in process in accordance with an aspect of the invention.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTSThe invention includes methods of forming field effect transistor gates, methods of forming field effect transistors, and methods of forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array. The discussion proceeds primarily with reference to forming integrated circuitry comprising a transistor gate array and circuitry peripheral to the gate array, while the artisan will appreciate aspects of the invention apply to forming a single field effect transistor as well as to multiple field effect transistors, and one or more field effect transistor gates thereof. Referring initially to Fig. 1, a semiconductor substrate in process is indicated generally with reference 10. 
In the context of this document, the term "semiconductor substrate" or "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Substrate 10 is depicted as comprising an array area or region 12 within which a field effect transistor gate array will be fabricated and a peripheral circuitry area 14 peripheral to gate array area 12. By way of example only, array area 12 might be utilized for fabrication of memory circuitry, for example DRAM circuitry, while peripheral circuitry area 14 might include control circuitry for operating/controlling memory circuitry within array area 12. Alternate configurations are of course contemplated, for example utilizing gate arrays and field effect transistors within logic, control or other circuitries. Substrate 10 is depicted as comprising semiconductive material 11, for example bulk monocrystalline silicon. Other semiconductive material substrates are also of course contemplated, for example semiconductor-on-insulator substrates, and whether existing or yet-to-be developed. Semiconductive material 11 is ideally suitably background doped, or doped to form a doped well, to be of a suitable conductivity type(s) and concentration(s). Exemplary preferred trench isolation regions 13, 15, 16, 17 and 18 have been fabricated relative to semiconductive substrate material 11.Referring to Fig. 2, masking material 20 has been formed over semiconductive material 11 of substrate 10. Such is depicted as comprising an innermost pad oxide layer 22 (exemplary preferred thickness range of from 30 Angstroms to 100 Angstroms), a masking layer 24 of different composition to that of material 22 received over material 22 (a preferred exemplary thickness range being from 50 Angstroms to 300 Angstroms), and a masking layer 26 formed over and of different material to that of masking layer 24 (an exemplary preferred thickness range being from 1 ,000 Angstroms to 3,000 Angstroms). Some or all of masking material 20 might be sacrificial, thereby being ultimately removed from the substrate. Accordingly, some portions or all of masking material 20 might be any of electrically insulative, semiconductive, or conductive. An exemplary preferred material for layer 24 is silicon nitride, while an exemplary preferred material for layer 26 is undoped silicon dioxide. A further exemplary alternate embodiment, and by way of example, forms layer 24 to comprise silicon dioxide and layer 26 to comprise silicon nitride. 
Regardless and accordingly in but only one preferred implementation, masking material 20 comprises silicon dioxide and silicon nitride, and in a more preferred embodiment comprises silicon dioxide received over silicon nitride.In one preferred implementation, layer 26 can be considered as comprising an outer insulative material layer and layer 24 can be considered as comprising an inner insulative material layer, wherein the outer insulative material layer is selectively etchable relative to the inner insulative material layer, and independent of whether another insulative material layer (such as layer 22) is received inwardly of inner insulative material layer 24. In one preferred implementation, outer insulative material layer 26 is thicker than inner insulative material layer 24, and in one preferred implementation as shown, contacts inner insulative material layer 24. Further in the depicted exemplary embodiment, outer insulative material layer 26 is the outermost material of masking material 20 at least at the conclusion of its patterning. Further, layer 24 is preferably thicker than layer 22 in but one exemplary implementation.Referring to Fig. 3, array circuitry trenches 28 have been formed through masking material 20. An exemplary preferred technique includes photolithographic patterning and etch using one or more photoresist or other layers (not shown). Fig. 3 depicts such photoresist or other layers as having been removed over masking material 20, although some or all of such might remain at the conclusion of the Fig. 3 processing where photolithography is utilized.Referring to Fig. 4, masking material 20 has been utilized as a mask to form array circuitry trenches 30 into semiconductive material 11. Accordingly in one preferred embodiment, depicted trenches 28 and 30 are formed using a single masking step, for example utilizing photolithography. An exemplary preferred depth range for trenches 30 within semiconductive material 11 from an outer surface thereof is from 300 Angstroms to 2,500 Angstroms.Referring to Fig. 5, gate dielectric material 32 has been formed within trenches 30 in semiconductive material 11. In one preferred implementation, at least a majority of gate dielectric material 32 is formed by thermal oxidation of semiconductive material 11 within trenches 30. The depicted exemplary embodiment depicts essentially all of such gate dielectric material having been formed by thermal oxidation, although deposition of gate dielectric material with or without thermal oxidation of material 11 within array trenches 30 is also of course contemplated.Referring to Fig. 6, array gate material 34 has been deposited within array circuitry trenches 28 within masking material 20 and within array circuitry trenches 30 within semiconductive material 11, and over gate dielectric material 32. Preferably, array gate material 34 is deposited to at least fill trenches 28 and 30, and most preferably to overfill such trenches and also depositing gate material 34 to cover masking material 20. Exemplary preferred materials 34 include conductively doped semiconductive materials, such as conductively doped polysilicon either in situ doped during deposition or subsequently. Other conductive materials might also be utilized, such as conductive metal or metal compounds but are not preferred at this point in the process.Referring to Fig. 
7, after depositing array gate material 34, peripheral circuitry trenches 36 have been formed through masking material 20 and, in the depicted embodiment where material 34 is received thereover, also through array gate material 34. Fig. 7 also depicts in one implementation fabrication of a grounded gate trench 37 through masking material 20 within array region 12, for example over one or more of the trench isolation regions. In the context of this document, a grounded gate is an isolation gate which is fabricated to be received over at least some field isolation and held at ground or other suitable potential for providing an isolation function towards precluding or reducing formation of parasitic field effect transistor current flow beneath or around field isolation regions. If desired, some or all of trenches 36, 37 might be fabricated to etch/extend into material of semiconductive material- 11 and/or field/trench isolation material.Referring to Figs. 7 and 8, preferred embodiment trenches 36, 37 preferably expose semiconductive material 11 of substrate 10. Fig. 8 depicts one preferred implementation wherein a gate dielectric layer 38 is formed over exposed semiconductive material 11 within peripheral circuitry trenches 36. Such might be formed, by way of example only, by a thermal oxidation wherein at least a majority of the gate dielectric layer is comprised of oxidized semiconductive material (as shown). Such might also of course be combined with or substituted by deposition of a gate dielectric layer with or without thermal oxidation of substrate material 11. Further in the depicted exemplary embodiment, gate dielectric layer 38 also essentially forms over (and "on" as shown) array gate material 34, and will typically be subsequently removed from thereover as described below. Regardless, the gate dielectric material 38 might be the same or different as gate dielectric material 32 of the array circuitry trenches 30, thereby enabling optimization of gate dielectric for different areas of circuitry. A preferred manner of forming trenches 36 and 37 is in a single masking step common to the formation of both types of trenches, for example, utilizing photolithography. In certain implementations, one or both of trenches 36 and 37 might not be formed at all, or at other times if formed, and which is described below by way of example only in possible likely alternative embodiments.Regardless, Fig. 7 depicts one exemplary preferred embodiment wherein grounded gate trenches in the array and peripheral circuitry trenches are formed in the same masking step. Further, of course, grounded gate trenches might also be fabricated within peripheral circuitry area 14.Referring to Fig. 9, peripheral circuitry gate material 40 has been deposited within peripheral circuitry trenches 36 within masking material 20, and in the depicted exemplary embodiment within the corresponding peripheral circuitry trenches also formed within array gate material 34. Gate material 40 might be the same as or different from material 34, thereby enabling optimization of conductivity type and/or work function of the conductive gate material being formed for different gates. Further in the depicted exemplary embodiment, peripheral circuitry gate material 40 is also utilized in the fabrication of grounded gates, depositing also within grounded gate trenches 37. 
In the depicted exemplary preferred embodiment, peripheral circuitry gate material 40 is deposited to a thickness to at least fill, and preferably overfill, peripheral circuitry trenches 36 with peripheral circuitry gate material 40, and to at least fill, and preferably overfill, grounded gate trenches 37.Referring to Fig. 10, array gate material 34, peripheral circuitry gate material 40, and dielectric layer 38 therebetween have been removed selectively relative to and outwardly exposes masking material 20 effective to isolate the respective gate materials within the respective trenches in masking material 20 and in semiconductive material 11 where such are so formed. In the context of this document, a selective removal requires removal (for example by etching or other means) at a rate which removes one material relative to another at 2:1 or greater. In the depicted exemplary embodiment, such removing has been effective to recess gate materials 34 and 40 within the depicted trenches 28, 36 and 37 formed within masking material 20. Exemplary preferred techniques include any one or combination of chemical mechanical polishing, resist etch back or timed chemical etching. Where, for example, materials 34 and 40 comprise polysilicon and outer layer 26 of masking material 20 comprises silicon nitride, an exemplary etching chemistry capable of producing the Fig. 10 construction in a timed etch includes tetramethyl ammonium hydroxide followed by exposure to a hydrofluoric acid solution.Referring to Fig. 11, an exemplary higher conductive layer 42 has been deposited (i.e., a refractory metal, other metal, or metal suicide) and polished or etched back, followed by deposition of an insulative material layer 44 followed by polishing or other etch back of it. Such thereby, in one exemplary preferred embodiment, caps recessed gate materials 34 and 40 within masking material 20 with insulative material 44. In one preferred embodiment, insulative material 44 is of common composition to that of inner layer 24 of masking material 20 where such is formed of insulative material. Accordingly by way of example only, materials 44 and 24 might comprise silicon nitride where material 26 comprises silicon dioxide, or the reverse in but preferred embodiments. Referring to Fig. 12 and in but one preferred embodiment, outer layer 26 of masking material 20 has been etched selectively relative to inner layer 24 and to capping insulative material 44 received over recessed gate materials 34 and 40. In one preferred implementation, an aspect of the invention includes forming gate dielectric material within the trenches, for example material 32, prior to removing silicon nitride of the masking material when such is utilized.Referring to Fig. 13 and in but one preferred embodiment, insulative material 50 preferably of common composition to that of inner insulative material layer 24 of masking material 20 has been deposited over substrate 10 as shown. Referring to Fig. 14, material 50 and material 24 have been anisotropically etched effective to form insulative sidewall spacers 52 about gate materials 34,40 and 42. Some or all of pad oxide layer 22 (when such is utilized) might be removed earlier or at this point in the process, or some might remain as part of the finished circuitry construction. Regardless in one preferred embodiment, aspects of the invention include removing at least a majority of the masking material at some point after at least gate material 34 has been deposited. 
In i most preferred embodiments, such methods of forming field effect transistor: gates, field effect transistors, and transistor gate arrays and circuitry peripheral to the gate array are preferably void of photolithographic patterning of any one or combination of gate materials 34, 38 and 42 after such has/have been deposited.Fig. 14 depicts fabrication of source/drain regions 56, with such most preferably being formed within semiconductive material 11 of substrate 10. Such might be formed by one or a combination of ion implants of suitable conductivity enhancing dopant(s) during any of the above processing steps. Further of course, other channel, channel stopping, or other implants, whether existing or yet-to-be developed, could be conducted during any of the above processing.Alternate embodiments are of course contemplated with the invention only being limited by the claims as literally worded without reading limitations from other claims, the drawings, or specifications into the claims. By way of example only, a few exemplary alternate embodiments will. now be described. Referring to Fig. 15, such depicts a semiconductor substrate 10a corresponding to or a substitute for the Fig. 4 depicted processing with respect to the first described embodiments. Like numerals from the first described embodiments have been utilized where appropriate, with differences being indicated with the suffix "a" or with different numerals. Fig. 15 depicts substrate fragment 10a which includes the forming of grounded gate trenches 37a through masking material 20 in the array in the same masking step in which array circuitry trenches 28 and 30 are formed. Further by way of example only in the depicted embodiment, grounded gate trenches 37a have been formed to extend into the trench isolation regions, such as trench isolation region 15.Referring to Fig. 16, gate dielectric material 32 has been formed, and gate material 34a has been deposited to within grounded gate trench 37a.Referring to Fig. 17, subsequent processing has occurred to a point of fabrication of anisotropically etched insulative sidewall spacers 52 and source/drain regions 56. Processing, materials, etc. are otherwise preferably as provided above in the first described embodiments of Figs. 1-14. Further by way of example only, another exemplary embodiment<-> processing with respect to a substrate fragment 10b is described with reference to Figs. 18 and 19. Like numerals from the first and second described embodiments have been utilized where appropriate, with differences being indicated with the suffix "b" or with different numerals. Fig. 18 corresponds in processing sequence to that of Fig. 4, and wherein one or more peripheral circuitry trenches 36b have been formed commensurate with formation of array circuitry trenches 28, 30. Such might be advantageously utilized wherein certain transistors of the peripheral circuitry and the array circuitry are desired to be of the same conductivity type and/or work function, and/or other desired property. Fig. 19 depicts subsequent gate dielectric 32 fabrication, gate material 34b deposition, and then subsequent patterning of masking material 20b and gate material 34b to form, by way of example only, grounded gate trenches 37b and another peripheral circuitry trench 36b. Accordingly, some of the peripheral circuitry trenches might be formed commensurate with formation of the array circuitry trenches. 
Subsequent processing could occur, for example, analogously or otherwise to that depicted and described relative to Figs. 8-14. Fig. 20, by way of example only, depicts alternate exemplary processing with respect to a substrate fragment 10c. Like numerals from the above- described embodiments have been utilized where appropriate, with differences being indicated with the suffix "c" or with different numerals. Fig. 20 depicts processing whereby array trenches 28, 30 have been fabricated using a masking step separate from fabrication of any other line trenches in the depicted cross section. Subsequent thereto, grounded gate isolation trenches 37 and one peripheral circuitry gate trench 70 have been fabricated in a common masking step, and gate material 40c deposited thereover. Thereafter, another masking has been conducted through masking material 20 and the previously deposited gate materials to form another peripheral circuitry trench 74. Gate dielectric 71 has been formed (for example by any of the above described processes relative . to gate dielectric material fabrication). Subsequently, gate material 76 has been deposited which may be the same or different from any of the above exemplary gate materials. Processing could otherwise ideally proceed subsequently commensurate with or different from the above-described embodiments as . depicted and described relative to Figs. 8-14 for example.Aspects of the invention also encompass a method of forming field effect* transistor gates which include forming masking material over semiconductive material of the substrate, and where the substrate comprises a trench isolation region. Exemplary embodiments, by way of example only, are those described above. In a common masking step, a first trench is formed through the masking material and into the semiconductive material and a second grounded isolation gate trench is formed through the masking material over the field isolation region. Such masking step in one preferred implementation comprises photolithography.Further in one implementation, the second grounded isolation gate trench might be fabricated to extend within the field isolation region during the stated common masking step.Subsequently in a common deposition step, gate material is deposited within the first trench and the second trench. Such common deposition step preferably at least fills, and more preferably overfills, the first and second trenches with the gate material. In one preferred implementation, at least amajority of the masking material is removed after depositing the gate material. In one preferred implementation, the process is void of any photolithographic patterning of the gate material after its deposition. In one implementation, the gate material as deposited covers the masking material with gate material, and the process further comprises removing the gate material selectively relative to and exposing of the masking material effective to isolate the gate material within the first and second trenches.In one implementation, an aspect of the invention encompasses a method of forming integrated circuitry comprising a transistor gate array including first gates and second grounded isolation gates. Masking material is formed over semiconductive material of a substrate, and the substrate comprises trench isolation regions. First trenches are formed through the masking material and into the semiconductive material for the first gates. 
Second grounded isolation gate trenches are formed through the masking material over the field isolation regions for the second grounded isolation gates. Gate material is deposited within the first and second trenches.The first and second trenches might be formed at the same time or at different times, for example either before or after the other. The second trenches might be formed within the field isolation regions or received only outwardly thereof.Depositing of the gate material within the first and second trenches might occur in the same deposition step, or might occur in different deposition steps. Further, some of the depositing of the gate material within the first and second trenches might occur in the same deposition step, and another of some of the depositing of gate material within the first and second trenches might occur in different deposition steps. Regardless and preferably, depositing of the gate material at least fills, and even more preferably overfills, the first and second trenches with the gate material. Processing is otherwise preferably as described above with respect to the other embodiments. |
This disclosure provides systems, methods and apparatus for glass via bars that can be used in compact three-dimensional packages, including package-on-packages (PoPs). The glass via bars can provide high density electrical interconnections in the PoPs. In some implementations, the glass via bars can include integrated passive components. Packaging methods employing glass via bars are also provided. |
1.A stacked package comprising:a bottom package that is vertically integrated with the second package, whereinThe bottom package includes a first die and a glass via bar;The second package includes a second die;The first die is in electrical communication with the second die through the glass via bar.2.The stacked package of claim 1 wherein the first die is selected from the group consisting of: a logic die, a memory die, a MEMS MEMS die, a RF RF die, a power integrated circuit IC die , sensor die, and actuator die.3.The stacked package of claim 1 or claim 2, wherein the second die is selected from the group consisting of: a logic die, a memory die, a MEMS MEMS die, a RF RF die, power integration Circuit IC die, sensor die, and actuator die.4.A stacked package according to any of the preceding claims, wherein the first die and the second die are different types of dies.5.The stacked package of claim 4 wherein the first die is a logic die and the second die is a memory die.6.The stacked package of claim 5 wherein the memory die is attached to the substrate by flip chip attachment.7.A stacked package according to claim 5 or claim 6, wherein the memory die is a TSV TSV memory die.8.A stacked package according to any of the preceding claims, further comprising a third package vertically integrated with the bottom package and the second package such that the second package is disposed in the bottom package Between the third package and the third package.9.A stacked package according to any of the preceding claims, wherein said glass via bars comprise integrated passive components.10.The stacked package of claim 9 wherein said integrated passive component is one of a resistor, an inductor, and a capacitor or a combination thereof.11.A stacked package according to claim 9 or claim 10 wherein said first die comprises a processor and said integrated passive component is electrically coupled to said processor.12.A stacked package according to any one of claims 9 to 11, wherein said integrated passive component is electrically connected to a through hole extending through said glass via bar.13.A stacked package according to any of the preceding claims, further comprising an electronic device printed circuit board PCB attached to said bottom package and in electrical communication with said bottom package.14.A stacked package according to any of the preceding claims, wherein said bottom package further comprises a mold in which said first die and said glass via bar are embedded.15.A package that includes:Package substrateDie;A glass rod comprising one or more glass perforations in electrical communication with the die.16.The package of claim 15 further comprising a mold in which said glass rod and said die are embedded, said mold being disposed on said package substrate and attached to said package substrate.17.A package according to claim 15 or claim 16 wherein the die is a logic die.18.A package according to any one of claims 15 to 17, wherein the glass rod further comprises an integrated capacitor.19.A package according to any one of claims 15 to 18, further comprising an inter-level interconnect disposed on the package substrate opposite the mold.20.A package according to any one of claims 15 to 19 wherein said one or more glass perforations provide a conductive path extending through the thickness of said mold.21.A system comprising the package of any one of claims 15 to 20, the system further comprising:monitor;a processor configured to communicate with the display, the processor configured to 
process image data; andA memory device configured to communicate with the processor.22.The system of claim 21 further comprising:a drive circuit configured to transmit at least one signal to the display;a controller configured to transmit at least a portion of the image data to the drive circuit, wherein one or more of the processor, memory device, drive circuit, and controller comprise a through-bar with the glass Electrically connected components.23.The system of claim 21 further comprising:An image source module configured to send the image data to the processor,Wherein the image source module includes at least one of a receiver, a transceiver, and a transmitter, and wherein one or more of the processor, the memory device, the receiver, the transceiver, and the transmitter comprise The component that the hole rod is electrically connected to.24.The system of claim 21 further comprising:An input device configured to receive input data and to communicate the input data to the processor.25.A method comprising:Placing the die on the package substrate;Placing one or more glass via bars on the package substrate;The die and the one or more glass via bars are attached to the substrate by solder reflow such that the one or more glass via bars are in electrical communication with the die.26.The method of claim 25 wherein the die is a logic die.27.A method according to claim 25 or claim 26 wherein at least one of the one or more glass via bars comprises an integrated capacitor or an integrated solenoid or toroidal inductor.28.A method according to any one of claims 25 to 27, further comprising testing the die prior to placing the die on the package substrate.29.A method according to any one of claims 25 to 28, further comprising testing the one or more glass vias prior to placing the one or more glass via bars on the package substrate Baton.30.A method according to any one of claims 25 to 29, further comprising applying a molding compound to the substrate and curing the molding compound.31.The method of claim 30, further comprising mounting a solder ball on a surface of the substrate opposite the molding compound. |
Incorporation of passive components and fine pitch vias for stacked packagesPriority claimPriority is claimed on the following application: "Incorporating Passive Components for Substrate Packages and Fine-Pitch Through Holes (INCORPORATION OF PASSIVES & FINE PITCH THROUGH VIA FOR PACKAGE ON), filed on August 3, 2012 PACKAGE) No. 61/679, 625 (Attorney Docket QUAL 169PUS/123236P1) US Provisional Application, and January 23, 2013, entitled "Passive Components for Stacked Packages and Fine-Pitch Through Holes" U.S. Patent Application Serial No. 13/748,294, the disclosure of which is hereby incorporated by reference in its entirety in its entirety in its entirety in its entirety herein inTechnical fieldField of the Invention This invention relates generally to packaging of devices and, more particularly, to glass via bars for interconnecting multiple layers of substrates, substrates, semiconductor dies or other components.Background techniqueA microelectronic device can include multiple components including an electromechanical system (EMS) die. For example, an EMS die can be electrically connected to a driver integrated circuit (IC) die in an electronic device. Electromechanical systems include devices having electrical and mechanical components, actuators, transducers, sensors, optical components (including mirrors), and electronic components. Electromechanical systems can be fabricated at a variety of scales, including but not limited to microscale and nanoscale. Microelectromechanical systems (MEMS) devices can include structures ranging in size from about one micron to hundreds of microns or greater than hundreds of microns. Nanoelectromechanical systems (NEMS) devices can include structures that are less than one micron in size (including, for example, less than a few hundred nanometers in size).The package in the system protects the functional units of the system from the environment, provides mechanical support for the system components, and provides an interface to the electrical interconnects. A three-dimensional (3-D) package with multiple stacked dies can reduce the size of the package in a microelectronic system.Summary of the inventionThe systems, methods and devices of the present invention each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.One innovative aspect of the subject matter described in this disclosure can be implemented in a stacked package (PoP) that includes a bottom package that is vertically integrated with a second package, wherein the bottom package includes a first a die and at least one glass via bar, and the second package includes a second die such that the first die is in electrical communication with the second die through one or more glass via bars. In some embodiments, the bottom package further includes a mold in which the first die and the glass via bar are embedded.The first die and the second die can be independently, for example, a logic die, a memory die, a microelectromechanical system (MEMS) die, a radio frequency (RF) die, a power integrated circuit (IC) Die, sensor die, and actuator die. In some implementations, the first die and the second die are different types of dies. For example, in some implementations, the first die is a logic die and the second die is a memory die. For example, the memory die can be attached to the substrate by flip chip attachment. In some implementations, the memory die can be a through silicon via (TSV) memory die. 
In some other implementations, the first die and the second die can be the same type of die. For example, in some implementations, the first die and the second die can each be a memory die, each of which can be a logic die, or can be a MEMS die. The stacked package may further include a third package vertically integrated with the bottom package and the second package such that the second package is disposed between the bottom package and the third package.In some embodiments, the glass via rods comprise integrated passive components. Examples of integrated passive components include resistors, inductors, and capacitors, and combinations thereof. The stacked package can further include an electronic device printed circuit board (PCB) attached to the bottom package and in electrical communication with the bottom package.Another innovative aspect of the subject matter described in this disclosure can be implemented in a package comprising a package substrate, a die, and a glass rod, the glass rod comprising one or more in electrical communication with the die Multiple glass perforations. The package may further comprise a mold in which the glass rod and the die are embedded, wherein the mold is disposed on the package substrate and attached to the package substrate. In some embodiments, the one or more glass perforations provide a conductive path that extends through the thickness of the mold.Another innovative aspect of the subject matter described in this disclosure can be practiced in a method comprising placing a die on a package substrate and placing one or more glass via bars on the package liner Attaching the die and the one or more glass via bars to the substrate via solder reflow, such that the one or more glass via bars are in electrical communication with the die . In some embodiments, the method further comprises applying a molding compound to the substrate and curing the molding compound. The method can further include testing the die prior to placing the die on the package substrate. The method can further include testing the one or more glass via bars prior to placing the one or more glass via bars on the package substrate.The details of one or more embodiments of the subject matter described in the specification are set forth in the drawings and the description below. Other features, aspects, and advantages will be apparent from the description, drawings and claims. 
It should be noted that the relative dimensions of the following figures may not be drawn to scale.DRAWINGS1A to 1C show examples of isometric schematic illustrations of glass through-hole bars.2 shows an example of an isometric schematic illustration of a portion of a glass via rod comprising a passive component.3 shows an example of a flow diagram illustrating a batch manufacturing process for a glass via rod.Figure 4 shows an example of a flow diagram illustrating a fabrication process for a glass via rod using photopatternable glass.Figures 5A through 5G show examples of cross-sectional schematic illustrations of various stages in a method of making a glass through-hole rod.Figure 5H shows an example of a glass via rod comprising an integrated capacitor formed in a trench.Figure 6 shows an example of a schematic cross-sectional illustration of a stacked package (PoP) comprising a glass via bar.7 through 11 show examples of flow diagrams illustrating a PoP process using a glass via bar.12A and 12B show an example of a system block diagram illustrating a display device including a packaged semiconductor chip electrically connected to a glass via bar.Similar reference numerals and names in the various drawings indicate similar elements.Detailed waysThe following description relates to certain embodiments for the purpose of describing the innovative aspects of the invention. However, those skilled in the art will readily recognize that the teachings herein can be applied in many different ways. Thus, the teachings are not intended to be limited to the embodiments depicted in the drawings, but may beSome embodiments described herein relate to glass via bars comprising glass perforations. Glass via bars can be used to provide inter-level connections, for example, in stacked three-dimensional (3-D) packages. In some embodiments, the glass via bars can be part of a stacked package (PoP). In some embodiments, the glass via bars can comprise a high density array of glass perforations. In some embodiments, the glass via bars can comprise one or more passive components embedded in the surface of the glass via bars and/or embedded within the glass via bars.Some embodiments described herein relate to packages comprising glass via bars. In some implementations, the package can be a PoP or a discrete package configured for PoP packaging. The package can include one or more semiconductor dies and one or more glass via bars embedded within the mold structure. The glass via bars can have one or more passive components on the glass via bars or in the glass via bars. The package may further comprise inter-level interconnects such as solder balls.Some embodiments described herein relate to methods of making glass via bars. A method of making a glass via bar can include forming and filling a glass via of a large area glass substrate and singulating the substrate to form a plurality of glass via bars. In some embodiments, passive components can be formed on a glass substrate prior to singulation. In some embodiments, forming the glass vias can include patterning and etching the photopatternable glass. Some embodiments described herein relate to methods of making packages comprising glass via bars. 
A method of making a package comprising a glass via bar can include forming a mold structure in which one or more semiconductor dies and one or more glass via bars are embedded.Particular embodiments of the subject matter described in this disclosure can be implemented to achieve one or more of the following possible advantages. In some embodiments, the glass via bars can provide the ability to scale the via pitch from 500 microns to 50 microns and the via diameter from 200 microns to 30 microns. The advantages of proportional spacing and diameter include the flexibility to make smaller packages and increase capacity and package design.In some embodiments, passive components can be fabricated together with glass via bars and incorporated into the glass via bars. The advantages of incorporating passive components into a glass via bar include the ability to place passive components closer to the semiconductor die in the package, reduce electrical path length, increase performance, reduce component count, and simplify assembly, And reduce costs.In some embodiments, the glass via bars can be tested prior to incorporating the glass via bars into the package. The ability to test through-holes and passive components can provide high throughput in the subsequent process of assembling known good components. In some embodiments, a glass via bar can facilitate fabrication of a stacked die package.The package of the device comprising the EMS device and the integrated circuit device protects the functional unit of the device from the environment, provides mechanical support for the device, and provides a high density interface for electrical interconnection between the device and the substrate.Embodiments described herein relate to glass through-hole rods comprising glass perforations. Glass via bars can be used, for example, to provide inter-level connections in a stacked three-dimensional (3-D) package. In some embodiments, the glass via rod can be part of a PoP. The PoP comprising the glass via bars is further described below with respect to FIG.1A to 1C show examples of isometric schematic illustrations of glass through-hole bars. FIG. 1A shows an example of a glass via bar 100 comprising glass vias 106. The glass via bar 100 has a length L, a width W, and a height H. (It should be noted that the geometrical arrangement is not to scale, in which the height is increased for illustrative purposes.) The example dimensions of the glass via bar 100 include a length L between about 1 mm and 6 mm, between about 1 A width W between millimeters and 6 millimeters, and a height H between about 300 micrometers and 700 micrometers. In embodiments in which the glass via bars 100 are to be packaged in a mold structure as described below with respect to Figures 6 and 7, the height H may be equal to the thickness of the mold structure. In some embodiments, the length and width of the glass via bars can be large, for example, up to about 15 millimeters. Although the glass via bar 100 in the example of Fig. 1A and the remaining figures is a rectangular cube, the glass via bar 100 may have any shape. For example, the glass via bar 100 can have a 3-D L shape, a cylindrical shape, or other shape suitable for a particular package layout, with dimensions ranging from about 1 mm to 15 mm. Moreover, although depicted as being transparent in the associated figures, the glass via bars 100 can be transparent or non-transparent. 
The glass via bars can be borosilicate glass, soda lime glass, quartz, Pyrex or other suitable glass materials. In some embodiments, the glass substrate is a borosilicate glass substrate that can be ablated by laser radiation. In some embodiments, the glass substrate is a photo-patternable glass substrate.Glass perforations 106 extend through the glass via bars 100 to provide a conductive path between the opposing faces. An example diameter of the glass via 106 can range from about 30 microns to 100 microns. The glass perforations 100 can also have any suitable shape. For example, in some embodiments, the via opening for the glass via 100 can be circular, semi-circular, oval, rectangular, polygonal, rectangular with rounded edges, sharp edges of the polygon, or other shape. Also, according to various embodiments, the glass perforations 100 can have a linear or curved sidewall profile. The glass through-hole bar 100 can comprise any number of glass perforations placed or arranged in any regular or irregular arrangement. For example, the glass via bar 100 can have a number of glass vias 106 between about 1 and 24. The example spacing (center-to-center distance) of the glass vias 106 in the glass via bars can range from about 40 microns to about 200 microns. In some embodiments, the glass perforated rods 106 have a pitch equal to or less than about 100 microns.In some embodiments, the glass via bar 100 can comprise partially or unfilled glass vias. 1B shows an example of a glass via bar 100 comprising glass vias 106 and unfilled glass vias 132 that can be formed as glass vias by the addition of a conductive material. In some embodiments, the glass perforated rod can be provided with an arrangement of glass perforations 106 and unfilled glass perforations 132 for a particular package layout. The unfilled glass vias 132 facilitate the mass production of the glass via bars 100 without wasting conductive materials that are not used in a particular layout. In some embodiments, the glass via bar 100 can comprise a glass via filled with a non-conductive material. 1C shows an example of a glass via rod 100 comprising glass vias 106 and filled non-conductive vias 134. In some embodiments, the filled non-conductive vias 134 can be filled with a thermally conductive fill material. The thermally conductive filler material can act as a thermal conduction path that transfers heat from the device on one side of the glass via bar 100 to the other side. In some embodiments, the filled non-conductive vias 134 can be filled with a sealing via to prevent liquid or gas from passing through the vias. In some embodiments, the filled non-conductive vias 134 can be filled with a filler material that provides mechanical support and/or stress relief to the glass via bars 100. In some embodiments (not shown), the glass via bar 100 can comprise a glass via that is conformally coated with a conductive material. The interior of the glass perforations may remain unfilled or filled with a non-conductive material as described above.In some embodiments, the glass via bar 100 is provided with conductive wiring on one or more of its faces. In some embodiments, the glass via bar 100 is provided with one or more integrated passive components. The integrated passive components are passive components disposed on one or more of the faces or embedded within the glass via bars 100. Figure 2 shows an example of an isometric schematic illustration of a portion of a glass via rod comprising passive components. 
The glass via bar 100 includes a top surface 138a and a glass via 106 extending through the glass via bar 100. Passive components including capacitor 144 and resistor 142 can be formed on top surface 138a. Electroplated conductive wiring 140 may also be formed on surface 138a. In some embodiments, a plurality of glass vias 106 can be connected to form a solenoid type inductor or a circular or elongated ring type inductor. In the example of FIG. 2, a portion of a solenoid inductor 146 formed by joining a plurality of glass vias 106 on a top surface 138a and a bottom surface (not shown) is depicted. As illustrated, to form the solenoid inductor 146, the glass perforations are joined to diagonally adjacent glass perforations on the top surface 138a of the glass via bar while the glass perforations are attached to the bottom surface of the glass via bar. Sideways adjacent to the through hole, and vice versa.A manufacturing process for manufacturing a glass via bar is described below with respect to FIGS. 3 to 5G. In some embodiments, the glass via bars can be fabricated in a batch grading process. The batch grading process simultaneously forms a plurality of glass via bars. Figure 3 shows an example of a flow diagram illustrating a batch manufacturing process for a glass via bar. Process 200 begins at block 202 where a passive component for a plurality of glass via bars is formed on one or more surfaces of a glass substrate. The glass substrate can be a panel, sub-panel, wafer, sub-wafer or other suitable type of substrate. For example, in some embodiments, the glass substrate can be a glass sheet or panel having an area of about 4 square meters or more. In some other embodiments, the glass substrate can be a rounded substrate having a diameter of 100 mm, 150 mm, or other suitable diameter. The thickness of the glass substrate may be the same as the height of the glass via bar to be made of a glass substrate. Example thicknesses range from about 300 microns to about 700 microns. In some embodiments, if, for example, the glass substrate can be thinned in a subsequent process, the thickness of the glass substrate can be greater than the thickness of the glass via bar.The glass substrate can be or comprise, for example, borosilicate glass, soda lime glass, quartz, Pyrex or other suitable glass material. In some embodiments, the glass substrate is a borosilicate glass substrate that can be ablated by laser radiation. In some embodiments, the glass substrate can have a coefficient of thermal expansion (CTE) that matches another component of the package or a CTE between the CTEs of two or more components of the package. For example, a glass substrate can have a relatively low CTE of about 3.4 parts per degree Celsius (ppm/° C.) matching silicon, about 10 parts per degree Celsius of a matching PCB or molding compound. A relatively high CTE, or a CTE between these components. In some embodiments, the glass substrate is a photo-patternable glass substrate. The photopatternable glass is discussed further below with respect to Figure 4.Forming a passive component on one or more surfaces of the glass substrate can include one or more thin film deposition and etching operations. For example, one or more of the metal, dielectric, and passivation layers can be deposited and patterned to form a passive component. Examples of deposition techniques may include PVD, CVD, atomic layer deposition (ALD), electrolytic plating, and electroless plating. 
In some embodiments, the passive component includes one or more capacitors, inductors, and/or resistors. In some embodiments, a passive component can include a variable capacitor, a varactor, a filter, a transformer, a coupler, a directional coupler, a power splitter, a transmission line, a waveguide, and/or an antenna.Process 200 continues at block 204 where a glass via for a plurality of glass via bars is formed in a glass substrate. Block 204 may involve a sand blasting process, a laser ablation process, or a photo patterning process. Process 200 continues at block 206 where the glass vias are metallized to form glass vias. Block 206 can include, for example, an electroplating process such as electroless plating or electroplating. In some embodiments, the glass perforations can be filled with a metal. In some other embodiments, the inner surface of the glass perforations can be coated with a metal, wherein the remainder of the glass perforations remain unfilled or filled with a conductive material (e.g., metal) or a non-conductive material (e.g., a dielectric). Block 206 can also include forming one or more wires on one or more surfaces of the glass substrate, for example, to electrically connect the plurality of glass vias. In some embodiments, block 206 can include filling the glass via with a conductive paste.In some embodiments, after block 204, the glass vias can be connected to one or more surface passive components and/or interconnected to each other to form, for example, one or more solenoid-type inductors. In some embodiments, some of the glass vias formed in block 206 or all of the surface passive components formed in block 202 may remain unattached after block 206. In some such embodiments, glass vias and passive components can be attached during subsequent processing (e.g., during a PoP process).Process 200 continues at block 208 where the glass substrate is singulated to form a plurality of glass via bars, each glass via bar comprising a glass via and, in the case of formation, a surface passive component. Cutting can include forming a cutting track along which the glass substrate will be cut, and cutting along the cutting track with a dicing saw or laser. According to various embodiments, the lateral size of the glass via bars formed in block 208 can be between about 1 mm and 15 mm, for example between about 1 mm and 6 mm.Figure 4 shows an example of a flow diagram illustrating a fabrication process for a glass via rod using photopatternable glass. Figures 5A through 5G show examples of cross-sectional schematic illustrations of various stages in a method of making a glass through-hole rod. Turning first to Figure 4, process 250 begins at block 252 where the glass perforations in the photopatternable glass are patterned. In some embodiments, "patterning" can refer to changing the chemical or crystalline structure of a photopatternable glass to form altered regions and unaltered regions. The photopatternable glass may comprise a silicon oxide/lithium oxide (SiO 2 /Li 2 O) based glass doped with one or more noble metals such as silver (Ag) and cerium (Ce). Electromagnetic radiation and heat treatment of light can be used to pattern a glass to produce a chemical reaction which reveals that the glass can be etched with an etchant such as hydrofluoric acid (HF). Examples of photopatternable glasses include APEX(TM) glass photodefinable glass wafers from Life BioScience, Inc. and Forturan(TM) photosensitive glass from Schott Glass Corporation. 
Patterning of the photopatternable glass can include masking the glass to define the glass vias and exposing the unmasked portions of the glass body to ultraviolet (UV) light and thermally annealing. An example of the masking material may comprise quartz-chromium. UV exposure can change the chemical composition of the unmasked portion such that it has a high etch selectivity for certain etchants. For example, in some embodiments, the masked glass is exposed to UV light having a wavelength between 280 nanometers and 330 nanometers. Exposure to UV light in this range can cause photo-oxidation of Ce 3+ ions to Ce 4+ ions, thereby releasing electrons. Ag + ions can trap these free electrons to form Ag atoms. In some embodiments, a two-stage UV exposure post thermal anneal can be performed. In the first stage, Ag atoms can coalesce to form Ag nanoclusters. In the second stage, crystalline lithium silicate (Li s SiO 3 ) is formed around the Ag nanoclusters. The masked areas of the glass are chemically invariant and remain amorphous. The thermal annealing temperature may range from about 500 ° C to about 600 ° C, with the second stage being performed at a higher temperature than the first stage. In a subsequent process, for example, in block 256, the crystalline portion of the glass is etched while maintaining the amorphous portion of the glass body substantially unetched.The process described above is an example of patterning photopatternable glass, and other processes are possible. In some embodiments, for example, in addition to or in lieu of the ingredients described above, the glass may comprise Al, Cu, boron (B), potassium (K), sodium (Na), zinc ( Zn), calcium (Ca), antimony (Sb), arsenic (As), gold (Au), magnesium (Mg), barium (Ba), lead (Pb) or other additives. In some embodiments, the photopatternable glass can include various additives to modify the melting point, increase chemical resistance, reduce thermal expansion, modify elasticity, modify refractive index or other optical properties, or otherwise modify the properties of the glass. For example, potassium oxide (K 2 O) and/or sodium oxide (Na 2 O) can be used to lower the melting point of photopatternable glass and/or increase chemical resistance and zinc oxide (ZnO) or calcium oxide (CaO). Can be used to improve chemical resistance or reduce thermal expansion. In some embodiments, one or more other electron donors may be used in addition to or in place of Ce. In some embodiments, the photopatternable glass can comprise one or more oxygen donors.Example UV doses can range from 0.1 Joules per square centimeter to over 50 Joules per square centimeter. The UV wavelength and dose can vary depending on the composition and size of the photopatternable glass. The UV induced chemical reaction may also vary depending on the chemical composition of the photopatternable glass, as the subsequent thermally induced reaction may vary depending on the chemical composition of the photopatternable glass. Moreover, in some embodiments, these reactions can be driven by energy sources other than UV radiation and thermal energy, including but not limited to other types of electromagnetic radiation. In general, treating the unmasked regions of the photopatternable glass with one or more types of energy produces a crystalline composition that produces, for example, polycrystalline ceramics. Conversion to crystalline ceramics allows etching of light to pattern the glass.FIG. 
5A shows an example of a cross-sectional schematic illustration of a photopatternable glass prior to patterning. Glass substrate 300 is a photopatternable glass and can be, for example, a SiO 2 /Li 2 O based glass as described above, and can have a thickness of, for example, between about 300 microns and 700 microns. In some embodiments, wherein the glass via bars are formed as part of a batch process as described above with respect to Figure 3, the depicted portion of the glass substrate 300 can be a repeating unit of a larger glass panel or wafer. FIG. 5B shows an example of a cross-sectional schematic illustration of a photo-patternable glass after patterning (eg, after block 252 in FIG. 4). Glass substrate 300 includes a crystalline portion 302 that extends through the thickness of glass substrate 300 and will eventually be etched to form glass vias. In the example of Figure 5B, the crystalline portion 302 has a slightly angled profile. According to various embodiments, the crystalline portion 302 and thus the glass perforations can have substantially straight sidewalls having an angle in the range of from about 80 to 90 from the top surface of the photopatternable glass.Returning to Figure 4, process 250 continues at block 254 where one or more passive components are formed on the surface of the photo-patternable glass. As described above with respect to Figure 3, forming one or more passive components can include thin film deposition and patterning operations. Figure 5C shows an example of a cross-sectional schematic illustration of a photopatternable glass comprising a capacitor formed on the surface of a photo-patternable glass. Capacitor 144 includes metal layers 306 and 308 and dielectric layer 310. Dielectric layer 310 and passivation layer 312 cover the amorphous portion of glass substrate 300. The contact points to each of the metal layers 306 and 308 are patterned. Examples of the metal layer may include, but are not limited to, Al, Mo, Cu, and alloys, and combinations thereof, for example, aluminum bismuth (A1Nd) and aluminum copper (AlCu). Examples of the dielectric material may include, but are not limited to, SiO 2 , silicon oxynitride, zirconium oxide (ZrO), aluminum oxide (AlO x ) containing Al 2 O 3 , and a layer piezoelectric medium.Returning to Figure 4, process 250 continues at block 256 where the etched light can pattern the glass to form a glass via. Any etch chemistry that has an etch selectivity to the crystalline portion 302 of the glass substrate 300 that is substantially higher than the amorphous portion of the glass substrate 300 can be used, including wet and dry etch. In one example, for wet etching, a 10% HF solution can be used. In another example, a fluorine-based dry etching using a chemical such as XeF 2 , tetrafluoromethane (CF 4 ), or sulfur hexachloride (SF 6 ) may be used. The etchant exposure time is long enough to etch the photopatternable glass through its thickness to form a glass via. In some embodiments, the post-etch etch is followed by post-etch bake.Figure 5D shows an example of a cross-sectional schematic illustration of a glass substrate after etching a glass via. The amorphous portion of the glass substrate 300 remains, with the crystalline portion being etched away to form the glass vias 132. In an alternate embodiment, the glass vias 132 may be formed by laser ablation of a laser ablatable glass substrate. 
The glass perforations 132 include an inner surface 320, which is also referred to as a sidewall surface.Process 250 continues at block 258 where the glass vias 132 are filled. In some embodiments, block 258 can include forming a seed layer on the inner surface of the glass perforations, followed by electroplating to fill the glass perforations. The seed layer can be deposited by processes such as PVD, CVD, ALD or electroless plating processes. In some embodiments, the seed layer may comprise titanium nitride (TiN), tantalum-titanium nitride (Ru-TiN), platinum (Pt), palladium (Pd), Au, Ag, Cu, nickel (Ni), Mo or tungsten (W). In some embodiments, the glass perforations are filled by electroplating. Examples of the plated metal may include Cu, Ni, Au, and Pd, and alloys and combinations thereof. In some implementations, block 250 can further include patterning one or more of the top and bottom surfaces of the glass to electrically isolate the glass vias and/or passive components to form glass vias and/or passive components Wiring and contacts interconnect a plurality of glass vias to form a solenoid type inductor, and the like.Figure 5E shows an example of a cross-sectional schematic illustration of a glass substrate after the glass perforated sidewalls and surface metallization. The exposed surface of the structure of Figure 5E (the inner surface 320 comprising the glass vias 132, the exposed surfaces of the metal layers 306 and 308, and the passivation layer 312) is conformally coated with a seed layer 314. Figure 5F shows an example of a cross-sectional schematic illustration of a glass substrate after electroplating to fill the glass perforations. Electroplated metal 316 fills the glass vias 132 (shown in Figure 5E) and covers the conformal seed layer 314. As described above, the electroplated metal 316 can be patterned in subsequent operations, as shown in Figure 5G.Figure 5G shows an example of a cross-sectional schematic illustration of a glass via rod comprising glass perforations and passive components. The glass via bar 100 includes a glass via 106 formed in the glass substrate 300 and a capacitor 144 formed on the surface of the glass substrate 300. The glass via bar 100 also includes electroplated contacts 318 to the metal layers 306 and 308 of the capacitor 144. In some embodiments, the glass via bar 100 can be configured to attach to a printed circuit board (PCB) or other organic substrate at the plated region 328. In some embodiments, the glass via bars 100 can be attached to a PCB or other organic substrate by soldering with solder balls. In some embodiments, the glass via bar 100 can be attached to a PCB or other organic substrate by solder or solderable metal disposed on the tip of the glass via 106.In some embodiments (not shown), conformal metal can be electroplated or otherwise formed on conformal seed layer 314. The interior of the glass perforations 132 may remain unfilled or filled with a non-conductive material (as described above with reference to Figure 1C). Also, in some other embodiments (not shown), the glass vias 106 may be formed by filling the glass vias 132 with a conductive paste such as copper (Cu) or Ag conductive paste. According to various embodiments, a conformal conductive layer such as a conformal seed layer 314 may or may not be formed prior to filling the glass via 132 with a conductive paste.In some embodiments, an integrated capacitor or other passive component can be formed in the formation or hole formed in the glass via rod. 
For example, as mentioned above with reference to Figure 2, a solenoid type inductor can be formed by a plurality of glass perforations connecting the glass via bars. Figure 5H shows an example of a glass via rod comprising an integrated capacitor formed in a trench. The glass via bar 100 includes glass vias 106 and capacitors 144 formed in a glass substrate 300. The glass via 106 includes a conformal conductive film 330. Capacitor 144 is formed in trench 334 formed in glass substrate 300 and includes metal layers 306 and 308 and dielectric layer 310. The trenches 334 may be formed in the glass substrate 300 by photo patterning or laser ablation as described above and may also be referred to as blind vias. In some embodiments (not shown), capacitors or other passive components may be formed in the glass vias in addition to or instead of the trenches. Passive components in trenches or holes in the substrate can be formed using deposition processes such as PVD, CVD and ALD, electroplating processes, and etching processes.An example of a method of forming a metal-insulator-metal (MIM) capacitor on the inner surface of a glass substrate is described on November 27, 2012, entitled "Adhesive Metal Nitride on Glass and Related Methods (Adhesive Metal Nitride on In the U.S. Patent Application Serial No. 13/686,620, the entire disclosure of which is incorporated herein by reference. As described herein, forming a MIM capacitor can involve forming a metal nitride layer that acts as an electrode layer for the MIM capacitor and/or an adhesion or diffusion barrier for the MIM capacitor. For example, in some embodiments, an adherent metal nitride layer can be formed on the glass surface of the trench formed in the glass substrate. The adhesion metal nitride layer can serve as a seed layer for the subsequently deposited film. In some embodiments, a dielectric layer can be formed over the adhesion metal nitride layer such that it substantially conforms to the adhesion metal nitride layer over a portion of the trench and surface of the glass substrate. An outer metal nitride layer can be formed over the dielectric layer such that it substantially conforms to the dielectric layer over and within a portion of the trench of the glass substrate. The adhesion metal nitride layer, the dielectric layer and the outer metal nitride layer may form part of a MIM capacitor in the trench, wherein the metal nitride layer acts as an electrode of the MIM capacitor. Examples of the metal nitride layer include a TiN and a tantalum nitride (TaN) layer. In some embodiments, each of the adhesion metal nitride layer, the dielectric layer, and the outer metal nitride layer can be formed by ALD. In some embodiments, a metal layer such as a Cu layer can be formed between each of the dielectric layer and the metal nitride layer. For example, the metal layer can be formed using electrodeless electroplating and/or electrolytic plating techniques. The metal layer, the adhesion metal nitride layer, the outer metal nitride layer, and the dielectric layer can form part of a MIM capacitor in the trench, wherein the metal layer acts as an electrode of the MIM capacitor. The outer metal nitride layer can act as a diffusion barrier to reduce migration of metal atoms into the dielectric layer.As indicated above, in some embodiments, the glass via bars described herein can be part of a stacked package (PoP). 
The PoP process involves encapsulating a plurality of dies in a separate package, and then packaging the individual packages together by stacking the stacked packages. Two or more packaged dies containing logic, memory, analog, RF, and EMS dies can be packaged together in a PoP. For example, in some implementations, a logic die can be packaged with a memory die.A PoP contains one or more individually packaged dies stacked together. Figure 6 shows an example of a cross-sectional schematic illustration of a PoP comprising a glass via rod. In the figure and associated description, reference is made to a PoP comprising two packages, the two packages being a bottom package and an upper package. However, a PoP can include any number of stacked individual packaged dies, including three or more dies.Figure 6 shows an example of a schematic cross-sectional illustration of a PoP comprising a glass via rod. The PoP 440 includes a bottom package 442 that is vertically integrated with the upper package 444. The PoP 440 can be further mounted on an electronic device PCB (not illustrated) via the inter-level interconnects 120. An example of an electronic device PCB is a PCB for a mobile phone. In the example of FIG. 6, bottom package 442 can be a logical package that includes one or more logic dies and upper package 444 can be a memory package that includes one or more memory dies. However, each of the packages in the PoP can independently comprise any suitable type of die, using any suitable stacked arrangement. In some embodiments using logic and memory packages, the logic package is a bottom package because it typically uses a higher density connection to the underlying PCB.The bottom package 442 includes a mold structure 432 and a bottom package substrate 448. The mold structure 432 has a top surface 464a and a bottom surface 464b and includes a molding compound 454 and components embedded within the molding compound 454; in the example of FIG. 6, the components include a bottom package die 446 and a glass via bar 100. Each of the glass via bars 100 includes a glass via 106 that extends through the thickness of the glass via bar 100 and provides an electrical connection from the top surface 464a to the bottom surface 464b of the mold structure 432. Although the mold structure 432 in the example of Figure 6 comprises a single die, any number of dies may be included in accordance with various embodiments. In some embodiments, bottom package die 446 is a logic die, such as an application processor for a smart phone, digital camera, or other electronic device.The bottom package substrate 448 can be an organic substrate, such as a polymeric substrate or PCB, which can include conductive paths (not shown) and contact pads (not shown). The glass vias 106 may be electrically connected to the bottom package die 446 by electrical routing on the bottom surface 464b of the mold structure 432 and/or electrical wiring in the logic package substrate 448 or onto the logic package substrate 448. Conductive paths and contact pads in the bottom package substrate 448 or on the bottom package substrate 448 can provide electrical connections from the bottom package 442 to the inter-level interconnects 120. The glass vias 106 can provide electrical connections to the interlevel interconnects 118 that connect the bottom package 442 to the upper package 444. 
In some embodiments, a redistribution layer (not shown) can be included on top surface 464a of mold structure 432 or attached to top surface 464a to provide an electrical connection to inter-level interconnect 118. In the example of FIG. 6, bottom package die 446 and glass vias 106 are electrically connected to bottom package substrate 448 by flip chip attachment, which in turn provides electrical connections to interlevel interconnects 120. . If present, the redistribution layer can be formed directly on the mold structure 432 with electrical connections to the glass vias 106 embedded in the mold structure 432, or via solder balls disposed between the redistribution layer and the mold structure 432 or Other electrical attachments are electrically connected to the glass perforations 106.The upper package 444 includes a mold structure 482 and an upper package substrate 488. Upper package substrate 488 can be an organic substrate, such as a polymeric substrate or PCB. Mold structure 482 includes molding compound 494 and components embedded within molding compound 494; in the example of Figure 6, these components include upper package die 445. For example, upper package die 445 can comprise a single memory die or a stack of multiple memory dies. In the example of FIG. 6, upper package die 445 is electrically coupled to upper package substrate 488 by flip chip attachment, which in turn provides electrical connections to inter-level interconnects 118. In some other implementations, one or more dies may be wire bonded or otherwise connected to the upper package substrate 448.It should be noted that the size, spacing, and placement of inter-level interconnects 118 and inter-level interconnects 120, as well as the size, spacing, and placement of flip-chip attachments of upper package dies 445, bottom package dies 446, and glass vias 100. Can be changed as appropriate. For example, the size and/or spacing of the solder balls connecting the glass via bars 100 to the bottom package substrate 448 can be the same as the interlevel interconnects 118.In some embodiments, the glass via bar 100 can include one or more integrated capacitors (not shown) as described above with reference to Figures 4 through 5H. Because the capacitor is integrated with the glass via bar 100, the glass via bar 100 and capacitor can be placed closer to the bottom package die 446 than would be the case if the capacitor were a discrete component, thereby reducing path length and increasing efficiency. In addition to reducing the path length, the glass via bar 100 can also reduce the footprint of the bottom package 442 and the footprint of the PoP 440. The integrated capacitor on the glass via bar 100 can be connected to one or more of the glass vias or not. In embodiments where the integrated capacitor is not connected to any glass vias, the glass via bars 100 can be configurable and can form electrical connections during assembly of the PoP or prior to assembly of the PoP as needed. The configurable glass through-hole bar is described in U.S. Patent Application Serial No. 13/566, 925, filed on Aug. 3, 2012, entitled, " Passives Via Bar" (Attorney Docket No. 113279/QUALP125US) The application is incorporated herein by reference. In some other implementations, instead of passive components integrated on the glass perforated rod 100 or in addition to passive components integrated on the glass perforated rod 100, the bottom package 442 may also include one or more capacitors or other passives Component.In the example of FIG. 
6, only the bottom package contains the glass via bars 100. However, according to various embodiments, any of the packages in the PoP may comprise a glass via bar. For example, the upper package 444 can include a glass via bar to connect to a third package (not shown) stacked over the upper package 444.As mentioned above, in some implementations, the PoP can include packaged memory dies stacked with packaged logic dies. In some such implementations, the integrated capacitor and/or other passive components allow the via interconnect to be located closer to the logic than if the discrete passive component were between the logic die and the via interconnect. Die. In some embodiments, the footprint of the logic package can be reduced by the increased density of via interconnects enabled by the glass via bars. For example, the footprint of the logic package can be from about 5% to about 20% larger than the footprint of the glass via bars in the package. For example, in some embodiments, the logic package can have a lateral dimension of 10 millimeters or less. Other types of packages can similarly scale down. The footprint of the memory package can be reduced by including a stack of memory dies that are attached to the memory package substrate by flip chip attachment rather than by wire bonding. Additionally, in some embodiments, a stacked memory architecture including through silicon vias (TSVs) can be used to reduce memory package footprint. For example, a PoP can include a wide I/O memory die.7 through 11 show examples of flow diagrams illustrating a PoP process using a glass via bar. Once two or more discrete packages to be incorporated into a PoP are formed, they can be stacked to form a PoP. Figures 7 and 8 show an example of stacking two packages, two packages being a bottom package and an upper package. In the example, the bottom package contains a glass via rod as described above with respect to Figure 6. However, in addition to or in lieu of the bottom package, the upper package may comprise a glass via bar.Turning first to FIG. 7, process 500 begins at block 502 where a bottom package comprising a glass via bar and a bottom package substrate is mounted to an electronic device printed PCB, for example, for a mobile phone, tablet computer, or computer PCB. The formation of a bottom package comprising a glass via bar is further described below with respect to Figures 9-11. Mounting the bottom package on the electronic device PCB can involve positioning the bottom package such that inter-level interconnects (e.g., solder balls) on the bottom surface thereof align and contact with corresponding contact pads on the electronic device PCB. The process 500 continues at block 504 where the upper package including the upper package substrate is mounted to the bottom package. Block 504 may involve positioning the upper package such that the interlevel interconnects on the bottom surface thereof align and contact with corresponding contacts on the bottom package. According to various embodiments, the contacts may comprise glass perforations or contact pads that are electrically connected to the glass perforations. One or additional packages may then be included in the stack by mounting on a previously installed package (not shown). 
Once all of the packages are stacked in this manner, process 500 continues at block 506 where the solder is reflowed to simultaneously attach the bottom package to the electronic device PCB and the upper package to the bottom package.Figure 8 shows an example of stacking two packages, a bottom package and an upper package, wherein the upper package is attached to the bottom package prior to attachment to the electronic device PCB. Process 520 begins at block 522 where the upper package is mounted to the bottom package as described above with respect to block 504 of FIG. One or additional packages may then be included in the stack by mounting on a previously installed package (not shown). Once all packages have been stacked in this manner, process 520 continues at block 524 where solder is reflowed to attach the upper package to the bottom package. If additional packages are stacked on the upper package, they can all be joined to other packages in the stack during block 524. During blocks 522 and 524, the bottom package can be supported by a carrier substrate or clamp. Process 520 may continue at block 526 by mounting a PoP (ie, a stacked package) to an electronic device PCB in an optional operation. If block 526 is performed, block 526 may involve positioning the PoP such that solder balls or other inter-level interconnects on the bottom surface of the bottom package align and contact with corresponding contact pads on the electronic device PCB. Process 520 can then continue at block 528 where a second reflow operation is performed in an optional operation to attach the PoP to the electronic device PCB.According to various embodiments, the reflow process of attaching a PoP to an electronic device PCB may involve a single or multiple reflow operations to attach the PoP to an appropriate location on the electronic device PBC. If multiple reflow processes are used, in some embodiments, a higher temperature solder can be used in the first reflow operation followed by a reflow operation using a lower temperature solder. In some embodiments, a solder composed of an intermetallic composition that does not melt during the second reflow operation can be used in the first reflow operation.9 through 11 show examples of flow diagrams illustrating a process for forming a bottom package of a PoP. First, turning to Figure 9, process 540 begins at block 542 where the die is placed on a bottom package substrate. Examples of dies include, but are not limited to, application processors. As further described below with respect to FIG. 10, in some embodiments, the die is tested prior to block 542. This situation allows only known good dies to be incorporated into the bottom package and PoP. Process 500 continues at block 544 where one or more glass via bars are placed on the bottom package substrate. The glass via bar can include one or more capacitors or other passive components on one or more surfaces. For configurable via bars, various passive components can be connected to each other or to one or more glass vias before or after block 544. As further described with respect to Figure 11, in some embodiments, the glass via bars are tested prior to block 504. This situation allows only known good via bars to be incorporated into the bottom package and PoP. Once the die and one or more glass via bars are placed, they are attached to the bottom package substrate at block 546. 
The bottom die and one or more glass via bars can be simultaneously attached to the logic package substrate, for example, by solder reflow. Process 540 continues at block 548 where the molding compound is applied and the molding compound is cured. Additional operations such as solder ball mounting, reflow soldering, package singulation, package inspection, and testing can then be performed. Once the bottom package is formed, the bottom package can be stacked with one or more additional packaged dies to form a PoP, as described above with reference to Figures 7 and 8.Figure 10 shows an example of a flow diagram illustrating a process for testing a glass via rod for a bottom package of a PoP. Process 560 begins at block 562 where a via rod formed in a glass substrate is tested, as described above with respect to Figures 4 through 5G. Testing can involve one or more wafer probing and optical inspection operations. If present, both glass vias and integrated passive components can be tested. The via bars that did not pass the test were identified and the via bars were not used in the bottom package. Process 560 continues at block 564 where the glass substrate is singulated to form a plurality of individual glass via bars. Process 560 continues at block 566 where the glass via bars to be placed in the bottom package are inspected. In this way, only known good glass via bars are encapsulated.Dies to be incorporated into the bottom or upper package for PoP can be similarly tested before and/or after package singulation. Moreover, in addition to or in lieu of one or more such test operations, the bottom package can be tested prior to packaging the bottom package in a PoP. Figure 11 shows an example of a flow diagram illustrating a process for testing the bottom package of a PoP. Process 580 begins at block 582 where the die and glass via bars are packaged in a bottom package. The die and glass via bars are packaged in a bottom package as described above with reference to FIG. Process 580 may continue at block 583 and block 584, where the bottom package is singulated, and at block 584, the bottom package is tested. Testing can involve one or more wafer probing and optical inspection operations. Detecting the bottom package is easier than detecting unpackaged dies, due to the larger size of the package. For example, a 300 micron probe may be sufficient to test the package, while a 50 micron probe may be sufficient to detect the die. Identify packages that have not passed the test and do not use the package in PoP. Process 580 continues at block 586 where the bottom package is stacked with one or more additional packages to form a PoP. The one or more additional packages can be similarly tested. In this way, only known good packages are incorporated into the PoP.In some embodiments, the glass via bars can be included as part of a display device, or in a package that includes a display device or is included in a display device. 12A and 12B show an example of a system block diagram illustrating display device 40. Display device 40 can be, for example, a smart phone, a cellular or a mobile phone. However, the same components of display device 40 or slight variations thereof also illustrate various types of display devices, such as televisions, tablet computers, electronic readers, handheld devices, and portable media players.The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46. 
The outer casing 41 may be formed of any of a variety of manufacturing processes including injection molding and vacuum forming. Additionally, the outer casing 41 can be made from any of a variety of materials including, but not limited to, plastic, metal, glass, rubber, and ceramic, or combinations thereof. The outer casing 41 can include a removable portion (not shown) that can be interchanged with other removable portions of different colors or containing different logos, pictures or symbols.Display 30 can be any of a variety of displays including bistable or analog displays, as described herein. Display 30 can also be configured to include a flat panel display such as a plasma, EL, OLED, STN LCD or TFT LCD or a non-flat panel display such as a CRT or other tubular device. Additionally, display 30 can include an interferometric modulator display as described herein.The components of display device 40 are schematically illustrated in Figure 12B. Display device 40 includes a housing 41 and can include additional components that are at least partially enclosed therein. For example, display device 40 includes a network interface 27 that includes an antenna 43 coupled to transceiver 47. Transceiver 47 is coupled to processor 21, which is coupled to conditioning hardware 52. Processor 21 may be one of the dies in the PoP stack as described above. Adjustment hardware 52 can be configured to condition the signal (e.g., to filter the signal). Adjustment hardware 52 is coupled to speaker 45 and microphone 46. Processor 21 is also coupled to input device 48 and driver controller 29. Driver controller 29 is coupled to frame buffer 28 and to array driver 22, which in turn is coupled to display array 30. In some embodiments, power supply 50 can provide power to substantially all of the components in a particular display device 40 design.Network interface 27 includes an antenna 43 and a transceiver 47 such that display device 40 can communicate with one or more devices via a network. Network interface 27 may also have some processing power to mitigate, for example, data processing requirements for processor 21. Antenna 43 can transmit and receive signals. In some embodiments, antenna 43 is transmitted in accordance with the IEEE 16.11 standard (including IEEE 16.11 (a), (b) or (g)) or the IEEE 802.11 standard (including IEEE 802.11a, b, g, n) and other implementations thereof. And receiving RF signals. In some other implementations, antenna 43 transmits and receives RF signals in accordance with the Bluetooth standard. In the case of a cellular telephone, antenna 43 is designed to receive Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Global System for Mobile Communications (GSM), GSM/General Packet Radio. Services (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1xEV-DO, EV-DO Revision A, EV- DO Revision B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS or other known signals used to communicate within a wireless network (eg, systems utilizing 3G or 4G technology). The transceiver 47 can pre-process the signals received from the antenna 43 such that the signals can be received and further manipulated by the processor 21. 
The transceiver 47 can also process the signals received from the processor 21 such that the signals can be transmitted from the display device 40 via the antenna 43.In some embodiments, the transceiver 47 can be replaced with a receiver. Additionally, in some embodiments, the network interface 27 can be replaced with an image source that can store or generate image data to be transmitted to the processor 21. The processor 21 can control the overall operation of the display device 40. Processor 21 receives data (e. g., compressed image data) from network interface 27 or an image source and processes the data into raw image data or into a format that is easily processed into raw image data. Processor 21 can send the processed data to driver controller 29 or frame buffer 28 for storage. Raw data generally refers to information that identifies the characteristics of an image at each location within an image. For example, such image characteristics can include color, saturation, and gray levels.Processor 21 may include a microcontroller, CPU or logic unit to control the operation of display device 40. The conditioning hardware 52 can include amplifiers and filters for transmitting signals to the speaker 45 and for receiving signals from the microphone 46. Adjustment hardware 52 can be a discrete component within display device 40 or can be incorporated into processor 21 or other components.The driver controller 29 can take raw image data generated by the processor 21 directly from the processor 21 or from the frame buffer 28 and can suitably reformat the raw image data for high speed transmission to the array driver 22. In some embodiments, the driver controller 29 can reformat the raw image data into a light-like grid-like data stream such that it has a temporal order suitable for scanning across the display array 30. The drive controller 29 then sends the formatted information to the array driver 22. Although driver controller 29, such as an LCD controller, is often associated with system processor 21 as a stand-alone integrated circuit (IC), such controllers can be implemented in a number of ways. For example, the controller can be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated with the array driver 22 in hardware.The array driver 22 can receive the formatted information from the driver controller 29 and can reformat the video data into a set of parallel waveforms that are applied to the xy matrix of pixels from the display multiple times per second multiple times. And sometimes thousands (or thousands) of leads.In some embodiments, driver controller 29, array driver 22, and display array 30 are suitable for any of the types of displays described herein. For example, the driver controller 29 can be a conventional display controller or a bi-stable display controller (e.g., an IMOD controller). Additionally, array driver 22 can be a conventional driver or a bi-stable display driver (e.g., an IMOD display driver). In addition, display array 30 can be a conventional display array or a bi-stable display array (e.g., a display including an IMOD array). In some embodiments, the driver controller 29 can be integrated with the array driver 22. Such an implementation can be used in highly integrated systems, such as mobile phones, portable electronic devices, watches, or small area displays.In some embodiments, input device 48 can be configured to allow, for example, a user to control the operation of display device 40. 
Input device 48 may include a keypad such as a QWERTY keyboard or telephone keypad, buttons, switches, rocker arms, touch sensitive screens, touch sensitive screens integrated with display array 30, or pressure sensitive or heat sensitive diaphragms. Microphone 46 can be configured as an input device for display device 40. In some embodiments, voice commands through the microphone 46 can be used to control the operation of the display device 40.Power supply 50 can include a variety of energy storage devices. For example, power supply 50 can be a rechargeable battery, such as a nickel cadmium battery or a lithium ion battery. In embodiments where a rechargeable battery is used, the rechargeable battery can be charged using power from, for example, a wall socket or photovoltaic device or array. Alternatively, the rechargeable battery can be charged wirelessly. The power supply 50 can also be a renewable energy source, a capacitor or a solar cell, including a plastic solar cell or a solar cell lacquer. The power supply 50 can also be configured to receive power from a wall outlet.In some embodiments, control programmability resides in a drive controller 29 that can be located at several locations in an electronic display system. In some other implementations, control programmability resides in array driver 22. The optimizations described above can be implemented in any number of hardware and/or software components and in various configurations.In various implementations of display device 40, antenna 43, transceiver 47, processor 21, driver controller 29, frame buffer 28, speaker 45, microphone 46, array driver 22, power supply 50, and input device 48 One or more of the packages may comprise a package having a semiconductor die embedded in a molded die with a glass via bar or a package in which both the semiconductor die and the glass via bar are bonded to the same substrate. For example, processor 29 can include a PoP package that includes a semiconductor processor die and a glass via bar. As another example, power supply 50 can include a glass via rod configured as a solenoid type inductor.The various illustrative logic, logic blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally in terms of functionality and is illustrated in the various illustrative components, blocks, modules, circuits, and steps described above. Whether such functionality is implemented in hardware or in software depends on the particular application and design constraints imposed on the overall system.Hardware and data processing apparatus for implementing the various illustrative logic, logic blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or executed by: general purpose single or multi-chip processors, digital Signal Processor (DSP), Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or their design to perform the functions described herein Any combination. A general purpose processor can be a microprocessor or any conventional processor, controller, microcontroller, or state machine. 
The processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, a combination of one or more microprocessors and a DSP core, or any other such configuration. In some embodiments, specific steps and methods may be performed by circuitry that is specific to a given function.In one or more aspects, the functions described can be implemented in hardware, digital electronic circuitry, computer software, firmware (including the structures disclosed in this specification and their structural equivalents), or in any combination thereof. Embodiments of the subject matter described in this specification can also be implemented as one or more computer programs (ie, one of computer program instructions) encoded on a computer storage medium for execution or control of operation of the data processing device by the data processing device. Multiple modules). If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer readable medium. The steps of a method or algorithm disclosed herein may be implemented in a processor-executable software module that can reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that can be capable of transmitting a computer program from one location to another. The storage medium can be any available media that can be accessed by a computer. By way of example and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, disk storage or other magnetic storage, or may be used for storage in the form of an instruction or data structure. Program code and any other medium that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. As used herein, magnetic disks and optical disks include compact disks (CDs), laser disks, optical disks, digital audio and video disks (DVDs), flexible disks, and Blu-ray disks, in which disks typically reproduce data magnetically, while disks use lasers to optically Way to regenerate data. Combinations of the above may also be included within the scope of computer readable media. In addition, the operations of the methods or algorithms may reside as any one or any combination or collection of code and instructions on a machine-readable medium and computer readable medium that can be incorporated into a computer program product.Various modifications to the described embodiments of the invention may be readily apparent to those skilled in the Therefore, the claims are not intended to be limited to the embodiments shown herein, but are to be accorded to the broadest scope of the invention, the principles and novel features disclosed herein. The word "exemplary" is used exclusively herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other possibilities or embodiments. 
In addition, those skilled in the art will readily appreciate that the terms "upper" and "lower" are sometimes used in order to facilitate the description of the figures, and indicate the relative position of the orientation corresponding to the map on the appropriately oriented page, and may not reflect The appropriate orientation of the IMOD as implemented.Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can be implemented in various embodiments or in any suitable sub-combination. Moreover, while features may be described above as acting in certain combinations and even initially claimed, in some cases one or more features from the claimed combination may be deleted from the combination and claimed The combination may involve changes in sub-combinations or sub-combinations.Similarly, while the operations are depicted in a particular order in the drawings, those skilled in the art will readily recognize that such To achieve the desired results. In addition, the drawings may schematically depict more than one example process in flow chart form. However, other operations not depicted may be incorporated in the example processes illustrated schematically. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In some cases, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the embodiments described above should not be construed as requiring such separation in all embodiments, and it is understood that the described program components and systems can generally be integrated together in a single software. In the product or packaged into multiple software products. Further, other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. |
Provided are an apparatus, computer program product, and method to perform cache operations in a solid state drive. A cache memory determines whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme. Multiple of the storage addresses in the primary storage map to one address in the cache memory namespace. The cache memory returns to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace. |
An apparatus for cache management operations to perform cache operations for a host system in a solid state drive, comprising:a cache memory comprising non-volatile memory, the cache memory to store data at addresses for a cache memory namespace; anda cache manager to:determine whether data for a requested storage address in a primary storage namespace received from the host system is stored at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; andreturn to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.The apparatus of claim 1, wherein the cache manager is further to:return a message to the host system indicating that the data for the requested storage address is not stored in the cache memory namespace, wherein the message causes the host system to retrieve the data for the requested storage address from the primary storage.The apparatus of claim 2, wherein the cache manager is further to:receive data for the requested storage address from the host system the host system retrieved from the primary storage in response to receiving the message; andstore the received data for the requested storage address in the cache volatile memory namespace.The apparatus of claim 1, wherein the cache manager is further to:receive, from the host system, data for a target storage address of the primary storage namespace to add to the cache memory namespace;determine whether there is an available address in the cache memory namespace for the received data to which the storage address maps according to the cache mapping scheme; andstoring the data for the target storage address in the determined available address in the cache memory namespace.The apparatus of claim 4, wherein the cache manager is further to:return a message to the host system indicating that there is no available space in the cache memory namespace for the data at the target storage address;receive a delete request from the host system to delete data for an eviction storage address in the primary storage different from the target storage address, wherein both the target storage address and the eviction storage address map to a same set of addresses in the cache memory namespace;determine an eviction address in the cache memory namespace having data for the eviction storage address;delete the data at the eviction address in the cache memory namespace; andwrite the data for the target storage address to the eviction address in the cache memory namespace.The apparatus of claim 5, wherein the cache manager is further to:receive a retry write request from the host system after receiving the delete request, wherein the data for the target storage address is written to the eviction address in the cache memory namespace in response to the retry write request.The apparatus of claim 5, wherein the cache manager is further to:receive a request from the host system to read data at the eviction storage address that comprises dirty data; andreturn the dirty data at the eviction address in the cache memory namespace, wherein the delete request to delete the data for the eviction storage address is received after returning the dirty data.The apparatus of claim 1, wherein the cache mapping scheme comprises a set associative 
cache mapping scheme.The apparatus of claim 1, wherein the cache memory comprises a byte addressable write-in-place cache memory.A computer program product that when deployed in a host system couples to a cache memory and a primary storage having a larger address space than a cache memory namespace of the cache memory, wherein the computer program product comprises a computer readable storage medium including program code that when executed by a processor is to:send a read request to the cache memory to read data at a read storage address in the primary storage; andreceive, from the cache memory, data at an address in the cache memory namespace to which the read storage address maps according to a cache mapping scheme.A system coupled to perform cache operations for a host system in a solid state drive for data requests for a primary storage, including:a host system including a processor and a memory including a host cache manager executed by the processor; anda cache memory, including:a storage media in which data is stored at addresses for a cache memory namespace; anda cache memory cache manager to:determine whether data for a requested storage address in the primary storage namespace received from the host system is stored at an address in the cache memory namespace at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; andreturn to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.A method for performing cache operations for a host system in a solid state drive, comprising:determining whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in a cache memory namespace of the cache memory to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; andreturning to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.An apparatus for performing cache operations for a host system in a solid state drive, comprising:means for determining whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in a cache memory namespace of the cache memory to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; andmeans for returning to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.An apparatus comprising means to perform a method as claimed in any preceding claim.Machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as claimed in any preceding claim. |
TECHNICAL FIELDEmbodiments described herein generally relate to an apparatus, computer program product, and method to perform cache operations in a solid state drive.BACKGROUNDHost side cache management operations may consume significant host resources to determine where to place data in a faster storage device, such as the Solid State Drive (SSD), that is directed to an address for a larger, typically, slower storage device, for example a Hard Disk Drive (HDD) or a Hybrid Hard Drive. For a direct mapped cache, the host system applies a hash function to a portion of the address of the data to determine a unique location in the faster storage device at which the data for that address is stored. The host system has to check whether data for a different address other than the read address is not located in the direct mapped cache location in the faster storage device, because multiple addresses from the larger slower storage device map to one address in the faster storage device. If data for the read address not at the direct mapped location in the faster storage device, then there is a read miss and the host needs to retrieve data from the slower storage device.BRIEF DESCRIPTION OF THE DRAWINGSEmbodiments are described by way of example, with reference to the accompanying drawings, which are not drawn to scale, in which like reference numerals refer to similar elements.FIG. 1 illustrates an embodiment of a computing system.FIG. 2 illustrates an embodiment of an address as known in the prior art.FIG. 3 illustrates an embodiment of content at a cache location in a Solid State Drive (SSD).FIG. 4 illustrates an embodiment of operations for a host system and Solid State Drive (SSD) to process a read request.FIGs. 5a and 5b illustrate an embodiment of operations for a host system and Solid State Drive (SSD) to process a write request.FIG. 6 illustrates an embodiment of a read hit flow.FIG. 7 illustrates an embodiment of a read miss flow.FIG. 8 illustrates an embodiment of a flow to evict data from the Solid State Drive (SSD) to make space available for a write.DESCRIPTION OF EMBODIMENTSWith current caching implementations, significant latency is introduced in a read hit process by having the host system perform cache look-up operations, which requires additional software layers in an Input/Output stack of the operating system. Described embodiments implement the caching operations in a Solid State Drive (SSD) operating as the cache storage device instead of additional layers in a storage stack of the host system to reduce cache latency.With described embodiments, Input/Output (I/O) requests are passed directly from the host system to a Solid State Drive (SSD) operating as a cache to primary storage device(s) to bypass a cache software layer in the host system. In this way, a host-side cache algorithm is not involved and cache software on the host side does not check if requested data is available in a host cache. The SSD may use cache management techniques to determine if a requested address in a non-volatile memory in the SSD was previously written. If so, the data is returned from the SSD. If the SSD does not have the requested data, then the SSD returns a message to the host system to have the host access the data from the primary ("slower") storage device. 
Further, the host system may configure a namespace size in the SSD which reflects a namespace of the primary storage device, which may be larger than a non-volatile memory namespace available at the SSD.In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Certain embodiments relate to storage device electronic assemblies. Embodiments include both devices and methods for forming electronic assemblies.FIG. 1 illustrates an embodiment of a host system 100 having a processor 102 including a plurality of processing cores, and a memory 104 including applications 106 to communicate read and write requests to storage addresses in a connected primary storage 108, such as one or more block addressable devices, to a block layer 110 that queues and schedules Input/Output (I/O) and returns data between the applications 106 and a connected SSD 112, which functions as a cache memory for the primary storage 108. The primary storage 108 may comprise a block addressable device and has a larger namespace, or more addresses, than the SSD namespace, which comprises a block addressable device non-volatile memory device, that has less storage capacity than the primary storage 108. The memory 104 includes a SSD driver 114to manage communication between the host system 100 and the SSD 112 over a bus 116 and a primary storage device driver 118 to manage communication between the host system 100 and the primary storage 108 over the bus 116.The memory 104 may further include a host-side cache manager 120 to manage read misses and situations where there is not sufficient space in the SSD 112 for new write data for a storage address for the primary storage 108. The host-side cache manager 120 maintains Least Recently Used (LRU) information 122, such as an LRU list, providing an ordered list of target storage addresses cached in the SSD 112 and a cache 124 of the memory 104 for caching data when there is a read miss or eviction operation. A block layer 126 may queue and schedule I/O requests and returned data for I/O requests between the primary storage 108 and the cache 120.In additional embodiments, cache eviction techniques other than an LRU cache eviction algorithm may be used to determine data to destage from the cache 124 to make room for more recently accessed data. 
In such case, the LRU information 122 would comprise other types of cache information indicating data for storage addresses stored in the cache 124 for other such cache eviction and management algorithms.The SSD 112 includes a controller 128 that includes a non-volatile memory cache manager 130 having a cache mapping scheme, such as set associative cache or other caching algorithm, to map storage addresses for the primary storage 108 to a set of address in the non-volatile memory namespace. For a set associative cache algorithm, data that maps to a set of cache locations may be stored in any cache location of the set to which the storage address for the data to cache maps. The SSD 112 includes a storage media 132 that includes non-volatile memory devices, such as NAND storage dies, in which addressable blocks 300 are maintained to provide cache locations or cache blocks for storage addresses in the primary storage namespace. The storage media 132 may also include future generation nonvolatile memory devices, such as a three dimensional crosspoint memory device, or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), antiferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thiristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. Each set of addresses in the non-volatile memory namespace may comprise a line 134iof block addressable blocks or cache locations.The SSD 112 further maintains a primary storage namespace 131 setting configured by the host system 100 during initialization, such as by the SSD driver 114, to indicate a size of the primary storage 108 namespace. The SSD cache manager 130 maintains a mapping of addresses in the primary storage namespace 131 to the non-volatile memory namespace. In an embodiment, the host uses the Non-Volatile Memory Express (NVMe) standard (http://www.nvmexpress.org) to communicate with the SSD 112 over the PCIe Bus 116.The bus 116 may comprise a Peripheral Component Interconnect (PCI) bus, such as the Peripheral Component Interconnect express (PCIe) bus, or any other custom bus. The host 100 and the SSD 112 may each include bus interface 136, 138 components to communicate on the bus 116.The SSD 112 and devices of the primary storage 108 may comprise the same or different types of block devices, or block addressable devices. In one embodiment, the SSD 112 operating as a cache to the primary storage 108 may comprise a faster access type of device than the devices comprising the primary storage 108. 
For instance, the SSD 112 may comprise byte addressable write-in-place non-volatile memory (for example, 3D crosspoint) and the primary storage 108 may comprise one or more hard disk drives or slower access SSDs comprising block addressable non-volatile memory (for example, NAND).The memory 104 may comprise a suitable memory device, such as a volatile memory device, used for system memory, such as a dynamic random access memory (DRAM).The system 100 may also communicate with Input/Output (I/O) devices, which may comprise input devices (e.g., keyboard, touchscreen, mouse, etc.), display devices, graphics cards, ports, network interfaces, etc.FIG. 2 illustrates an embodiment of the components of an address 200, as known in the prior art, used to address a location in the primary storage 108 namespace, and includes tag bits 202, such as the most significant bits, that uniquely identify the address 200 in a cache set identified by the set bits 204 of the address 200, and block offset bits 206 comprising least significant bits of the address 200 that are used to locate the data in the cache location.FIG. 3 illustrates an embodiment of one of the cache locations 300i, also referred to as a cache block, in the SSD 112, and includes a valid/dirty flags 302 indicating whether the cache location 300ihas valid data and dirty, e.g., updated, data; a tag 304 having tag bits 202 from the address 200 for the primary storage 108; and one or more data bytes 3061, 3062...306bfor the address 200.FIG. 1 shows the cache sets identified by the set bits 204 in a storage address 200 as lines, e.g., 134i, in the cache locations 300, and each cache location is represented as a box in a cache set 134i. Each address 200 in the primary storage 108 may map to any address in a cache set 134i, identified by the set bits 204. When finding a location in the cache set 134ifor data for a primary storage address 200, the tag bits 202 of the primary storage address 200 are stored in the tag 304 of the cache location 300iin the cache set 134i. In set associative cache embodiments, more primary storage addresses 200 would map to a set of addresses than the number of locations in a set.To decrease latency at the SSD cache manager 130, additional accelerators, such as dedicated hardware Application Specific Integrated Circuit (ASIC) support, may be provided to have the SSD cache manager 130 have a faster I/O processing capability than host processor cores 102. This reduces increased latency that may be experienced on a miss path by having the caching operations handled in the SSD cache manager 130.In certain embodiments, the non-volatile memory cache manager 130 may comprise flash based key-value (KV) cache system in a flash translation layer (FTL), with native FTL capabilities such as sparse addressing and dynamic mapping using an indirection map. In such embodiments, the SSD cache manager 130 may take a hash of a storage address 200 to determine a direct mapped cache location 300iin the SSD 112 or a slab when a slab-based space management scheme is used. Alternative key-value (KV) caching techniques may be used to map key values to cache locations 300iin the SSD 112 that store values of data for storage addresses.In further embodiments, the SSD 112 may support a sparse address space.FIG. 4 illustrates an embodiment of operations performed by the SSD 112 and the host system 100 components to process a read request to a storage address in the primary storage 108. 
Control begins with the block layer 110 of the host system 100 receiving a read request to a requested storage address 200Rin the primary storage 108 from an application 106, where 200Rrefers to a read address subject to a read request. The block layer 110 forwards (at block 402) the read request for the requested storage address 200Rto the SSD driver114. The SSD driver 114 sends (at block 404) the read request for the requested storage address 200Rto the SSD 112.Upon the SSD 112 receiving (at block 406) the read request, the SSD cache manager 130 determines whether data for the requested storage address 200Ris stored in the SSD namespace at an address to which the requested storage address 200Rmaps according to a cache mapping scheme. In one embodiment, to determine whether the storage address is stored in the SSD cache location 300i, the SSD cache manager 130 applies (at block 408) a cache mapping scheme, such as a set associative cache scheme, to determine a set of addresses in the SSD 112, such as a line 134i, to which the requested storage address 200 maps. The SSD cache manager 130 determines (at block 410) whether the requested storage address 200Ris located in the determined set of addresses 134i, such as the tag 202 of the address matches the tag 304 in one cache location 300i in the set 134imapping to the storage address 200. If (at block 410) the requested storage address 200 is located in the set 134i, then the SSD cache manager 130 returns (at block 412) the requested data from the SSD address in the set having the data, i.e., matching tag.Upon the SSD driver 114 receiving (at block 414) the data for the requested storage address 200, the SSD driver 114 (at block 416) returns the data to the application 106 initiating the request via the block layer 110. In this way, a read hit is processed without having to go through the host cache 124, which reduces cache latency at the host system 100 because read data is directly returned to the application from the SSD 112 operating as a cache for the primary storage 108.If (at block 410) the requested storage address 200Ris not in the determined set of addresses 134iin the SSD 112, then the SSD cache manager 130 returns (at block 418) a message, e.g., error message, to the host system 100 of a read miss, data not at an address in the SSD device to which it maps. Upon receiving (at block 420) the error of the read miss, the SSD driver 114 sends (at block 422) the error to the host cache manager 120. The host cache manager 120 sends (at block 424) a read request to read data at the read storage address 200Rto the primary storage device driver 118 via the block layer 126 to send to the primary storage 108.Upon the host cache manager 120 receiving (at block 426) the data for the requested storage address 200Rfrom the primary storage 108 via the primary storage device driver 118 and the block layer 126, the host cache manager 120 sends (at block 428) data for the requested storage address 200Rto the application 106 via block layer 110. The host cache manager 120 sends (at block 430) a write request to write the received data to the requested storage address 200Rto the SSD 112, according to the logic of FIGs. 5a and 5b , described below.With the embodiment of FIG. 4 , the latency of a read operation is reduced for a read hit because the host cache manager 120 is bypassed, to avoid having to perform any lookup and other cache algorithm operations at the host cache manager 120. 
Instead, the request is sent directly to the SSD 112 to perform the cache lookup operations to return the requested data for the storage address 200R. If there is a read miss, then the host cache manager 120 retrieves the requested data from the primary storage 108 and returns to the SSD 112 to cache.Cache latency may further be reduced by providing additional hardware and accelerators to implement the SSD cache manager 130 to improve the speed at which the SSD cache manager 130 performs the lookup operations to access the requested data at a cache location 300i.FIGs. 5a and 5b illustrate an embodiment of operations performed by the SSD 112 and the host system 100 components to process a write request to a storage address 200win the primary storage 108, where 200wrefers to a write storage address to which a write is directed. The write request may be initiated from an application 106 or by the host cache manager 120 to perform a write for a read miss, such as at block 530 in FIG. 5 . Upon initiating (at block 500) a write request to a target address 200wfrom an application 106 via the block layer 110 or from the host cache manager 120, the SSD driver 114 sends (at block 502) the write request to the target storage address 200wto the SSD 112.Upon the SSD 112 receiving (at block 504) a write request to the target storage address 200w, the SSD cache manager 130 applies (at block 506) a cache management mapping scheme, such as a set associative cache management scheme, to determine a set of cache locations 134iin the SSD 112 to which the target storage address 200wmaps, which set may be determined from the set bits 204 of the address 200w. If (at block 508) there is an available cache location 300ior address in the SSD namespace in the determined set of address 134i, then the SSD cache manager 130 stores (at block 510) the data for the target storage address 200win an available space in the determined set of addresses 102i(cache locations 300i). After the data is written, at block 508, the host cache manager 120 upon receiving acknowledgment of the write completing would add the written target storage address 200wto the LRU information 122, at the most recently used end of the LRU list 122 and return complete (at block 512) to the host application 106. If (at block 508) there is no available cache location 300i, in the determined set 134ito which the target storage address 200wmaps, then the SSD cache manager 130 returns (at block 514) an error message to the host system 100 indicating that there is no available space for write data to the target storage address 200w.Upon the SSD driver 114 in the host system 100 receiving (at block 516) the error message indicating a read miss, the SSD driver 114sends (at block 518) the error message to the host cache manager 120. The host cache manager 120 determines (at block 520) a set of addresses 134iin the SSD 112 to which the target storage address 200wmaps according a cache management scheme, comprising the same cache management scheme used by the SSD cache manager 130. If (at block 522) there is an available address, e.g., cache location, in the determined set of addresses 134iin the SSD namespace, then the host cache manager 120 selects (at block 524) the available address in the determined set of addresses 134iin the SSD namespace and proceeds back to block 500 to retry the write request to the selected target storage address. 
If (at block 522) there is no available storage address, e.g., cache location, in the determined set of addresses 134iin the SSD namespace, then the host cache manager 120 uses (at block 526) a cache eviction algorithm, e.g., the LRU list 122 and LRU cache algorithm, to determine an eviction storage address 200E, other than the write storage address 200w, that maps to one of the addresses in the determined set of addresses 134iin the SSD namespace. In one embodiment, the host cache manager 120 determines the eviction storage address 200Eusing a least recently used (LRU) algorithm that determines a least recently used target storage address 200LRUin the LRU information 122, that also maps to one of the addresses in the determined set 134i, different from the target storage address 200w, according to the cache mapping scheme. The LRU information 122 may indicate whether each target address in the SSD 112 has dirty data. If (at block 528) the determined eviction storage address 200Edoes not have dirty data, i.e., modified data, then the host cache manager 120 sends (at block 530) a delete request to the eviction storage address 200E, e.g., the least recently used storage address 200LRU, to the SSD 112 via the SSD driver114.Upon (at block 532) the SSD 112 receiving the delete request for the eviction storage address 200E, the SSD cache manager 130 uses (at block 534) the cache mapping scheme to determine a set of addresses 134; to which the eviction storage address maps 200Eand determine in the set of address 134; the cache location 300i, having data for the eviction storage address 200E. The SSD cache manager 130 indicates (at block 536) invalid data at the determined address 300i, e.g., cache location, in the SSD namespace having data for the eviction storage address 200E, such as by setting the valid/dirty flags 302 to indicate the data is invalid. Complete is then returned (at block 538) to the delete.Control then proceeds (at block 540) to block 542 in FIG. 5b where the host cache manager 120 upon receiving (at block 542) acknowledgment that the delete request completed, the host cache manager 120 removes (at block 544) the deleted eviction storage address 200E, e.g., LRU storage address 200LRU, from the LRU information 122, and control proceeds (at block 546) back to block 500 to retry the write request to the target storage address 200w.If (at block 528) the determined LRU storage address has dirty data, then control proceeds (at block 548) to block 550 in FIG. 5b where the host cache manager 120 sends a read request for the dirty data at the eviction storage address 200Eto the SSD 112. Upon receiving (at block 552) the read request for the eviction storage address 200E, the SSD cache manager 130 performs (at block 554) the operations at block 406 et seq. in FIG. 4 to return the dirty data at the eviction storage address 200Ein the SSD namespace. The read request at block 552 would comprise a read hit because the host cache manager 120 determined the eviction storage address from the host LRU information 122.Upon the host cache manager 120 receiving (at block 556) the requested dirty data at the eviction storage address 200E, the host cache manager 120 writes (at block 558) the received dirty data for the eviction storage address 200Eto the primary storage 108 via the block layer 126 and primary storage device driver 118. Control then proceeds (at block 560) back to block 530 in FIG. 
5a where the host cache manager 120 deletes the data at the eviction storage address 200Efrom the SSD 112.With the embodiment of operations if FIG. 5a and 5b , a write operation from an application 106 may be performed with minimal latency by bypassing any host-side cache manager to be written directly to the SSD 112. Only if there is not sufficient available space for the storage address to write, would the host cache manager 120 need to get involved to free space in the SSD 112 for the new write. Further, the host cache manager 120 would coordinate evicting data for storage addresses from the SSD 112 to ensure that any dirty data is updated to the primary storage 108 before being evicted to maintain data consistency.FIG. 6 illustrates the read hit flow through host 100 components to the SSD 112, that bypasses any cache manager 120 logic to return the data directly from the SSD 112 to substantially reduce read hit latency. Path 600 shows the application 106 read request proceeding directly to the SSD 112 without any latency introduced by a cache management layer. Decision 602 shows the SSD cache manager 130 determining whether there is read hit, and if so the SSD storage media 132 is accessed with step 604 to return the data on path 606 directly to the application 106, completely bypassing any host cache operations that would introduce latency for a read hit. In this way, FIG. 6 shows zero cache latency for a read hit to the SSD 112.FIG. 7 illustrates the read miss flow through host 100 components to the SSD 112, that involves the host cache manager 120 returning the data from the primary storage 108. Path 700 shows a write request to the SSD cache manager 130 resulting in a decision 702 of a read miss that returns an error on path 704 to the host system 100, which causes the host cache manager 120 to request the read data on path 706 from the primary storage 108, resulting in the read data form the primary storage 108 being returned on path 808 to the host cache manager 120 to return to both the application 106 on path 710 and the SSD 112 on path 712 to cache. A read miss thus eliminates the latency from having the host cache determine whether the requested data is in the SSD 112.FIG. 8 illustrates a write flow when there is not sufficient space available in the SSD 112 for write data, that involves the host cache manager 120 having to evict data from the SSD device to make space to write data for a target storage address 200w. Path 800 shows the write request going directly from the application 106 to the SSD 112, bypassing the host-side cache 124. Upon the SSD cache manager 130 determining at decision point 802 that there is not sufficient space in the SSD namespace for further write data, the error message of no space is returned on path 804 to the host cache manager 120, which then invokes host eviction logic 806 that sends a TRIM request on path 808 to remove data from the SSD namespace. Path 810 shows a retry of the write request once space is made available.The described embodiments may be implemented as a method, apparatus, device, and computer program product comprising a computer readable storage medium using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code or logic maintained in a "computer readable storage medium". The term "code" as used herein refers to software program code, hardware logic, firmware, microcode, etc. 
The computer readable storage medium, as that term is used herein, includes a tangible element, including at least one of electronic circuitry, storage materials, inorganic materials, organic materials, biological materials, a casing, a housing, a coating, and hardware. A computer readable storage medium may comprise, but is not limited to, a magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), Solid State Devices (SSD), computer encoded and readable punch cards, etc. The computer readable storage medium may further comprise a hardware device implementing firmware, microcode, etc., such as in an integrated circuit chip, a programmable logic device, a Programmable Gate Array (PGA), field-programmable gate array (FPGA), Application Specific Integrated Circuit (ASIC), etc. Still further, the code implementing the described operations may be implemented in "transmission signals", where transmission signals may propagate through space or through a transmission media, such as an optical fiber, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The program code embedded on a computer readable storage medium may be transmitted as transmission signals from a transmitting station or computer to a receiving station or computer. A computer readable storage medium is not comprised solely of transmission signals, but includes physical and tangible components. Those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise suitable information bearing medium known in the art.It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the invention.Similarly, it should be appreciated that in the foregoing description of embodiments of the invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. 
Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description.The reference characters used herein, such as b, i, and n, are used herein to denote a variable number of instances of an element, which may represent the same or different values, and may represent the same or different value when used with different or the same elements in different described instances.EXAMPLESThe following examples pertain to further embodiments.Example 1 is an apparatus for cache management operations to perform cache operations for a host system in a solid state drive, comprising: a cache memory comprising non-volatile memory, the cache memory to store data at addresses for a cache memory namespace; and a cache manager to: determine whether data for a requested storage address in a primary storage namespace received from the host system is stored at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; and return to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.In Example 2, the subject matter of examples 1 and 3-9 can optionally include that the cache manager is further to: return a message to the host system indicating that the data for the requested storage address is not stored in the cache memory namespace, wherein the message causes the host system to retrieve the data for the requested storage address from the primary storage.In Example 3, the subject matter of examples 1, 2 and 4-9 can optionally include that the cache manager is further to: receive data for the requested storage address from the host system the host system retrieved from the primary storage in response to receiving the message; and store the received data for the requested storage address in the cache volatile memory namespace.In Example 4, the subject matter of examples 1-3 and 5-9 can optionally include that the cache manager is further to: receive, from the host system, data for a target storage address of the primary storage namespace to add to the cache memory namespace; determine whether there is an available address in the cache memory namespace for the received data to which the storage address maps according to the cache mapping scheme; and storing the data for the target storage address in the determined available address in the cache memory namespace.In Example 5, the subject matter of examples 1-4 and 6-9 can optionally include that the cache manager is further to: return a message to the host system indicating that there is no available space in the cache memory namespace for the data at the target storage address; receive a delete request from the host system to delete data for an eviction storage address in the primary storage different from the target storage address, wherein both the target storage address and the eviction storage address map to a same set of addresses in the cache memory namespace; determine an eviction address in the cache memory namespace having data for the eviction storage address; delete the data at the eviction address in the cache memory namespace; and write the data for the target storage address to the eviction address in the cache memory namespace.In Example 6, the subject matter of examples 1-5 and 
7-9 can optionally include that the cache manager is further to: receive a retry write request from the host system after receiving the delete request, wherein the data for the target storage address is written to the eviction address in the cache memory namespace in response to the retry write request.In Example 7, the subject matter of examples 1-6 and 8-9 can optionally include that the cache manager is further to: receive a request from the host system to read data at the eviction storage address that comprises dirty data; and return the dirty data at the eviction address in the cache memory namespace, wherein the delete request to delete the data for the eviction storage address is received after returning the dirty data.In Example 8, the subject matter of examples 1-7 and 9 can optionally include that the cache mapping scheme comprises a set associative cache mapping scheme.In Example 9, the subject matter of examples 1-8 can optionally include that the cache memory comprises a byte addressable write-in-place cache memory.Example 10 is a computer program product that when deployed in a host system couples to a cache memory and a primary storage having a larger address space than a cache memory namespace of the cache memory, wherein the computer program product comprises a computer readable storage medium including program code that when executed by a processor is to: send a read request to the cache memory to read data at a read storage address in the primary storage; and receive, from the cache memory, data at an address in the cache memory namespace to which the read storage address maps according to a cache mapping scheme.In Example 11, the subject matter of examples 10 and 12-17 can optionally include that the program code includes a cache manager, wherein the program code when executed is further to: determine, by the cache manager, a storage address in the primary storage for data stored in the cache memory namespace to evict from the cache memory device; and send read and write requests to storage addresses in the primary storage directly to the cache memory.In Example 12, the subject matter of examples 10, 11 and 13-17 can optionally include that the read request comprises a first read request, wherein the program code when executed is further to: receive a message from the cache memory indicating that the data for the read storage address is not stored in the cache memory namespace; and send a second read request to read data at the read storage address to the primary storage.In Example 13, the subject matter of examples 10-12 and 14-17 can optionally include that the program code when executed is further to: write the read data at the read storage address returned from the primary storage to the cache memory to store in the cache memory namespace.In Example 14, the subject matter of examples 10-13 and 15-17 can optionally include that the program code when executed is further to: send a write request to write data to a target storage address in the primary storage to the cache memory device; receive from the cache memory a message indicating that there is no available space in the cache memory namespace for the target storage address; determine an eviction storage address for the primary storage for which data is stored in the cache memory namespace; and send a delete request to the cache memory to delete the data at the eviction storage address.In Example 15, the subject matter of examples 10-14 and 16-17 can optionally include that the program code when executed is 
further to: send a retry of the write of data for the target storage address to the cache memory in response to sending the delete request to cause the cache memory to write the data for the target storage address to an address in the cache memory namespace storing data for the eviction storage address.In Example 16, the subject matter of examples 10-15 and 17 can optionally include that the determine the eviction storage address comprises: use a cache mapping scheme to determine a set of addresses in the cache memory namespace to which the target storage address maps; and determine a least recently used target storage address for the primary storage that maps to one of the addresses in the determined set, wherein the eviction storage address comprises the determined least recently used target storage address.In Example 17, the subject matter of examples 10-16 can optionally include that the program code when executed is further to: determine whether data for the eviction storage address stored in the cache memory comprises dirty data; send a read request to the cache memory to read the dirty data at the eviction storage address; and write the read dirty data, received from the cache memory in response to the read request to read the dirty data at the eviction storage address, to the primary storage, wherein the delete request is sent to the cache memory in response to writing the read dirty data to the primary storage.Example 18 is a system coupled to perform cache operations for a host system in a solid state drive for data requests for a primary storage, including: a host system including a processor and a memory including a host cache manager executed by the processor; and a cache memory, including: a storage media in which data is stored at addresses for a cache memory namespace; and a cache memory cache manager to: determine whether data for a requested storage address in the primary storage namespace received from the host system is stored at an address in the cache memory namespace at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; and return to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.In Example 19, the subject matter of examples 18 and 20-22 can optionally include that the cache memory cache manager is further to return a message to the host system indicating that the data for the requested storage address is not stored in the cache memory namespace, and wherein the host cache manager retrieves the data for the requested storage address from the primary storage in response to the message.In Example 20, the subject matter of examples 18, 19, 21, and 22 can optionally include that the cache memory cache manager is further to: receive, from the host system, data for a target storage address of the primary storage namespace to add to the cache memory namespace; determine whether there is an available address in the cache memory namespace for the received data to which the storage address maps according to the cache mapping scheme; and store the data for the target storage address in the determined available address in the cache memory namespace.In Example 21, the subject matter of examples 18, 19, 20, and 22 can optionally include that the 
host cache manager and the cache memory cache manager are further to: return, by the cache memory cache manager, a message to the host system indicating that there is no available space in the cache memory namespace for the data at the target storage address; in response to the message indicating that there is no available space, the host cache manager is further to: determine an eviction storage address for the primary storage for which data is stored in the cache memory namespace; and send a delete request to the cache memory to delete the data at the eviction storage address; in response to the delete request, the cache memory cache manager is further to: determine an eviction address in the cache memory namespace having data for the eviction storage address; delete the data at the eviction address in the cache memory namespace; and write the data for the target storage address to the eviction address in the cache memory namespace.In Example 22, the subject matter of examples 18-21 can optionally include that the host cache manager and the cache memory cache manager are further to: determine, by the host cache manager, whether data for the eviction storage address stored in the cache memory comprises dirty; send, by the host cache manager, a read request to the cache memory to read the dirty data at the eviction storage address; and write the read dirty data, received from the cache memory in response to the read request to read the dirty data at the eviction storage address, to the primary storage, wherein the delete request is sent to the cache memory in response to writing the read dirty data to the primary storage.Example 23 is a method for performing cache operations for a host system in a solid state drive, comprising: determining whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in a cache memory namespace of the cache memory to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; and returning to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.In Example 24, the subject matter of examples 23, 25, 26 can optionally include returning a message to the host system indicating that the data for the requested storage address is not stored in the cache memory namespace, wherein the message causes the host system to retrieve the data for the requested storage address from the primary storage.In Example 25 the subject matter of examples 23, 24, 26 can optionally include receiving, from the host system, data for a target storage address of the primary storage namespace to add to the cache memory namespace; determining whether there is an available address in the cache memory namespace for the received data to which the storage address maps according to the cache mapping scheme; and storing the data for the target storage address in the determined available address in the cache memory namespace.In Example 26, the subject matter of examples 23-25 can optionally include returning a message to the host system indicating that there is no available space in the cache memory namespace for the data at the target storage address; receiving a delete request from the host system to delete data for an eviction storage address in the 
primary storage different from the target storage address, wherein both the target storage address and the eviction storage address map to a same set of addresses in the cache memory namespace; determining an eviction address in the cache memory namespace having data for the eviction storage address; deleting the data at the eviction address in the cache memory namespace; and writing the data for the target storage address to the eviction address in the cache memory namespace.Example 27 is a system for performing cache management operations coupled to a cache memory and a primary storage having a larger address space than a cache memory namespace of the cache memory, that executes program code to: send a read request to the cache memory to read data at a read storage address in the primary storage; and receive, from the cache memory, data at an address in the cache memory namespace to which the read storage address maps according to a cache mapping scheme.In Example 28, the subject matter of examples 27 and 29-34 can optionally include that the program code includes a cache manager, wherein the program code when executed is further to: determine, by the cache manager, a storage address in the primary storage for data stored in the cache memory namespace to evict from the cache memory device; and send read and write requests to storage addresses in the primary storage directly to the cache memory.In Example 29, the subject matter of examples 27, 28 and 30-34 can optionally include that the read request comprises a first read request, wherein the program code when executed is further to: receive a message from the cache memory indicating that the data for the read storage address is not stored in the cache memory namespace; and send a second read request to read data at the read storage address to the primary storage.In Example 30, the subject matter of examples 27-29 and 31-34 can optionally include that the program code when executed is further to: write the read data at the read storage address returned from the primary storage to the cache memory to store in the cache memory namespace.In Example 31, the subject matter of examples 27-30 and 32-34 can optionally include that the program code when executed is further to: send a write request to write data to a target storage address in the primary storage to the cache memory device; receive from the cache memory a message indicating that there is no available space in the cache memory namespace for the target storage address; determine an eviction storage address for the primary storage for which data is stored in the cache memory namespace; and send a delete request to the cache memory to delete the data at the eviction storage address.In Example 32, the subject matter of examples 27-31 and 33-34 can optionally include that the program code when executed is further to: send a retry of the write of data for the target storage address to the cache memory in response to sending the delete request to cause the cache memory to write the data for the target storage address to an address in the cache memory namespace storing data for the eviction storage address.In Example 33, the subject matter of examples 27-32 and 34 can optionally include that the determine the eviction storage address comprises: use a cache mapping scheme to determine a set of addresses in the cache memory namespace to which the target storage address maps; and determine a least recently used target storage address for the primary storage that maps to one of the addresses in 
the determined set, wherein the eviction storage address comprises the determined least recently used target storage address.In Example 34, the subject matter of examples 27-33 can optionally include that the program code when executed is further to: determine whether data for the eviction storage address stored in the cache memory comprises dirty data; send a read request to the cache memory to read the dirty data at the eviction storage address; and write the read dirty data, received from the cache memory in response to the read request to read the dirty data at the eviction storage address, to the primary storage, wherein the delete request is sent to the cache memory in response to writing the read dirty data to the primary storage.Example 35 is an apparatus for performing cache operations for a host system in a solid state drive, comprising: means for determining whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in a cache memory namespace of the cache memory to which the requested storage address maps according to a cache mapping scheme, wherein multiple of the storage addresses in the primary storage map to one address in the cache memory namespace; and means for returning to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.Example 36 is an apparatus comprising means to perform a method as claimed in any preceding claim.Example 37 is a machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as claimed in any preceding claim. |
An aluminum interconnect which extends adjacent to and is insulated from a stacked capacitor structure to facilitate electrical communication between an active device region of a semiconductor substrate of a semiconductor device structure and a bit line extending above the semiconductor substrate. The aluminum interconnect is disposed within a trench and may include a metal silicide layer adjacent the active device region to form a buried metal diffusion layer. The aluminum interconnect may also include a metal nitride layer disposed between the metal silicide and aluminum. The invention also includes methods of fabricating aluminium interconnects adjacent stacked capacitor structures and semiconductor device structures which include the aluminum interconnects. |
What is claimed is: 1. A method for fabricating a semiconductor device structure, comprising:providing a semiconductor substrate including at least one active device region; fabricating a stacked capacitor structure on said semiconductor substrate; forming a trench through a portion of said stacked capacitor structure located over said at least one active device region; forming a diffusion barrier on at least a portion of a surface of said at least one active device region; depositing aluminum over the stacked capacitor structure and in said trench to form an interconnect; and forming from said aluminum over the stacked capacitor structure at least one conductive line in electrical communication with said interconnect. 2. The method of claim 1, wherein said forming said trench includes exposing said at least one active device region.3. The method of claim 1, further comprising insulating said trench from said stacked capacitor structure.4. The method of claim 1, wherein said forming said diffusion barrier comprises a metal silicide layer at at least said portion of said surface of said at least one active device region.5. The method of claim 4, wherein said forming said metal silicide layer includes selectively depositing said metal silicide layer.6. The method of claim 4, wherein said forming said metal silicide layer includes:depositing a metal or metal nitride; and annealing said metal or metal nitride to said at least one active device region. 7. The method of claim 4, wherein said forming said metal silicide layer comprises forming a buried metal diffusion layer adjacent said at least one active device region.8. The method of claim 4, further comprising disposing a metal nitride layer over said metal silicide layer.9. The method of claim 1, wherein said forming said at least one conductive line comprises patterning said aluminum over said stacked capacitor structure.10. The method of claim 1, further comprising removing said aluminum from locations over said stacked capacitor structure.11. The method of claim 10, wherein said removing comprises chemical-mechanical planarizing said aluminum.12. The method of claim 10, wherein said removing comprises etching said aluminum.13. The method of claim 1, wherein said forming said at least one conductive line comprises depositing a material layer over the semiconductor device structure and patterning said material layer.14. A method for fabricating a semiconductor device structure, comprising:forming a capacitor structure over at least one active device region of a semiconductor substrate; exposing said at least one active device region through said capacitor structure; forming a buried metal diffusion layer on said at least one active device region; and disposing aluminum over at least said stacked capacitor structure and over said buried metal diffusion layer. 15. The method of claim 14, wherein said forming said buried metal diffusion layer includes forming a metal silicide layer over said at least one active device region.16. The method of claim 15, wherein said forming said metal silicide layer includes selectively depositing said metal silicide over said at least one active device region.17. The method of claim 15, wherein said forming said buried metal diffusion layer includes depositing a layer of metal or metal nitride over at least said at least one active device region and annealing said buried metal diffusion layer to said metal or metal nitride layer.18. 
The method of claim 15, wherein said forming said buried metal diffusion layer further includes forming a metal nitride layer adjacent said metal silicide layer.19. The method of claim 15, wherein said forming said buried metal diffusion layer further includes depositing aluminum in electrical communication with said at least one active device region.20. The method of claim 14, further comprising forming at least one conductive line above said semiconductor substrate.21. The method of claim 20, wherein said forming said at least one conductive line comprises forming said at least one conductive line from aluminum.22. The method of claim 20, wherein said forming said at least one conductive line includes patterning said at least one conductive line from a layer comprising said aluminum.23. The method of claim 20, wherein said forming said at least one conductive line includes chemical-mechanical planarizing a surface of the semiconductor device structure.24. The method of claim 23, wherein said forming said at least one conductive line further includes depositing a layer of a material in electrical communication with said buried metal diffusion layer.25. The method of claim 24, wherein said forming said at least one conductive line further includes patterning said layer of said material.26. A method for fabricating an interconnect adjacent a stacked capacitor structure of a semiconductor device structure, comprising:forming a trench through the stacked capacitor structure; forming a diffusion barrier at least in a bottom of said trench; and disposing aluminum in said trench and over said diffusion barrier. 27. The method of claim 26, wherein said forming said trench comprises etching said trench.28. The method of claim 26, further comprising insulating said trench from said stacked capacitor structure.29. The method of claim 26, wherein said forming said trench includes exposing an active device region of a semiconductor substrate of the semiconductor device structure.30. The method of claim 29, wherein said forming said diffusion barrier comprises forming a metal silicide layer on said active device region.31. The method of claim 30, wherein said forming said metal silicide layer includes selectively depositing a metal silicide over said active device region.32. The method of claim 30, wherein said forming said metal silicide layer includes:depositing a layer of a metal or metal nitride over said active device region; and annealing said layer of a metal or metal nitride to said active device region. 33. The method of claim 29, further comprising forming a metal nitride layer over said active device region.34. The method of claim 26, further comprising disposing aluminum over the semiconductor device structure.35. The method of claim 34, further comprising patterning said aluminum.36. The method of claim 34, further comprising chemical-mechanical planarizing a surface of the semiconductor device structure.37. The method of claim 26, further comprising forming at least one conductive line over the stacked capacitor structure.38. The method of claim 37, wherein said forming said at least one conductive line includes forming a layer of a material over the semiconductor device structure.39. The method of claim 38, wherein said forming said at least one conductive line further comprises patterning said layer of said material.40. The method of claim 39, wherein said at least one conductive line is in electrical communication with said aluminum in said trench. |
CROSS-REFERENCE TO RELATED APPLICATIONThis application is a continuation of application Ser. No. 09/102,331, filed Jun. 22, 1998, issued as U.S. Pat. No. 6,165,863, which is assigned to the assignee of the present application.BACKGROUND OF THE INVENTION1. Field of the InventionThe present invention relates to stacked capacitor structures of semiconductor devices. In particular, the present invention relates to semiconductor device structures which include aluminum plugs disposed between the active device regions and bit lines thereof. More specifically, the present invention relates to semiconductor device structures which include an aluminum-filled trench that electrically connects a bit line to an active device region positioned between adjacent stacked capacitor structures.2. Background of Related ArtStacked capacitors are employed in many state of the art semiconductor devices to maintain high storage capacitance despite the ever-increasing densities of such semiconductor devices. Stacked capacitors typically make an electrical connection with a diffusion region, or active device region, of a semiconductor substrate, such as silicon, polysilicon, gallium arsenide, or indium phosphide. Some conventional processes for fabricating stacked capacitors on semiconductor device structures facilitate increased densities by employing electrically conductive layers (e.g., polysilicon layers) that are somewhat convoluted or have large surface areas, and which project outwardly relative to and electrically contact their associated active device regions. The remainders of the capacitor structures are then fabricated on the electrically conductive layers.Many stacked capacitor structures include electrically conductive contacts between the active device regions and the bit lines thereof. Typically, such electrically conductive contacts are fabricated from polysilicon, which withstands the high temperature processes (e.g., thermal oxidation processes or thermal anneal processes) that are usually performed subsequent to the fabrication of contacts on semiconductor device structures. Such contacts, however, may create a somewhat undesirable amount of contact resistance during operation of the semiconductor device.Metals have also been employed as the contact material between the active device region and bit lines of semiconductor devices and through the stacked capacitor structures thereof. Again, due to the high process temperatures that are employed following the fabrication of the contacts, metals that will withstand high process temperatures are typically employed in the contacts. Metals that will withstand such high process temperatures are commonly referred to as "refractory metals" and include titanium (Ti), tungsten (W), molybdenum (Mo), and tantalum (Ta). While these metals and their silicides have low resistivities relative to other metals, their resistivities ([rho]Ti=43-47 [mu][Omega]-cm, [rho]W=5.3 [mu][Omega]-cm, [rho]Mo=5 [mu][Omega]-cm, and [rho]Ta=13-16 [mu][Omega]-cm) may be somewhat undesirable during the operation of state of the art very large scale integration (VLSI) and ultra large scale integration (ULSI) semiconductor devices. 
As metals of higher resistivity are employed in such semiconductor devices, the power requirements and operating temperature of such semiconductor devices increase undesirably.Conventionally, aluminum (Al) has been widely employed as an electrically conductive material in semiconductor devices, as it has low resistivity ([rho]Al=2.7 [mu][Omega]-cm and is compatible with both silicon (Si) and silicon dioxide (SiO2). Aluminum is not, however, typically employed in self-aligned processes due to its inability to withstand high temperature processing, such as the rapid thermal anneal processes that may be employed in fabricating self-aligned silicide layers.What is needed is a process for fabricating a stacked capacitor structure on a semiconductor device structure which increases the speed of the semiconductor device and reduces the interconnect resistance and power consumption thereof and a stacked capacitor and semiconductor device structure fabricated by such a process.BRIEF SUMMARY OF THE INVENTIONThe present invention includes a stacked capacitor structure and methods of fabricating the stacked capacitor structure which address the foregoing needs.The stacked capacitor structure of the present invention includes a trench disposed over an active device region of a semiconductor device structure. The trench extends downward through the stacked capacitor structure to the active device region of the semiconductor substrate (e.g., silicon, gallium arsenide, indium phosphide), exposing same through the stacked capacitor structure. A layer of self-aligned metal silicide, or "salicide", is disposed within the trench, adjacent the active device region and preferably defining a buried metal diffusion (BMD) layer with the active device region. An aluminum interconnect, or "contact", is disposed within the trench in contact with the metal silicide and substantially filling the trench. The aluminum interconnect preferably provides an electrical link between the active device region and a bit line that extends above the stacked capacitor structure and electrically contacts the interconnect.A method of fabricating a stacked capacitor structure is also within the scope of the present invention. The method includes fabricating a stacked capacitor structure over a semiconductor device structure and defining a trench through the stacked capacitor structure and over an active device region of the semiconductor device structure. Processes for fabricating stacked capacitor structures and defining trenches therethrough to an underlying active device region, which may be employed in the method of the present invention, are disclosed in U.S. Pat. No. 5,498,562 ("the '562 patent"), which issued to Dennison et al. on Mar. 12, 1996, the disclosure of which is hereby incorporated by reference in its entirety.A layer of a metal that will form a salicide with the silicon exposed through the trench, such as titanium or tungsten, is then deposited over the semiconductor device structure. Known processes, such as rapid thermal anneal (RTA) or silicide deposition processes, may then be employed to form the salicide layer, such as titanium silicide (TiSix, predominantly TiSi2) or tungsten silicide (WSix, predominantly WSi2), which is typically referred to as a "selective" contact, over the active device region of the semiconductor device structure. 
The formation of suicides such as TiSi2 and WSi2 is said to be self-aligned since the silicide forms only over exposed semiconductor substrate (e.g., silicon and polysilicon) regions of a semiconductor device structure. Everywhere else, the metal film overlies an insulative, substantially non-reactive oxide layer, and may subsequently be removed. Preferably, the metal silicide diffuses into the silicon and defines a BMD layer. A metal nitride layer may also be fabricated over the selective contact by known techniques. Such metal nitride layers are typically referred to as "barrier" layers, as they prevent the diffusion of silicon and silicide into any metal layer or structure that is subsequently fabricated adjacent thereto.An interconnect is fabricated in the trench by depositing aluminum over the semiconductor device structure in a manner that substantially fills the trench. Known processes, such as physical vapor deposition (PVD) and chemical vapor deposition (CVD) techniques, may be employed to deposit aluminum over the semiconductor device structure. The aluminum that covers other areas of the semiconductor device structure may then be removed by known processes, such as by known planarization (e.g., by chemical-mechanical polishing (CMP) techniques) or etching techniques, which do not remove aluminum from the trench. Additional layers and structures may then be fabricated or defined above the stacked capacitor, including, without limitation, bit lines that are in electrical contact with one or more corresponding aluminum interconnects.Alternatively, portions of the aluminum layer that overlie the semiconductor device structure may be selectively removed therefrom by known techniques, such as masking and etching processes, in order to define bit lines that are integral with the aluminum interconnects and extend over an active surface of the semiconductor device structure. Such aluminum bit lines may be desirable since they may further reduce contact resistance and are compatible with the adjacent silicon dioxide of the semiconductor device structure.The advantages of the present invention will become apparent to those of skill in the art through a consideration of the ensuing description, the accompanying drawings, and the appended claims.BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGSFIG. 1 is a cross-sectional schematic representation of a semiconductor device structure including an aluminum interconnect extending from an active device region of the semiconductor substrate and through a stacked capacitor structure to a bit line; andFIGS. 2-8 are cross-sectional schematic representations which illustrate a process of fabricating the semiconductor device structure of FIG. 1 in accordance with the present invention.DETAILED DESCRIPTION OF THE INVENTIONWith reference to FIG. 1, a semiconductor device structure 10 according to the present invention is shown. 
Semiconductor device structure 10 includes a semiconductor substrate 12, such as silicon, gallium arsenide, or indium phosphide, a field oxide layer 14 disposed over various regions of semiconductor substrate 12, active device regions 16 in semiconductor substrate 12, word lines 18 extending over semiconductor substrate 12 and field oxide layer 14, and a stacked capacitor structure 20 disposed over word lines 18 and active device regions 16.A trench 22 extends through stacked capacitor structure 20, exposing a source/drain 24, or p-n region, of active device region 16 to an active surface 11 of semiconductor device structure 10. A metal silicide selective contact 38 may be disposed over source/drain 24, and preferably defines a buried metal diffusion layer 39 in the semiconductor substrate 12 of source/drain 24. Selective contact 38 preferably comprises titanium silicide. A metal nitride layer 40, preferably titanium nitride (TiN), may be disposed over selective contact 38. The remainder of trench 22 is filled with aluminum, which defines an aluminum interconnect 34, or contact or plug.Aluminum interconnect 34 is in electrical communication with a bit line 36 that extends over semiconductor device structure 10 above the stacked capacitor structures 20 thereof. Bit line 36 may be fabricated from an electrically conductive material, including, without limitation, metals such as aluminum, tungsten and titanium, electrically conductive polymers, and doped polysilicon. If bit line 36 is fabricated from aluminum, bit line 36 and aluminum interconnect 34 are preferably integral.Referring now to FIGS. 2-8, a method of fabricating a semiconductor device structure 10 in accordance with the present invention is illustrated. FIG. 2 illustrates a semiconductor device structure 10 with active device regions 16, word lines 18, and a stacked capacitor structure 20 disposed thereon. Each of these features may be fabricated as known in the art, such as by the process disclosed in the '562 patent.Turning now to FIG. 3, a trench 22 is defined through stacked capacitor structure 20 by known processes, such as the mask and anisotropic etch processes that are disclosed in the '562 patent. Any electrically conductive features of the stacked capacitor structure 20, such as the electrically conductive (typically polysilicon) layer 21 thereof, that are exposed to trench 22 may be oxidized by known processes to insulate these electrically conductive features from the subsequently fabricated aluminum interconnect 34 (see FIG. 1), as disclosed in the '562 patent. Preferably, in order to prevent oxidation of source/drain 24 as any exposed electrically conductive features of stacked capacitor structure 20 are insulated, such insulation is performed before trench 22 has been completely defined and, therefore, prior to the exposure of source/drain 24 through trench 22.With reference to FIG. 4, a selective contact 38 of a metal silicide may then be fabricated over source/drain 24. Metal suicides that may be employed as selective contact 38 include, without limitation, titanium silicide (TiSix, predominantly TiSi2), tungsten silicide (WSix, predominantly WSi2), molybdenum silicide (MoSix, predominantly MoSi2), and platinum silicide (PtSix, predominantly PtSi2). Known processes may be employed to form selective contact 38. 
An exemplary process for fabricating selective contact 38 includes the deposition of a metal or metal nitride over semiconductor device structure 10, a rapid thermal anneal of the metal or metal nitride to the exposed regions of semiconductor substrate 12 to form the salicide selective contact 38, and removal of the non-reacted metal or metal nitride from the active surface 11 of the semiconductor device structure 10.Alternatively, selective contact 38 may be selectively deposited onto source/drain 24 by chemical vapor deposition (CVD) of a metallic precursor and a silicon-containing compound. For example, when titanium silicide selective contacts are desired, a titanium tetrahalide, such as titanium tetrachloride (TiCl4), is reacted with either silane (SiH4) or dichlorosilane (DCS, SiH2Cl2) as follows:TiCl4+SiH4->TiSi2vTiCl4+SiH2Cl2->TiSi2vIn order to optimize the selectivity of these titanium silicide deposition reactions for the semiconductor substrate 12, which is exposed through trench 22, a deposition temperature in the range of about 650[deg.] C. to about 750[deg.] C. is preferable. Since minimal amounts of the semiconductor substrate 12 are consumed by these reactions, the deposition reaction is allowed to continue until a selective contact 38 of the desired thickness is formed.Other exemplary metal silicide deposition processes that may be employed in the present invention to fabricate selective contact 38 include the reaction of a titanium halide and/or a gaseous titanium organometallic precursor with a silicon-containing compound of the formula SinH2n+2, as disclosed in U.S. Pat. No. 5,240,739, issued to Trung Doan et al. on Aug. 31, 1993; U.S. Pat. No. 5,278,100, issued to Trung Doan et al. on Jan. 11, 1994; and U.S. Pat. No. 5,376,405, issued to Trung Doan et al. on Dec. 27, 1994, the disclosures of each of which are hereby incorporated by reference in their entirety. Titanium halides that may be employed in the deposition of selective contact 38 over source/drain 24 include, without limitation, TiCl4, titanium tetraboride, titanium tetrafluoride, titanium tetraiodide, and subhalides. Titanium organometallic precursors which may be used to fabricate selective contact 38 include, but are not limited to, compounds of the formula Ti(NR2)4, where the titanium atom is bonded to the nitrogen atom and R comprises hydrogen or a carbon-containing radical. Exemplary compounds include tetradimethylamido titanium (TDMAT or Ti(N(CH3)2)4 and Ti(N(C2H5)2)4).The following are exemplary chemical reactions for depositing metal silicide on source/drain 24:nTiCl4+SinH2n+2->nTiSi+4nHCl+H2+by-products;nTiCl4+2SinH2n+2->nTiSi+4nHCl+2H2+by-products;TiCl4+SinH2n+2->Ti5Si3+HCl+H2+by-products;TDMAT+Si2H6->TiSi2+organic by-products;TDMAT+SinH2n+2->(n/2)TiSi2+organic by-products;andTi(NR2)4+SiH4->TiSix+TiSiyN1-y+organic by-products,where x is predominantly equal to two, y is zero or one and n is an integer equal to zero or more. The reaction between TiCl4 and Si2H6 may be employed to deposit selective contact 38 over source/drain 24 at a temperature as low as about 400[deg.] C. The reaction of TiCl4 and Si3H8 deposits a titanium silicide selective contact 38 on a semiconductor substrate at a temperature of about 300[deg.] C. 
or higher.Preferably, selective contact 38 and semiconductor substrate 12 diffuse into each other to define a buried metal diffusion layer 39.Although silicide deposition in accordance with the foregoing processes is selective for semiconductor substrate 12, residual metal silicide may be deposited above stacked capacitor structure 20. Thus, cleaning of semiconductor device structure 10 may be desirable in order to remove any residual metal silicide from above stacked capacitor structure 20. Cleaning agents such as chlorine (Cl2), hydrochloric acid (HCl) and hydrofluoric acid (HF) may be employed in known cleaning techniques (e.g., thermal gas, plasma assisted, and remote plasma activated cleaning) to clean any residual metal silicides from field oxide layer 14.Referring now to FIG. 5, upon depositing a selective contact 38 of the desired thickness, a metal nitride layer 40, which is also referred to as a barrier layer, may be deposited over selective contact 38. A metallic precursor and another reactant, which are collectively referred to as second reactants, may be reacted to deposit metal nitride layer 40 over semiconductor device structure 10. The metallic precursor, which is preferably TiCl4 when selective contact 38 is comprised of titanium silicide, is reacted with ammonia (NH3) to initiate the following chemical reaction, which deposits a metal nitride layer 40 of titanium nitride over the surface of semiconductor device structure 10:TiCl4+NH3->TiNv,including above the stacked capacitor structures 20 and selective contacts 38 of the semiconductor device structure 10 (i.e., a "blanket" deposition occurs). The duration of the foregoing reaction is dependent upon the desired thickness of metal nitride layer 40. This reaction may also be carried out in the presence of nitrogen gas (N2), as discussed in U.S. Pat. No. 5,416,045 ("the '045 patent"), issued to Ralph E. Kauffman et al. on May 16, 1995, the disclosure of which is hereby incorporated by reference in its entirety. As explained in the '045 patent, nitrogen gas facilitates the deposition of titanium nitride at temperatures of about 500[deg.] C. or lower. Hydrogen gas (H2) may also be introduced into the reaction chamber to facilitate the formation of hydrochloric acid from chlorine.Other chemical reactions are also useful for depositing metal nitride layer 40. U.S. Pat. No. 5,399,379 ("the '379 patent"), issued to Gurtej S. Sandhu on Mar. 21, 1995, the disclosure of which is hereby incorporated by reference in its entirety, describes such a reaction, whereby one or more organometallic compounds of the formula Ti(NR2)4, which is also referred to as a tetrakis-dialkylamido-titanium, are reacted with a halide gas (e.g., F2, Cl2, Br2) to form a titanium nitride film on a semiconductor device. In each Ti(NR2)4 molecule, the titanium atom is single-bonded to four nitrogen atoms, each of which are also single-bonded to two carbon-containing radical (R) groups, which include hydrogen atoms or alkyl groups.Another exemplary titanium nitride deposition reaction is disclosed in U.S. Pat. No. 5,254,499 ("the '499 patent"), issued to Gurtej S. Sandhu et al. on Oct. 19, 1993, the disclosure of which is hereby incorporated by reference in its entirety. 
According to the '499 patent, a titanium nitride layer may also be deposited by reacting one or more compounds of the general formula Ti(NR2)4, where the titanium atom is bonded to a nitrogen atom, which is in turn bonded to two hydrogen atoms or a carbon-containing radical (R), with ammonia (NH3). The following United States Patents disclose various other methods for depositing titanium nitride films, wherein the second reactants are Ti(NR2)4 and ammonia: U.S. Pat. No. 5,192,589, issued to Gurtej S. Sandhu on Mar. 9, 1993; U.S. Pat. No. 5,139,825, issued to Roy G. Gordon et al. on Aug. 18, 1992; and U.S. Pat. No. 5,089,438, issued to Avishay Katz on Feb. 18, 1992, the disclosures of each of which are hereby incorporated by reference in their entirety.U.S. Pat. No. 5,246,881, issued to Gurtej S. Sandhu et al. on Sep. 21, 1993, the disclosure of which is hereby incorporated by reference in its entirety, discloses another method for depositing a titanium nitride film, wherein the second reactants are one or more compounds of the formula Ti(NR2)4, where the titanium atom is bonded to the nitrogen atom which is, in turn, bonded to two hydrogen atoms or a carbon-containing radical (R), and an activated species which attacks the R-nitrogen bonds of the Ti(NR2)4, and which will convert the activated species to a volatile compound. The activated species include halogens, ammonia, and hydrogen, and are radiofrequency (RF) activated remote from the Ti(NR2)4.Another titanium nitride deposition reaction that is useful in the method of the present invention is disclosed in U.S. Pat. No. 5,227,334, issued to Gurtej S. Sandhu on Jul. 13, 1993, which is hereby incorporated by reference in its entirety. The second reactants of that process include a titanium-containing compound, such as Ti(NR2)4, and nitrogen trifluoride (NF3).Alternatively, metal nitride layer 40 may comprise a mixed phase layer, such as the TiN/TiSix layer deposited by the method disclosed in U.S. Pat. No. 5,525,518 ("the '518 patent"), issued to Gurtej S. Sandhu et al. on Oct. 12, 1993, the disclosure of which is hereby incorporated by reference in its entirety. The process of the '518 patent includes reacting Ti(NR2)4, where the titanium atom is bonded to the nitrogen atom which is, in turn, bonded to two hydrogen atoms or a carbon-containing radical (R), with an organic silane reactive gas, such as tris(dimethylamino) silane (SIN).FIG. 6 illustrates the selective removal of metal nitride layer 40 from the active surface 11 of semiconductor device structure 10. Known patterning processes, such as mask and etch techniques, may be employed to selectively remove metal nitride layer 40 from various regions of the semiconductor device structure (e.g., from above the stacked capacitor structures 20 thereof), while metal nitride layer 40 remains over selective contact 38. Alternatively, a layer 42 (see FIG. 7) of aluminum may be disposed over metal nitride layer 40 prior to such patterning.With reference to FIG. 7, a layer 42 of aluminum may be disposed over semiconductor device structure 10 and within trench 22 by known processes, such as PVD (e.g., sputtering, evaporation, or other PVD processes) or CVD. Aluminum layer 42 may be patterned by known techniques, such as masking and etching, to define bit lines 36 (see FIG. 1) therefrom and integral therewith. 
Alternatively, the layer 42 of aluminum overlying semiconductor device structure 10 may be substantially completely removed from above the stacked capacitor structures 20 thereof by known techniques, such as etch processes or planarization processes (e.g., chemical/ mechanical planarization (CMP)) that will leave aluminum interconnect 34 substantially intact.Referring to FIG. 8, if aluminum layer 42 is removed from active surface 11, a bit line 36 comprised of an electrically conductive material, such as a metal (e.g., tungsten, titanium, aluminum), an electrically conductive polymer, or polysilicon, may be fabricated above stacked capacitor structure 20 and in electrical contact with aluminum interconnect 34. Known metal layer fabrication processes, such as PVD or CVD processes, may be employed to deposit a layer of metal from which bit line 36 is to be defined by known patterning techniques, such as mask and etch processes.Additional structures and layers may then be fabricated over the active surface 11 of semiconductor device structure 10 by known processes.The semiconductor device structure 10 (see FIG. 1) of the present invention may have increased speed and lower power consumption than many state of the art semiconductor devices due to the use of aluminum, which has a low resistivity, in interconnects 34 and due to the salicide selective contact 38 and the buried metal diffusion layer 39, each of which may reduce contact resistance.In addition, the aluminum interconnects 34 of semiconductor device structure 10 of the present invention may also facilitate further increases in the density of semiconductor device structures due to the low resistivity of aluminum and, thus, the potentially thinner interconnects 34 that may be fabricated through the stacked capacitor structures 20 of such semiconductor devices.Although the foregoing description contains many specifics, these should not be construed as limiting the scope of the present invention, but merely as providing illustrations of some of the presently preferred embodiments. Similarly, other embodiments of the invention may be devised which do not depart from the spirit or scope of the present invention. Features from different embodiments may be employed in combination. The scope of the invention is, therefore, indicated and limited only by the appended claims and their legal equivalents, rather than by the foregoing description. All additions, deletions and modifications to the invention as disclosed herein which fall within the meaning and scope of the claims are to be embraced thereby. |
An FPGA configuration memory is divided into columnar frames each having a unique address. Configuration data is loaded into a configuration register, which transfers configuration data frame by frame in parallel. In a preferred embodiment, an input register, a shadow input register and a multiplexer array permit efficient configuration data transfer using a larger number of input bits than conventional FPGAs. A flexible external interface enables connection with bus sizes varying from a predetermined maximum width down to a selected fraction thereof. Configuration data transfer is made more efficient by using shadow registers to drive such data into memory cells on a frame-by-frame basis with a minimum of delay, and by employing a multiplexer array to exploit a wider configuration data transfer bus. The speed of configuration readback is made substantially equal to the rate of configuration data input by employing configuration register logic that supports bidirectional data transfer. Using the invention, a bit stream designed for an old device can be used for a new device having additional configuration memory cells. |
What is claimed is: 1. A field programmable gate array (FPGA) comprising: a configurable logic block (CLB) having a plurality of associated rows of configuration memory cells, wherein a first subset of the rows controls a first set of functions in the CLB and a second subset of the rows controls a second set of functions in the CLB; and a configuration state machine that controls the loading of configuration data values into the configuration memory cells, wherein the configuration state machine causes valid configuration data values to be loaded into all of the rows in a first mode, and wherein the configuration state machine causes valid configuration data values to be loaded only into the first subset of rows in a second mode. 2. The FPGA of claim 1, wherein the configuration state machine is configured to select the first mode or the second mode in response to a configuration instruction set. 3. The FPGA of claim 2, wherein the configuration instruction set comprises an external signal. 4. The FPGA of claim 1, further comprising a configuration register having the capacity to store a configuration data value for each row of configuration memory cells. 5. The FPGA of claim 4, further comprising an input bus for receiving a plurality of configuration data values in parallel, wherein the input bus is coupled to the configuration register. 6. The FPGA of claim 5, wherein the width of the configuration register is not the same as the width of the input bus, the FPGA further comprising a multiplexer array for routing the configuration data values from the input bus to the configuration register. 7. The FPGA of claim 1, wherein there are 18 rows of configuration memory cells in the first subset, and 2 rows of configuration memory cells in the second subset. 8. The FPGA of claim 1, wherein the configuration state machine is configured to load the second subset of rows with disable bits in the second mode. 9. A field programmable gate array (FPGA), comprising: an array of configurable logic blocks (CLBs), wherein each CLB has an associated plurality of rows of configuration memory cells, wherein a first subset of rows controls a first set of functions within each CLB, and a second subset of rows controls a second set of functions within each CLB; and a configuration state machine that controls the loading of configuration data values into the configuration memory cells, wherein: the configuration state machine causes valid configuration data values to be loaded into all of the rows in a first mode, and the configuration state machine causes valid configuration data values to be loaded only into the first subset of rows in a second mode. 10. The FPGA of claim 9, further comprising a configuration register that stores a configuration data value for each row of configuration memory cells. 11. The FPGA of claim 10, further comprising an input bus for receiving a plurality of configuration data values in parallel, wherein the input bus is coupled to the configuration register. 12. The FPGA of claim 11, wherein the configuration register has a width that is equal to the lowest multiple of the number of rows associated with each CLB that is greater than the width of the input bus. 13. The FPGA of claim 11, further comprising a multiplexer array for routing the configuration data values from the input bus to the configuration register. 14. 
A method of configuring a configurable logic block (CLB) that has a plurality of rows of configuration memory cells, wherein a first subset of the rows controls a first set of functions in the CLB and a second subset of the rows controls a second set of functions in the CLB, the method comprising: loading valid configuration data values into all of the rows of configuration memory cells in a first mode; and loading valid configuration data values only into the first subset of rows of configuration memory cells in a second mode. 15. The method of claim 14, further comprising selecting the first mode or the second mode in response to a configuration instruction set. 16. The method of claim 15, further comprising providing the configuration instruction set from an external source. 17. The method of claim 14, further comprising loading the valid configuration data values into a configuration register prior to the loading steps. 18. The method of claim 17, further comprising: loading the configuration register in a sequence having a first number of cycles in the first mode; and loading the configuration register in a sequence having a second number of cycles in the second mode. 19. The method of claim 14, further comprising receiving the configuration data values in parallel on an input bus. 20. The method of claim 14, further comprising loading the second subset of rows with disable bits in the second mode. |
FIELD OF THE INVENTION The invention relates to field programmable gate arrays (FPGAs). The invention particularly relates to a structure and method for configuring static random access memory (SRAM)-based FPGAs. BACKGROUND OF THE INVENTION The first FPGA with programmable logic cells and programmable routing was described by Freeman in U.S. Pat. No. Re. 34,363, which is incorporated herein by reference. An FPGA includes configurable logic blocks and configurable routing, which are programmed by configuration memory cells. The configuration memory cells are typically arranged in an array and are loaded with a bit stream of configuration data. The configuration data is selected to cause the FPGA to perform a desired function. FIG. 1 shows a conventional array of configuration memory cells (i.e., a configuration memory) such as used by Xilinx, Inc., assignee of the present invention. The configuration memory of FIG. 1 is a 16-bit by 16-bit array that includes 256 configuration memory cells. In general, each of the configuration memory cells is identified by a reference character Mx-y, where x and y correspond to the row and column of the configuration memory cell. A typical array of configuration memory cells in a commercial device has on the order of 20,000 to one million memory cells. Thus, the array of FIG. 1 is much smaller than is typically used in a commercial embodiment, but nevertheless shows the structure of prior art configuration memories. To load the configuration memory, the bit stream of configuration data is shifted through a data shift register DSR under control of a clocking mechanism (not shown), until a frame of data (16 bits wide in this example) has been shifted into bit positions DS0 through DS15 of the data shift register DSR. This frame of data is then shifted in parallel on lines D0 through D15 into a column of configuration memory cells addressed by address shift register ASR. Typically, some configuration memory cells are missing from the rows and columns. Missing memory cells are often due to idiosyncrasies in the layout of the configuration memory, or to the lack of need for a particular configuration memory cell in a desired logic scheme that still requires a rectangular array for implementation. Dummy bits are inserted into the bit stream as place holders for these missing memory cells. The column is addressed by shifting a token high bit through the address shift register ASR from bit AS0 to bit AS15, one shift per frame. Each time a frame of configuration data is loaded through data shift register DSR, it is loaded in parallel to the column of memory cells selected by the token high bit. When the token high bit shifts out to the right, it activates a DONE circuit, which indicates that configuration is complete and causes the FPGA to become operational. In a typical FPGA, configuration data is shifted serially into a data shift register, then loaded in parallel into the configuration memory cells. In certain conventional parallel configuration modes, configuration data is loaded onto the device in parallel, and is then serially loaded into a data shift register. The Xilinx XC5200.TM. family of FPGAs has a configuration mode called Express mode, in which configuration data is loaded in parallel (i.e., eight bits at a time) into a data shift register. (See "The Programmable Logic Data Book", pp. 4-54 to 4-78, published July 1996 by Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, hereinafter referred to as the "Xilinx 1996 Data Book".) 
The Express mode enables a configuration bit stream to be loaded at eight times the rate of the above-described conventional configuration modes. However, Express mode is limited to a single bus width of eight bits. Moreover, the configuration bit stream used in Express mode is not compatible with the configuration bit streams used to configure an XC5200 FPGA in other configuration modes (e.g., a serial configuration mode). A purchaser of an FPGA may spend days, weeks, or months developing and perfecting a logic design to be implemented by the FPGA and generating the accompanying configuration bit stream to program the FPGA. Companies such as Xilinx, Inc., which manufacture FPGAs and other programmable devices, continue to develop new device architectures (or device families) with new features. Yet these companies continue to sell devices from the older device families because customers would rather use these older devices than repeat or augment the engineering necessary to generate a different configuration bit stream, which is required to cause a newer device to perform the same function as the older device. This means that the FPGA manufacturer must maintain an inventory of an increasing number of device families and maintain the capability to manufacture many device families. However, operating in this manner is inefficient for the FPGA manufacturer. It would therefore be desirable to make the older device families obsolete, thereby minimizing the number of device families that must be held in inventory. It would also be desirable to minimize the number of device families being manufactured in order to optimize manufacturing capacity. It would further be desirable for newer device families to be programmable with the same configuration bit streams as older device families. One reason new device architectures are continuously arriving on the market is the desire among users for increased device flexibility, size, and speed. Partial reconfigurability, flexible pin allocation, and modified device layout for increased speed are just a few of the innovations only recently introduced to the FPGA marketplace. While available memory-addressing mechanisms provide certain advantages in writing configuration data, there are a number of disadvantages in existing devices, including the difficulty of testing the devices before shipping to users because of the slow read speed of available configuration memory cells. FIG. 1A is a schematic diagram of a conventional five-transistor configuration memory cell M0-0 that includes one access transistor T1 and two CMOS inverters I1 and I2. As is well known in the CMOS design art, each of the two inverters I1 and I2 comprise one PMOS transistor and one NMOS transistor connected in series between power and ground. Inverters I1 and I2 are connected into a loop, thereby forming a latch. This latch is connected to a data line D0 by a pass transistor T1 that is controlled by address line A0. A line Q or QB (or both) extends from memory cell M0-0 to the FPGA logic structure (not shown) to control configuration. Such a structure is described by Hsieh in U.S. Pat. Nos. 4,750,155 and 4,821,233. As used in existing devices, this cell structure enables only relatively slow data readback capability, with a maximum speed for some devices of only 1 MHz. For a discussion on existing readback circuitry, see "The Programmable Logic Data Book", pp. 8-17 to 8-24, published 1993 by Xilinx, Inc. (hereinafter referred to as the "Xilinx 1993 Data Book"). 
It would therefore be desirable to be able to rapidly read configuration memory cells. SUMMARY OF THE INVENTION The present invention provides a novel structure and method for configuring FPGAs. The invention allows bit streams of various bus widths to be loaded into a single FPGA. The invention further allows distribution of multiple copies of bit stream segments without reloading the desired segments, thereby increasing configuration and testing speed and convenience. According to the invention, the device configuration memory is divided into frames, each having a frame address, preferably unique. A configuration register is provided, preferably positioned vertically in the center of the device, and capable of storing one frame of data. Parallel bit stream data is loaded into the configuration register one memory word at a time until the configuration register is full. Bit stream data is then loaded in parallel from the configuration register to the configuration memory, one frame being loaded into one configuration memory column each time the configuration register is fully loaded. In other embodiments, the configuration register is segmented and distributed throughout the device to enable faster partial device configuration. In one embodiment, the same bit stream can be used for multiple device types. A configuration instruction set (CIS) or instruction code is preferably provided at the beginning of a configuration data bit stream, with an opcode being provided in front of every data frame. Each CIS includes information identifying the device type for which the bit stream is intended. Each opcode includes any special instructions for processing or distributing configuration data, such as duplicate distribution. Using the bit stream CIS to define the device type of the device to be configured, preferably in combination with a flexible external interface, enables configuration using bus widths varying from a predetermined maximum width down to a selected fraction thereof, without significantly increasing the number of pins needed to accept configuration data. In effect, the configuration bus width has been de-coupled from the size of the configuration memory frame. A multiplexer array and a configuration register distribute configuration data to the appropriate addressable memory frames in response to opcode contents. Frame addressing can be either coded into the bit stream or controlled by registers. Register reset or preset can be accomplished using a signal embedded in the bit stream's CIS. Bi-directional configuration register logic is preferably incorporated to allow high speed readback of configuration memory data. The configuration memory structure of the invention provides an FPGA different from earlier FPGAs, but having a configuration bit stream compatible with such earlier FPGAs. The structure of the invention also provides a significantly faster configuration rate than in conventional FPGAs. In some embodiments, partial reconfiguration is provided. Another advantage is that configuration data may be read back at substantially the same rate as the rate at which configuration data is input to the FPGA. Further, configuration data transfer is made more efficient by using shadow registers to load the data into memory cells on a frame-by-frame basis with a minimum of delay. Configuration data transfer is also made more efficient by employing a multiplexer array and a configuration register to exploit a wider configuration data transfer bus. 
An FPGA according to one embodiment of the invention has an external interface adapted to connect to different width buses, the size of which may be defined by the configuration bit stream or by dedicated pins. BRIEF DESCRIPTION OF THE DRAWINGS The present invention is illustrated by way of example, and not by way of limitation, in the following figures, in which like reference numerals refer to similar elements. FIG. 1 shows a prior art configuration memory array with a prior art address shift register. FIG. 1A shows a prior art configuration memory cell structure usable in the memory cells of FIG. 1. FIG. 2 is a circuit diagram of a configuration circuit used to configure an FPGA in accordance with one embodiment of the present invention. FIG. 3 is a circuit diagram illustrating the interconnections between a predetermined set of input/output blocks (IOBs) and input data buses in accordance with one embodiment of the present invention. FIG. 4, which consists of FIGS. 4A and 4B, illustrates an input multiplexer of the configuration circuit of FIG. 2 in accordance with one embodiment of the present invention. FIG. 5 is a circuit diagram illustrating an input multiplexer and an input register of the configuration circuit of FIG. 2 in accordance with one embodiment of the present invention. FIGS. 5A and 5B are circuit diagrams illustrating bit slices of an input register in accordance with two embodiments of the present invention. FIG. 6 is a circuit diagram of a 1-bit storage device in a configuration register of the configuration circuit of FIG. 2 in accordance with one embodiment. FIG. 7 is a circuit diagram of a configuration circuit used to configure an FPGA in accordance with another embodiment of the present invention. FIGS. 8A and 8B are state tables that illustrate a 9-state repeated process implemented by a multiplexer array in accordance with one embodiment of the present invention. FIG. 9 is a circuit diagram of the multiplexer array and input circuitry of the configuration circuit of FIG. 7 in accordance with one embodiment of the present invention. FIGS. 10A, 10B, 10C, and 10D are circuit diagrams of multiplexers that make up the multiplexer array of FIG. 9 in accordance with one embodiment of the present invention. FIG. 11 is a state table that illustrates the manner in which the multiplexers of FIGS. 10A-10D route configuration data values in response to control signals. FIG. 12 is a layout diagram of a portion of the multiplexer of FIG. 10A in accordance with one embodiment of the present invention. FIG. 13 is a layout diagram of the multiplexer of FIG. 10A in accordance with one embodiment of the present invention. FIG. 14 is a layout diagram of the multiplexer of FIG. 10A that illustrates a second metal layer, in accordance with one embodiment of the present invention. FIG. 15 is a table illustrating a state sequence for configuring the FPGA in response to a type B bit stream. FIG. 16 is a table illustrating a state sequence for configuring the FPGA in response to a type A bit stream. FIG. 17 is a block diagram of configuration readback circuitry in accordance with one embodiment of the present invention. FIG. 18 is a circuit diagram of a bit-slice of the configuration register and shadow configuration register of FIG. 17 in accordance with one embodiment of the present invention. FIG. 19 is a circuit diagram illustrating the output multiplexer array of FIG. 17 in accordance with one embodiment of the present invention. FIG. 
20 is a block diagram of a multiplexer within the output multiplexer array of FIG. 19 in accordance with one embodiment of the present invention. FIG. 21 is a state table that illustrates the manner in which the multiplexer array of FIG. 19 routes configuration data values in response to applied control signals. FIG. 22 is a circuit diagram of a multiplexer within the output multiplexer array of FIG. 19 in accordance with one embodiment of the present invention. FIG. 23 is a circuit diagram of a 64-to-32 multiplexer in accordance with one embodiment of the present invention. DETAILED DESCRIPTION OF THE DRAWINGS FIG. 2 illustrates a configuration circuit 200 used to configure an FPGA in accordance with one embodiment of the present invention. This circuit 200 includes configuration memory array 1, shadow input register 3, configuration register 4, configuration pre-load bus 5, input register 7, shadow configuration register 8, input multiplexer 9, comparators 100 -10N, frame number registers 110 -11N, frame counter 12, 1-bit storage device 14, and configuration state machine 15. A configuration bit stream is provided to input multiplexer 9 on an input data bus (IDB). In the described embodiment, the FPGA can be configured such that the input data bus IDB has a width of 8 bits, 16 bits, or 32 bits. The 8-bit, 16-bit and 32-bit bit input data buses are referred to as input data buses IDB8, IDB16, and IDB32, respectively. Input data buses IDB8, IDB16, and IDB32 route configuration data bits IDB8 [7:0], IDB16 [15:0], and IDB32 [31:0], respectively. The configuration bit stream is applied to a predetermined set of input/output blocks (IOBs) (e.g., pads) of the FPGA, depending upon the selected width of the input data bus IDB. FIG. 3 is a circuit diagram illustrating the interconnections between a predetermined set of IOBs 300-331 and input data buses IDB8, IDB16, and IDB32 in accordance with one embodiment of the present invention. When the input data bus IDB is selected to have a width of 32-bits, the configuration data is applied as 32-bit words to IOBs 331-300. These IOBs 331-300, in turn, are connected to 32 conductive paths, represented as IDB32 [31:0], to input multiplexer 9. In this 32-bit mode, input multiplexer 9 is set to pass all 32 bits through to input register 7 (and input data buses IDB8 and IDB16 are not routed through input multiplexer 9). When the input data bus IDB is selected to have a width of 16 bits, the configuration data bits IDB16 [15:0] are applied to IOBs 323-308, respectively. These IOBs 323-308, in turn, are connected to 16 conductive paths represented by IDB16 [15:0]. These 16 conductive paths are connected to input multiplexer 9, as illustrated in FIG. 3. In the 16-bit mode, input multiplexer 9 is arranged to pass the 16-bit configuration data on both the upper and lower halves of its 32-bit output. The resulting 32-bit configuration data word is passed through to input register 7. When the input data bus IDB is selected to have a width of 16-bits, IOBs 300-307 and 324-331 can be used for other purposes. When the input data bus IDB is selected to have a width of 8 bits, the configuration data is applied to IOBs 319-312, which, in turn, are connected to 8 conductive paths, represented by IDB8 [7:0]. These 8 conductive paths are connected to input multiplexer 9, as illustrated in FIG. 3. In the 8-bit mode, input multiplexer 9 is arranged to replicate the 8-bit configuration data on the four bytes that make up the 32-bit output of input multiplexer 9. 
This 32-bit output is provided to input register 7. When IDB is selected to have a width of 8-bits, IOBs 300-311 and 320-331 can be used for other purposes. In other embodiments, the configuration data bits are assigned to other predetermined sets of IOBS. For example, when the input data bus IDB is selected to have a width of 16 bits, the configuration data bits IDB16 [15:0] can be applied to either IOBS 315-300 or IOBs 331-316. Similarly, when the input data bus IDB is selected to have a width of 8 bits, the configuration data bits IDB8 [7:0] can be applied to either IOBs 307-300, IOBs 315-308, IOBs 323-316 or IOBs 331-324. The IOBs 300-331 used to connect to the input data bus IDB are preferably placed along a single edge of the FPGA device, shown as a vertical edge 333 in FIG. 3. Configurable logic blocks (CLBS, not shown) are arranged in horizontal rows with one or more IOBs along the vertical edge corresponding to each CLB row. The bits are preferably arranged in numerical order (in the embodiment of FIG. 3, the least significant bit is lowest on the edge). The bits are preferably on contiguous IOBs (excluding power and ground pads, not shown) to ensure convenient connection to user logic inside the FPGA after programming the device. Referring back to FIG. 2, the width of the input data bus IDB is selected by configuration state machine 15 in response to a configuration instruction set (CIS) present at the beginning of the configuration bit stream. Configuration state machine 15 configures the FPGA such that the appropriate input data bus IDB8, IDB16, or IDB32 is formed and connected through input multiplexer 9. In another embodiment, the width of the input data bus IDB is selected in response to a plurality of signals provided on mode pins of the FPGA. FIG. 4, which consists of FIGS. 4A and 4B, illustrates input multiplexer 9 of FIG. 2 in accordance with one embodiment of the present invention. Input multiplexer 9 includes thirty-two 3-to-1 multiplexers 400-431. The control input terminals of each of these multiplexers 400-431 are coupled to receive multiplexer control signals MC[1:0] from configuration state machine 15. The uppermost input terminals of multiplexers 431-400 are coupled to receive configuration data bits IDB32 [31:0], respectively. If multiplexer control signals MC[1:0] are representative of a first state, then multiplexers 431-400 pass the configuration data bits IDB32 [31:0] as output signals OUT[31:0]. The middle input terminals of multiplexers 431-416 are coupled to receive configuration data bits IDB16 [15:0], respectively. Input multiplexer 9 also includes connections (not shown) that route the configuration data bits IDB16 [15:0] to the middle input terminals of multiplexers 415-400, respectively. Thus, configuration data bits IDB16 [15:0] are provided to input multiplexer 9 twice. If multiplexer control signals MC[1:0] are representative of a second state, then multiplexers 431-416 and multiplexers 415-400 pass the configuration data bits IDB16 [15:0] as output signals OUT[31:16] and OUT[15:0], respectively. The lowermost input terminals of multiplexers 431-424 are coupled to receive configuration data bits IDB8 [7:0], respectively. Input multiplexer 9 also includes connections (not shown) that route the configuration data bits IDB8 [7:0] to the lowermost input terminals of multiplexers 423-416, respectively, to the lowermost input terminals of multiplexers 415-408, respectively, and to the lowermost input terminals of multiplexers 407-400, respectively. 
Thus, configuration data bits IDB8 [7:0] are provided to input multiplexer 9 four times. If multiplexer control signals MC[1:0] are representative of a third state, then multiplexers 431-424, multiplexers 423-416, multiplexers 415-408, and multiplexers 407-400 pass the configuration data bits IDB8 [7:0] as output signals OUT[31:24], OUT[23:16], OUT[15:8], and OUT[7:0], respectively. In the foregoing manner, the configuration bit stream is routed through input multiplexer 9 to input register 7. The width of input register 7 is preferably the same as the maximum supported width of input data bus IDB. In the described embodiment, input register 7 has a width of 32 bits. FIG. 5 is a circuit diagram illustrating input register 7, which includes 32 input terminals and 32 output terminals. The 32 input terminals are coupled to receive the output signals OUT[31:0] from input multiplexer 9. The 32 output terminals provide output signals IR[31:0]. Input register 7 is controlled by a clock signal CLK and register enable signals EN[2:0] received from configuration state machine 15. The register enable signals EN[2:0], when asserted, enable various sets of the input terminals of input register 7. When the clock signal CLK is asserted, the signals on the enabled input terminals are loaded into input register 7. Table 1 summarizes the manner in which the output signals OUT[31:0] are loaded into input register 7 in response to the register enable signals EN[2:0].<tb>TABLE 1<tb> Enabled Input Terminals<tb>EN[2:0] of Input Register 7<tb>000 None<tb>001 OUT[31:24]<tb>010 OUT[23:16]<tb>011 OUT[15:8]<tb>100 OUT[7:0]<tb>101 OUT[31:16]<tb>110 OUT[15:0]<tb>111 OUT[31:0] A 32-bit configuration data value is loaded into input register 7 by providing an enable signal EN[2:0] having a value of "111" and then asserting the clock signal CLK. A 32-bit configuration data value is thereby loaded in a single clock cycle. A pair of 16-bit configuration data values are loaded into input register 7 as follows. A first 16-bit configuration data value is loaded into input register 7 by providing an enable signal EN[2:0] having a value of "101" and then asserting the clock signal CLK. A second 16-bit configuration data value is loaded into input register 7 by providing an enable signal EN[2:0] having a value of "101 " and then asserting the clock signal CLK. A pair of 16-bit configuration data values are thereby loaded in two clock cycles. Note that the first 16-bit configuration data value must be applied to input multiplexer 9 during the first clock cycle, and that the second 16-bit configuration data value must be applied to input multiplexer 9 during the second clock cycle. Four 8-bit configuration data values are loaded into input register 7 in a similar manner by providing enable signals EN[2:0] having values of "001", "010", "011", and "100" and asserting the clock signal CLK four times. Regardless of the width of input data bus IDB, configuration data is loaded into input register 7 until all 32-bits of input register 7 have been loaded. FIG. 5A is a circuit diagram of three bit-slices of the least significant bits of input register 7 in accordance with one embodiment of the present invention. These bit slices include flip-flops 501-503 and multiplexers 511-513. When the enable signals EN[2:0] have values of 100, 110 or 111, multiplexers 511-513 pass the signals OUT[0:2] to flip-flops 501-503, respectively. 
For all other enable signals, multiplexers 511-513 route the last values stored by flip-flops 501-503 to the input terminals of flip-flops 501-503, respectively. The other 29 bit slices of input register 7 are identical to the illustrated bit slices (other than in their response to enable signals EN[2:0]), and operate in accordance with Table 1. In another embodiment of the present invention, the input data bus IDB can also be configured to have a width of one bit. In this embodiment, input register 7 is modified to have a mode in which input register 7 operates as a shift register. Each configuration data bit is sequentially shifted into input register 7 from the least significant bit position of the register until 32 configuration data values have been loaded into input register 7. FIG. 5B is a circuit diagram of three bit-slices of the least significant bits of input register 7 that enables serial operation in accordance with one embodiment of the present invention. Because the circuitry of FIG. 5B is similar to the circuitry of FIG. 5A, similar elements in FIGS. 5A and 5B are labeled with similar reference numbers. Thus, the bit slices of FIG. 5SB include flip-flops 501-503, multiplexers 511-513 and multiplexers 521-523. Multiplexers 521-523 are controlled by an additional enable signal EN[3]. When the enable signal EN[3] has a first logic value (e.g., logic low), multiplexers 521-523 pass the output signals received from multiplexers 511-513, respectively, to flip-flops 501-503, respectively. Under these conditions, the input register of FIG. 5B operates in the same manner as the input register of FIG. 5A. However, when the enable signal EN[3] has a second logic value (e.g., logic high), multiplexer 521 routes an initial serial input data bit SER_IN[x] to the input terminal of flip-flop 501. During the next clock cycle, the serial input data bit SER_IN[x] is routed through multiplexer 522 to the input terminal of flip-flop 502. Also during this clock cycle, the next serial input data bit SER_IN[y] is routed through multiplexer 521 and into flip-flop 501. This process continues until 32 configuration data bits have been loaded into input register 7. In one embodiment, the portion of configuration state machine 15 that controls input multiplexer 9, input register 7, and shadow input register 3 is implemented separately from the rest of configuration state machine 15. Returning now to FIG. 2, when input register 7 is full, configuration state machine 15 initiates a load from input register 7 to shadow input register 3 by asserting a clock signal on the clock input terminal of shadow input register 3. Input register 7 provides a 32-bit output signal IR[31:0] to shadow input register 3. Once the contents of input register 7 have been loaded into shadow input register 3, configuration state machine 15 begins overwriting the configuration data in input register 7 with new configuration data. At this point in the configuration process, 32 bits of configuration data are ready in shadow input register 3 for loading into configuration register 4. Shadow input register 3 provides a 32-bit output signal on bus S[31:0]. In the present embodiment, the FPGA employs an array of configurable logic blocks (CLBs) arranged in tiles, as well as input/output blocks (IOBs) located around the periphery of the FPGA. CLBS and IOBs are collectively referred to as configurable blocks. 
Each 1-bit wide slice of configuration memory cells that extends along an entire column of configurable blocks is referred to as a configuration memory frame. (In another embodiment, a configuration memory frame comprises only a portion of a column of configurable blocks.) In the described embodiment, each configurable block is 18 configuration memory cells tall. As a result, the number of configuration data bits in a configuration memory frame is a multiple of 18. However, in the present embodiment shadow input register 3 is 32 bits wide, and therefore the number of bits is not a multiple of 18. It is therefore necessary to alter the width of the data provided by shadow input register 3 to accommodate the width of the configuration memory frames. Configuration register 4 is a shift register that includes a set of 18 registers for each row of configurable blocks in the FPGA. For example, configuration register 4 includes register sets 13A and 13B. The 18 configuration data values loaded into register set 13A are used to configure the first row of configurable blocks, and the 18 configuration data values loaded into register set 13B are used to configure the last row of configurable blocks. Eighteen of the 32 configuration data values stored in shadow input register 3 are routed to configuration pre-load bus 5. From configuration pre-load bus 5, these 18 configuration data values are shifted into register set 13B under the control of configuration state machine 15. The 14 bits that are not routed to configuration pre-load bus 5 need not be completely wasted, as some of these 14 bits can be used for accuracy testing. FIG. 6 (inset on FIG. 2) is a detailed diagram of a 1-bit storage device 14 in register set 13B. All of the 1-bit storage devices in configuration register 4 are substantially identical to storage device 14. In the described embodiment, storage device 14 is a D flip-flop having a D input terminal coupled to receive a configuration data value (from configuration pre-load bus 5 or the Q output terminal of a lower adjacent register), a Q output terminal coupled to shadow configuration register 8 and a D input terminal of an upper adjacent register, and a clock input terminal (shown as a triangle in FIG. 6) coupled to configuration state machine 15. To load the 18 configuration data values on pre-load bus 5 into configuration register 4, configuration state machine 15 asserts a clock signal on the clock input terminals of the storage devices in configuration register 4. Each time a new set of 18 configuration data values is provided on pre-load bus 5, configuration state machine 15 clocks the configuration register, thereby shifting the configuration data values up toward register set 13A. This cycle continues until configuration register 4 is full of configuration data values. In one embodiment, configuration register 4 is divided into a plurality of configuration sub-registers that are laid out across a corresponding plurality of logic blocks, in order to reduce the distance between the 1-bit storage devices in configuration register 4 and the corresponding configurable blocks. When configuration register 4 is full, configuration state machine 15 asserts a clock signal on the clock input terminal of shadow configuration register 8, thereby loading the configuration data values stored in configuration register 4 into shadow configuration register 8. Shadow configuration register 8 has the same number of bits as configuration register 4. 
After shadow configuration register 8 has been loaded, configuration state machine 15 can begin loading new configuration data values into configuration register 4 (for programming the next column of configurable blocks). Therefore, the configuration process does not have to pause while the frame of configuration data is written to a configuration memory frame. Instead, this write step can take as long as it takes configuration register 4 to be re-loaded with the next frame of configuration data. Shadow configuration register 8 is coupled to each of the configuration memory frames F0 -FN of configuration memory array 1 as illustrated. Shadow configuration register 8 drives the frame of configuration data across the device to where it is clocked into a selected configuration memory frame. A frame of configuration data is written from shadow configuration register 8 to a configuration memory frame F0 -FN of configuration memory array 1 when a configuration clock goes high and the corresponding enable signal (EN) is active. Configuration state machine 15 selects one of the configuration memory frames F0 -FN by causing an enable signal to be applied to one of the configuration memory frames. In the described embodiment, the enable signal is generated as follows. Configuration state machine 15 provides a clock signal that increments frame counter 12. The count value generated by frame counter 12 is provided to input terminals of comparators 100 -10N. The other input terminals of comparators 100 -10N are coupled to frame number registers 110 -11N, respectively. Each of frame number registers 110 -11N is programmed to provide a unique address to its associated comparator. When the count value of frame counter 12 matches the contents of a particular frame number register, the associated comparator asserts an enable signal. This enable signal is provided to an associated configuration memory frame. The addressing of configuration memory frames is described in more detail by Ong et al in U.S. Pat. No. 5,821,772, which is incorporated by reference. After the enable signal is asserted, configuration state machine 15 asserts a clock signal on the clock input terminals of each configuration memory frame. As a result, the configuration data values are written to the selected configuration memory frame. In one embodiment, the frame number addresses are hard coded instead of being stored in frame number registers 110 -11N. When frame number registers 110 -11N are used, each register has to be initialized before configuration can commence. Ong et al. in U.S. Pat. No. 5,821,772, which is referenced above, describes how this initialization is done by the first block of writes to the device. However, initialization could also be performed by a reset signal initializing each register to a unique value. In one variation, a plurality of frame number registers 110 -11N are loaded with the same addresses, thereby enabling the same data to be written to many configuration memory frames at the same time. This feature helps decrease the time needed to test a device. Second Embodiment of the Invention FIG. 7 illustrates a configuration circuit 600 used to configure an FPGA in accordance with another embodiment of the present invention. Similar elements in FIGS. 2 and 7 are labeled with similar reference numbers. 
Thus, circuitry 600 includes input multiplexer 9, input register 7, shadow input register 3, shadow configuration register 8, configuration memory array 1, frame number registers 110 -11N comparators 100 -10N frame counter 12, and configuration state machine 15. In configuration circuit 600, configuration pre-load bus 5 of circuit 200 is replaced by multiplexer array 62. In addition, configuration register 4 of circuit 200 is replaced by configuration register 64. Configuration register 64 includes 36-bit register sets 63A and 63B (compared to 18-bit register sets 13A and 13B in configuration register 4). The width of configuration register 64 in this embodiment is 36 bits, because 36 is the smallest whole multiple of the CLB memory cell height (18 bits) that is greater than the maximum supported input data bus width of 32 bits. As described in more detail below, configuration state machine 15 of FIG. 7 is more complicated than configuration state machine 15 of FIG. 2. Circuit 200 of FIG. 2 provides a simple solution to the requirement for resolving incompatible bus widths between shadow input register 3 and configuration register 4. The solution of FIG. 2 is relatively easy to implement and avoids the need to deal with bit stream compatibility requirements. However, the configuration bit stream is almost twice as large as it needs to be and the bus bandwidth is not fully utilized. The embodiment illustrated in FIG. 7 is more complex, but does not require unnecessary bits in the configuration bit stream, thereby fully utilizing the bus bandwidth. In the embodiment of FIG. 7, multiplexer array 62 is coupled to receive incoming configuration data from both 32-bit input register 7 and 32-bit shadow input register 3. The input terminals of shadow input register 3 are driven from the output terminals of input register 7. Every configuration data word loaded into input register 7 is passed on to shadow input register 3 in the subsequent load cycle. Multiplexer array 62 receives 64 configuration data bits from 32-bit input register 7 and 32-bit shadow input register 3. As described in more detail below, configuration state machine 15 controls multiplexer array 62 to route 36 of these 64 configuration data bits to configuration register 64. Configuration state machine 15 then provides a clock signal to configuration register 64, thereby causing the 36 selected configuration data bits to be loaded into configuration register 64. This loading operation proceeds in the manner described above in connection with configuration circuit 200. After configuration register 64 has been filled with configuration data, this configuration data is loaded into configuration memory array 1 in the manner described above in connection with configuration circuit 200. Multiplexer array 62 is now described. FIGS. 8A and 8B illustrate state tables 700A and 700B, respectively. State table 700A defines the manner in which input register 7 and shadow input register 3 are loaded with 32-bit configuration data values A[31:0] to U[31:0] during a first 21 states. State table 700B defines the manner in which multiplexer array 62 routes these configuration data values during these 21 states. Note that state tables 700A and 700B define a 9-state repeated process that is implemented under the control of configuration state machine 15. (State machines are well known in the art of IC design; therefore, state machine 15 is not described in detail.) 
As will become apparent in view of the subsequent disclosure, this 9-state process requires each bit of configuration register 64 to be fed by an 8-to-1 multiplexer. Thus, multiplexer array 62 includes thirty-six 8-to-1 multiplexers (not shown). These multiplexers are implemented in accordance with state table 700B. The 36 bits provided to configuration register 64 are designated CR[35:0]. During the initial state (State 0), a first 32-bit configuration data value A[31:0] is loaded into input register 7. During the next state (State 1), all 32-bits of this configuration data value A[31:0] are loaded from 32-bit input register 7 to shadow input register 3, and a second configuration data value B[31:0] is loaded into input register 7. Also during State 1, the configuration data values A[31:0] and B[31:0] begin to propagate through multiplexer array 62. Note that configuration register 64 is not clocked during State 0 or State 1, as valid data is not yet available from multiplexer array 62. This lack of a clock pulse is indicated by the letter "N" appearing in the "Load" column at the right edge of FIG. 8B. During the next state (State 2), multiplexer array 62 provides the configuration data values A[31:0] and B[31:28] to configuration register 64. During State 2, these 36 configuration data values are loaded into configuration register 64 as a 36-bit configuration data word CR[35:0]. This clocking step is indicated by the letter "Y" appearing in the "Load" column at the right edge of FIG. 8B. Also during State 2, the second configuration data value B[31:0] is loaded from input register 7 to shadow input register 3. In addition, a third configuration data value C[31:0] is loaded into input register 7. During the next state (State 3), multiplexer array 62 provides the configuration data values B[27:0] and C[31:24] to configuration register 64. During State 3, these 36 configuration data values are loaded into configuration register 64 as a 36-bit configuration data word CR[35:0]. Also during State 3, the third configuration data value C[31:0] is loaded from input register 7 to shadow input register 3. In addition, a fourth configuration data value D[31:0] is loaded into input register 7. The above-described sequence is continued in the manner defined by state tables 700A and 700B. Note that during State 8, a ninth configuration data value I[31:0] is loaded into input register 7. Also note that during the next state (State 0'), this entire ninth configuration data value I[31:0] is loaded into configuration register 64 (along with the remaining bits from the eighth configuration data value H[31:0]). Thus, at the end of State 0', nine 32-bit configuration data values have been loaded into configuration register 64 as eight 36-bit configuration data values. Consequently, all of the configuration data bits are used (none are wasted). During the next state (State 1'), configuration register 64 is not loaded because it takes an additional cycle to present the next two configuration data values (J[31:0] and K[31:28]) to multiplexer array 62. FIG. 8B clearly identifies the 9 inputs to each of the thirty-six 8-to-1 multiplexers present in multiplexer array 62. For example, the multiplexer that provides bit CR[35] of configuration register 64 is coupled to bit 31 of input register 7 (State 2) or bits 27, 23, 19, 15, 11, 7, and 3 of shadow input register 3 (States 3, 4, 5, 6, 7, 8, and 0', respectively). 
The thirty-six 8-to-1 multiplexers of multiplexer array 62 can be efficiently laid out as described below with reference to FIGS. 12-14. FIG. 9 is a circuit diagram of multiplexer array 62 in accordance with one embodiment of the present invention. Multiplexer array 62 includes four 16-to-9 multiplexers 800-803. As described in more detail below, each of 16-to-9 multiplexers 800-803 consists of nine 8-to-1 multiplexers. Multiplexers 800-803 are laid out in a compact manner, thereby enabling the efficient implementation of multiplexer array 62. As described above, the 32-bit configuration data values are provided by shadow input register 3 and input register 7 on busses S[31:0] and IR[31:0], respectively. (The terms "S[31:0]" and "IR[31:0]" are used herein to describe both the corresponding busses and the signals carried on the busses.) The bits of these configuration data values S[31:0], IR[31:0] are provided to the input terminals of multiplexers 800-803 as set forth below in Table 2 (and in FIG. 9).<tb>TABLE 2<tb>Multiplexer Input Signals<tb>800 IR[0, 4, 8, 12, 16, 20, 24, 28]<tb> S[0, 4, 8, 12, 16, 20, 24, 28]<tb>801 IR[1, 5, 9, 13, 17, 21, 25, 29]<tb> S[1, 5, 9, 13, 17, 21, 25, 29]<tb>802 IR[2, 6, 10, 14, 18, 22, 26, 30]<tb> S[2, 6, 10, 14, 18, 22, 26, 30]<tb>803 IR[3, 7, 11, 15, 19, 23, 27, 31]<tb> S[3, 7, 11, 15, 19, 23, 27, 31] Thus, each of multiplexers 800-803 receives every fourth bit of configuration data values S[31:0] and IR[31:0]. As described above, the 36-bit configuration data values routed from multiplexer array 62 to configuration register 64 are labeled CR[35:0]. The bits of these configuration data values CR[35:0] are routed from multiplexers 800-803 to configuration register 64 as set forth below in Table 3 (and in FIG. 9).<tb>TABLE 3<tb>Multiplexer Output Signals<tb>800 CR[0, 4, 8, 12, 16, 20, 24, 28, 32]<tb>801 CR[1, 5, 9, 13, 17, 21, 25, 29, 33]<tb>802 CR[2, 6, 10, 14, 18, 22, 26, 30, 34]<tb>803 CR[3, 7, 11, 15, 19, 23, 27, 31, 35] Thus, each of multiplexers 800-803 provides every fourth bit of configuration data value CR[35:0]. Multiplexers 800-803 are controlled by configuration state machine 15. As described in more detail below, each of multiplexers 800-803 shares the same eight control lines. As a result, configuration state machine 15 controls the entire multiplexer array 62 using only eight control signals, further contributing to the efficiency of multiplexer array 62. FIGS. 10A, 10B, 10C, and 10D are circuit diagrams of multiplexers 800, 801, 802, and 803, respectively, in accordance with the present embodiment. Each of 16-to-9 multiplexers 800-803 includes nine 8-to-1 multiplexers. More specifically, multiplexers 800, 801, 802 and 803 include multiplexers 901-909, 911-919, 921-929, and 931-939, respectively. Multiplexers 901-909, 911-919, 921-929, and 931-939 are coupled to receive configuration data values IR[31:0] and S[31:0] as illustrated. Each of multiplexers 901-909, 911-919, 921-929, and 931-939 has eight input terminals. The rightmost input terminal of each multiplexer is defined as the first input terminal of the multiplexer, and the leftmost input terminal of each multiplexer is defined as the eighth input terminal of the multiplexer. The intermediate input terminals are defined as consecutive input terminals between the rightmost and leftmost input terminals (e.g., the third input terminal from the right is the third input terminal). 
Each of multiplexers 901-909, 911-919, 921-929, and 931-939 is controlled by the same eight control signals CTRL[7:0] (not shown). These eight control signals CTRL[7:0] are controlled by configuration state machine 15 to have eight different states. In each of the eight states, one and only one of the control signals CTRL[7:0] is asserted. (In another embodiment, three control signals are encoded to select the eight different states.) When the first control signal CTRL[0] is asserted, each of multiplexers 901-909, 911-919, 921-929, and 931-939 passes the input signal applied to its first input terminal. When the eighth control signal CTRL[7] is asserted, each of multiplexers 901-909, 911-919, 921-929, and 931-939 passes the input signal applied to its eighth input terminal. Table 4 summarizes the manner in which multiplexers 901-909, 911-919, 921-929, and 931-939 operate in response to control signals CTRL[7:0].<tb>TABLE 4<tb> Enabled Input Terminal of Multiplexers<tb>CTRL[7:0] 901-909, 911-919, 921-929, 931-939<tb>0000 0001 1st Input Terminal (rightmost)<tb>0000 0010 2nd Input Terminal<tb>0000 0100 3rd Input Terminal<tb>0000 1000 4th Input Terminal<tb>0001 0000 5th Input Terminal<tb>0010 0000 6th Input Terminal<tb>0100 0000 7th Input Terminal<tb>1000 0000 8th Input Terminal (leftmost) FIG. 11 is a table 1111 that illustrates the manner in which multiplexers 800-803 route the configuration data values IR[31:0] and S[31:0] in response to the control signals CTRL[7:0]. Note that, as expected, multiplexers 800-803 route the configuration data values from input register 7 and shadow input register 3 in a manner consistent with state table 700B of FIG. 8B. Table 5 illustrates the values of the control signal CTRL[7:0] required to route the configuration data values in the manner defined by state table 700B (FIG. 8B).<tb> TABLE 5<tb> State CTRL[7:0]<tb> State 0 Don't Care<tb> State 1 0000 0001<tb> State 2 0000 0010<tb> State 3 0000 0100<tb> State 4 0000 1000<tb> State 5 0001 0000<tb> State 6 0010 0000<tb> State 7 0100 0000<tb> State 8 1000 0000<tb> State 0' Don't Care<tb> State 1' 0000 0001<tb> State 2' 0000 0010<tb> State 3' 0000 0100<tb> State 4' 0000 1000<tb> State 5' 0001 0000<tb> State 6' 0010 0000<tb> State 7' 0100 0000<tb> State 8' 1000 0000<tb> State 0" Don't Care<tb> State 1" 0000 0001<tb> State 2" 0000 0010 The loading of configuration register 64 through multiplexer array 62 is now described. As described above in connection with FIGS. 8A and 8B, during State 1 the first and second 32-bit configuration data values A[31:0] and B[31:0] are loaded into shadow input register 3 and input register 7, respectively. As a result, during State 1, the configuration data value S[31:0] is equal to the first configuration data value A[31:0], and the configuration data value IR[31:0] is equal to the second configuration data value B[31:0]. Also during State 1, the control signal CTRL[7:0] is controlled to have a value of 0000 0001 (i.e., CTRL[0] is asserted). As a result, the configuration data bits on the first input terminals of multiplexers 901-909, 911-919, 921-929, and 931-939 (i.e., A[31:0] and B[31:28]) are provided to configuration register 64 during State 1. Configuration register 64 is clocked at the beginning of State 2, thereby loading the configuration data bits on the first input terminals of multiplexers 901-909, 911-919, 921-929, and 931-939 (i.e., A[31:0] and B[31:28]) into configuration register 64. As described in connection with FIGS. 
8A and 8B, during State 2 the second and third 32-bit configuration data values B[31:0] and C[31:0] are loaded into shadow input register 3 and input register 7, respectively. As a result, during State 2 the configuration data value S[31:0] is equal to the second configuration data value B[31:0], and the configuration data value IR[31:0] is equal to the third configuration data value C[31:0]. Also during State 2, the control signal CTRL[7:0] is controlled to have a value of 0000 0010 (i.e., CTRL[l] is asserted). As a result, the configuration data bits on the second input terminals of multiplexers 901-909, 911-919, 921-929, and 931-939 (i.e., B[27:0] and C[31:24]) are provided to configuration register 64 during State 2. Configuration register 64 is clocked at the beginning of State 3, thereby loading the configuration data bits on the second input terminals of multiplexers 901-909, 911-919, 921-929, and 931-939 (i.e., B[27:0] and C[31:24]) into configuration register 64. The above-described process is repeated for the various states as defined by state tables 700A and 700B and Table 5. One advantage of multiplexers 901-909, 911-919, 921-929, and 931-939 is that they can be laid out in an area-efficient manner. FIG. 12 is a layout diagram of a portion of multiplexer 901, which includes gate electrodes 1000-1007 and source/drain regions 1011-1022. In the described embodiment, the n-type source/drain regions 1011-1022 are fabricated in a monocrystalline silicon substrate in accordance with well known semiconductor processing techniques. Source/drain regions 1011-1022 are stacked in a column along a first axis. Gate electrodes 1000-1007 extend substantially in parallel to one another along a second axis. The second axis is perpendicular to the first axis. Gate electrodes 1000-1007 are coupled to receive control signals CTRL[0]-CTRL[7], respectively. It is understood that gate electrodes 1000-1007 are located on a gate oxide layer (Sio2) that is formed over the silicon substrate. It is further understood that p-type channel regions are located in the substrate beneath the gate electrodes 1000-1007. These elements are formed in accordance with conventional semiconductor processing techniques. Contacts, which are illustrated as squares containing X's, provide electrical contact to source/drain regions 1011-1022 at the upper surface of the silicon substrate. These contacts extend upward from the substrate to contact a first conductive layer (not shown in FIG. 12), which overlies (and is electrically insulated from) the gate electrodes 1000-1007. The first conductive layer is described in more detail in connection with FIG. 13. Multiplexer 901 uses an interleaved transistor configuration. Thus, input signal IR[0] is provided to source/drain region 1022 and input signal IR[4] is provided to source/drain region 1020. From source/drain region 1022, the input signal IR[0] can be transmitted to source/drain region 1021 as the output signal CR[0] by asserting control signal CTRL[7] on gate electrode 1007. Similarly, from source/drain region 1020, the input signal IR[4] can be transmitted to source/drain region 1021 as the output signal CR[0] by asserting control signal CTRL[6] on gate electrode 1006. Interleaving the transistors of multiplexer 901 in this manner minimizes the layout area of multiplexer 901. Multiplexers 902-909, 911-919, 921-929, and 931-939 have layouts identical to multiplexer 901. FIG. 13 is a layout diagram of multiplexer 800, which includes multiplexers 901-909. 
Multiplexers 901-909 are laid out adjacent to one another, thereby providing a rectangular layout area. Multiplexers 901-909 share gate electrodes 1000- 1007, which were described above in connection with FIG. 12. FIG. 13 illustrates the traces of a first metal layer, which are shown as shaded regions. Connections between the first metal layer and the underlying source drain regions are illustrated as squares containing X's. Input signals IR[0, 4, 8, 12, 16, 20, 24, 28] and S[0, 4, 8, 12, 16, 20, 24, 28] are applied to the traces of the first metal layer as illustrated. In general, the first metal layer includes serpentine traces (which receive the input signals), and square traces (which provide the output signals). The serpentine traces of the first metal layer shift upward one bit position as they extend from the left to the right. This configuration enables multiplexer 800 to route the input signals in the manner described above in state tables 700A-700B (FIGS. 8A and 8B), Table 1111 (FIG. 11) and Table 5. Each of multiplexers 901-909 includes a set of four square traces, which are aligned along the first (vertical) axis. Each set of square traces is commonly connected by a trace of a second metal layer (not shown in FIG. 13). The output signals CR[0, 4, 8, 12, 16, 20, 24, 28, 32] are routed on these traces of the second metal layer. FIG. 14 is a layout diagram of multiplexer 800 that illustrates the second metal layer. The second metal layer includes traces that extend along the first (vertical) axis. Connections between the second metal layer and the first metal layer are illustrated as squares containing +'s. Connections between the second metal layer, the first metal layer, and a source drain region are therefore illustrated as squares containing both X's and +'s. Selected traces of the second metal layer enable input signals IR[0, 4, 8, 12, 16, 20, 24, 28 ] to be routed to multiplexer 800 from the bottom of the multiplexer structure. This enables all of the input signals to multiplexer 800 to be received at one edge of the multiplexer structure. Selected traces of the second metal layer route the output signals to the top of multiplexer 800. As a result, the output signals CR[0, 4, 8, 12, 16, 20, 24, 28, 32] of multiplexer 800 are provided at the edge of the multiplexer structure opposite from the input signals. The structure of multiplexer 800 is repeated for multiplexers 801-803. In one embodiment, multiplexers 800-803 are positioned end-to-end, with gate electrodes 1000-1007 extending in parallel across all four multiplexers 800-803. This placement advantageously results in an area-efficient layout for multiplexers 800-803. Third Embodiment of the Invention A third embodiment of the invention includes a circuit for controlling the configuration of an FPGA with CLBs having a height of twenty configuration memory cells (20-cell CLBS). Eighteen of the twenty configuration memory cells are identical to the eighteen configuration memory cells described above in connection with configuration circuits 200 and 600. The two additional rows of configuration memory cells in each CLB are provided to control new (additional) functions within each CLB. The FPGA of the third embodiment can be configured with a bit stream intended for the 20-cell CLB. This bit stream is referred to as a "type B bit stream". When the FPGA is programmed with a type B bit stream, the new features controlled by the two additional rows of configuration memory cells are enabled. 
Alternatively, the FPGA can be configured with a bit stream intended for CLBs having a height of 18 configuration memory cells (18-cell high CLBS). This bit stream, which was described above in connection with circuits 200 and 600, is referred to as a "type A bit stream". When the FPGA is programmed with a type A bit stream, the new features controlled by the two additional rows of configuration memory cells are not enabled, and the FPGA operates as if it has 18-cell high CLBS. In the third embodiment, two different state sequences are supported, one for a type A bit stream, and another for a type B bit stream. The state sequences for type B and type A bit streams are shown in FIGS. 15 and 16, respectively. The circuitry required by the third embodiment is similar to circuitry 600, which was described above in connection with FIGS. 7-14. However, because the CLBs have a height of 20 configuration memory cells in the third embodiment, the configuration register 64 is modified to have a width of 40 bits (i.e., the lowest multiple of the CLB height greater than the maximum input data bus width of 32-bits). Multiplexer array 62 must also be modified to support the state sequences described below. FIG. 15 illustrates the state sequence for configuring the FPGA in response to a type B bit stream. It is noted that the beginning of the type B bit stream contains a configuration instruction set (CIS), which is provided to configuration state machine 15 and which identifies the bit stream as a type B bit stream. In response to this CIS, the configuration state machine 15 follows the state sequencing of FIG. 15. The state sequence shown in FIG. 15 is similar to that of FIG. 8B, except that type B bit stream data is loaded to a 40-bit configuration register (rather than a 36-bit configuration register). As a result, only five states are required to load five 32-bit configuration data values. FIG. 16 illustrates the state sequence for configuring the FPGA in response to a type A bit stream. It is noted that the beginning of the type A bit stream contains a CIS, which is provided to configuration state machine 15 and which identifies the bit stream as a type A bit stream. In response to this CIS, configuration state machine 15 follows the state sequencing of FIG. 16. The nine state sequence shown in FIG. 16 is similar to that of FIG. 8B. However, the configuration data bits are only loaded into bit locations CR[37:20] and CR[17:0] of the 40-bit configuration register. The configuration state machine loads bit locations CR[39:38] and CR[19:18] of the configuration register with values (identified by asterisks), that disable the new functions provided by the two new rows of configuration memory cells in each CLB. Bit locations CR[39:38] and CR[19:18] correspond to the two new rows of configuration memory cells in a pair of CLBs. In one embodiment, the new logical functions in the FPGA are designed to be enabled or disabled by the logic value in one or more configuration memory cells. Preferably, all logic high (or all logic low) values consistently enable all of the new functions throughout the FPGA. When loading a Type A bit stream in a Type B FPGA, the configuration state machine simply loads all logic low (or all logic high) bits into bit locations CR[39:38] and CR[19:18], and the functions are thereby disabled. 
An advantage of this embodiment is that when new bit stream types or new bus widths must be supported in new generations of FPGAs, the interface can be adapted by modifying the configuration state machine, the multiplexer array, and the configuration register. The internal configuration logic of the FPGA need not be changed. Fast Readback Implementation In one embodiment, circuitry is provided to enable the configuration data values stored in memory array 1 to be read back in a fast and efficient manner. In the described embodiment, each configurable block has a height of 18 memory cells. FIG. 17 is a block diagram of such readback circuitry 1700, which includes shadow configuration register 8, configuration register 64, output multiplexer array 71, save register 72, 64-to-32 multiplexer 73 and output register 74. In general, readback circuitry 1700 operates as follows. Initially, the configuration data values stored in one of configuration memory frames F0 -FN of configuration memory array 1 are loaded into shadow configuration register 8 (shown as path 1701). These configuration data values are then loaded into configuration register 64 (shown as path 1702). From configuration register 64, the configuration data values are shifted up and into output multiplexer array 71 as a 36-bit output configuration data value O[35:0]. Output multiplexer array 71 is controlled to route the output configuration data value O[35:0] (along with logic 0 bits, as described in more detail below) as two 32-bit configuration data values DA[31:0] and DB[31:0]. Save register 72 saves the configuration data value DB[31:0] and provides saved data value SDB[31:0]. 64-to-32 multiplexer 73 is controlled to route thirty-two of the configuration data bits from configuration data values DA[31:0] and SDB[31:0] as the output configuration data value D[31:0]. The output configuration data value D[31:0] is loaded into output register 74 and then routed out of the FPGA. These circuits are described in more detail below. Because the readback operation is performed largely in parallel, the speed of the readback operation is much faster than in conventional FPGAS. FIG. 18 is a circuit diagram of a bit-slice of shadow configuration register 8 and configuration register 64. This slice includes tri-state inverter 1801, inverter 1802, flip-flop 1803, 3-to-1 multiplexer 1804, flip-flop 1805 and 3-to-1 multiplexer 1806. Although only a single bit-slice is described, it is understood that all of the bit slices of shadow configuration register 8 and configuration register 64 operate in the same manner. Configuration data values are written to memory array 1 in the manner described above in connection with FIGS. 7-14. For example, a configuration data bit from the previous register set is shifted into flip-flop 1805 through multiplexer 1806. After the entire configuration register 64 is loaded, this configuration data bit is shifted into flip-flop 1803 through multiplexer 1804. The tri-state buffer signal TS is then enabled, thereby allowing the configuration data bit to be transferred from flip-flop 1803, through inverters 1801-1802, into configuration memory array 1. The readback operation takes place as follows. Configuration state machine 15 de-asserts the tri-state enable signal TS and addresses one of the configuration memory frames F0 -FN in the manner described above in connection with FIG. 7. 
The addressed configuration data bit is thereby routed from the addressed configuration memory frame to the "10" input terminal of multiplexer 1804. Configuration state machine 15 controls multiplexer 1804 to route this configuration data bit to the D input terminal of flip-flop 1803. Configuration state machine 15 then asserts a clock signal CLK that causes the configuration data bit to be latched in flip-flop 1803. Configuration state machine 15 then applies a "00" signal to multiplexer 1804, thereby causing the configuration data bit to be routed back to the D input terminal of flip-flop 1803 during subsequent clock cycles. Configuration state machine 15 also applies a "01" value to multiplexer 1806, thereby causing the configuration data bit to be routed through multiplexer 1806 to the D input terminal of flip-flop 1805. The CLK signal is again asserted (under the control of configuration state machine 15), thereby latching the configuration data bit into flip-flop 1805. The above-described process loads one bit into each of the 1-bit registers in configuration memory 64, thereby filling configuration memory 64. After configuration register 64 has been filled with configuration data values, the uppermost register set applies a 36-bit configuration data value O[35:0] to output multiplexer array 71. After the first configuration data value O[35:0] has been processed (as described in more detail below), configuration state machine 15 applies a "10" value to multiplexer 1806 and asserts the CLK signal. At this time, the configuration data bit from the lower adjacent register set is shifted up into flip-flop 1805. More generally, the configuration data values in configuration register 64 are all shifted upward by one register set. At this time, a second configuration data value O[35:0] is provided to output multiplexer array 71. This second configuration data value is then processed, as described in more detail below. This process continues until all of the configuration data values in configuration register 64 are shifted out the top of configuration register 64. Because configuration data values are read from configuration memory array 1 in parallel, and because the configuration data values are read out of configuration register 64 in parallel, the readback operation is relatively fast when compared to conventional readback schemes. FIG. 19 is a circuit diagram illustrating output multiplexer array 71. Output multiplexer array 71 includes four 9-to-16 multiplexers 1100-1103. Configuration register 64 provides the 36-bit configuration data value O[35:0] to multiplexers 1100-1103. More specifically, the bits of the configuration data value O[35:0] are provided to the input terminals of multiplexers 1100-1103 as set forth below in Table 6 (and in FIG. 19).<tb>TABLE 6<tb>Multiplexer Input Signals<tb>1100 O[0, 4, 8, 12, 16, 20, 24, 28, 32]<tb>1101 O[1, 5, 9, 13, 17, 21, 25, 29, 33]<tb>1102 O[2, 6, 10, 14, 18, 22, 26, 30, 34]<tb>1103 O[3, 7, 11, 15, 19, 23, 27, 31, 35] Thus, each of multiplexers 1100-1103 receives every fourth bit of configuration data value O[35:0]. Each of multiplexers 1100-1103 provides two 8-bit output signals in response to the applied configuration data bits. These 8-bit output signals are a combination of the applied configuration data bits O[35:0] and logic "0" values. The output signals provided by multiplexers 1100-1103 are set forth below in Table 7 (and in FIG. 
19).<tb>TABLE 7<tb>Multiplexer Output Signals<tb>1100 DA[0, 4, 8, 12, 16, 20, 24, 28]<tb> DB[0, 4, 8, 12, 16, 20, 24, 28]<tb>1101 DA[1, 5, 9, 13, 17, 21, 25, 29]<tb> DB[1, 5, 9, 13, 17, 21, 25, 29]<tb>1102 DA[2, 6, 10, 14, 18, 22, 26, 30]<tb> DB[2, 6, 10, 14, 18, 22, 26, 30]<tb>1103 DA[3, 7, 11, 15, 19, 23, 27, 31]<tb> DB[3, 7, 11, 15, 19, 23, 27, 31] The output signals provided by multiplexers 1100-1103 are routed on a pair of 32-bit buses. One of the buses carries the 32-bit configuration data value DA[31:0] and the other bus carries the 32-bit configuration data value DB[31:0]. Multiplexers 1100-1103 are controlled by configuration state machine 15. As described in more detail below, each of multiplexers 1100-1103 shares the same eight control lines. As a result, configuration state machine 15 controls the entire multiplexer array 71 using only eight control signals, contributing to the efficiency of multiplexer array 71. FIG. 20 is a circuit diagram of multiplexer 1100 in accordance with one embodiment of the present invention. The structure of multiplexers 1101-1103 is identical to the structure of multiplexer 1100. Multiplexers 1101-1103 are therefore not discussed in detail herein. Multiplexer 4100 includes sixteen 8-to-1 multiplexers 1211-1218 and 1311-1318. Multiplexers 1211-1218 and 1311-1318 are coupled to receive configuration data bits O[0, 4, 8, 12, 16, 20, 24, 28, 32] and a logic "0" value as illustrated. Each of multiplexers 1211-1218 and 1311-1318 has eight input terminals. The rightmost input terminal of each multiplexer is defined as the first input terminal of the multiplexer, and the leftmost input terminal of each multiplexer is defined as the eighth input terminal of the multiplexer. The intermediate input terminals are defined as consecutive input terminals between the rightmost and leftmost input terminals (e.g., the third input terminal from the right is the third input terminal.) Each of multiplexers 1211-1218 and 1311-1318 is controlled by the same eight control signals X[7:0] (not shown). These eight control signals X[7:0] are controlled by configuration state machine 15 to have eight different states. In each of the eight states, one and only one of the control signals X[7:0] is asserted. When the first control signal X[0] is asserted, each of multiplexers 1211-1218 and 1311-1318 passes the input signal applied to its first input terminal. When the eighth control signal X[71] is asserted, each of multiplexers 1211-1218 and 1311-1318 passes the input signal applied to its eighth input terminal. Table 8 provides a complete description of each of the eight states.<tb>TABLE 8<tb> Enabled Input Terminal of<tb>State X[7:0] Multiplexers 1211-1218 & 1311-1318<tb>1 0000 0001 1st Input Terminal (rightmost)<tb>2 0000 0010 2nd Input Terminal<tb>3 0000 0100 3rd Input Terminal<tb>4 0000 1000 4th Input Terminal<tb>5 0001 0000 5th Input Terminal<tb>6 0010 0000 6th Input Terminal<tb>7 0100 0000 7th Input Terminal<tb>8 1000 0000 8th Input Terminal (leftmost) FIG. 21 is a state table 1400 that illustrates the manner in which multiplexers 1100-1103 route the configuration data values O[35:0] and the logic "0" values in response to the control signals X[7:0]. FIG. 22 is a circuit diagram illustrating multiplexer 1100. Because multiplexers 1101-1103 are identical to multiplexer 1100, these multiplexers are not described in detail. The circuitry of multiplexers 1100-1103 is similar to the circuitry of multiplexers 800-803. 
Multiplexers 1100-1103 are therefore laid out in a manner similar to that described above in connection with FIGS. 12-14. As illustrated in FIG. 17, configuration data value DA[31:0] is provided directly to 64-to-32 multiplexer 73. Configuration data value DB[31:0] is loaded into save register 72 prior to being provided to 64-to-32 multiplexer 73. As described in more detail below, configuration data value DB[31:0] is stored in save register 72 for one cycle prior to being routed through multiplexer 73 as saved data value SDB[31:0]. FIG. 23 is a circuit diagram of 64-to-32 multiplexer 73. Multiplexer 73 includes 4-bit pass circuits 2301A-2308A and 2301B-2308B. Each of pass circuits 2301A-2308A is coupled to receive four bits of configuration data value DA[31:0]. Similarly, each of pass circuits 2301B-2308B is coupled to receive four bits of configuration data value SDB[31:0]. More specifically, pass circuits 2301A-2308A are coupled to receive configuration data bits DA[31:28], DA[27:24], DA[23:20], DA[19:16], DA[15:12], DA[11:81], DA[7:4], and DA[3:0], respectively. Pass circuits 2301B-2308B are coupled to receive configuration data bits SDB[31:28], SDB[27:24], SDB[23:20], SDB[19:16], SDB[15:12], SDB[11:8], SDB[7:4], and SDB[3:0], respectively. Pass circuits 2301A-2308A and 2301B-2308B are controlled by control signals Y[7:0] and the inverse of these control signals as provided by inverters 2317-2310, respectively. Control signals Y[7:0] are controlled to selectively route thirty-two of the sixty-four applied configuration data bits. Table 9 defines the manner in which the configuration data signals DA[31:0] and SDB[31:0] are routed through multiplexer 73 to create output configuration data value D[31:0] in response to control signals Y[7:0]. Configuration state machine 15 cycles the control signals in the sequence illustrated in Table 9.<tb> TABLE 9<tb> State Y[7:0] D[31:0]<tb> 0 0000 0000 DA[31:0]<tb> 1 0000 0001 DB[31:28], DA[27:0]<tb> 2 0000 0011 DB[31:24], DA[24:0]<tb> 3 0000 0111 DB[31:20], DA[19:0]<tb> 4 0000 1111 DB[31:16], DA[15:0]<tb> 5 0001 1111 DB[31:12], DA[11:0]<tb> 6 0011 1111 DB[31:8], DA[7:0]<tb> 7 0111 1111 DB[31:4], DA[3:0]<tb> 8 1111 1111 DB[31:0] The transfer of configuration data values from output multiplexer array 71 to 64-to-32 multiplexer 73 is now described. During an initial state (State O), a first 36-bit configuration data value A[35:0] is applied to output multiplexer array 71. During this initial state, control signals X[7:0] have a value of "0000 0001". As a result, configuration data bits A[35:4] are routed to multiplexer 73 (as configuration data bits DA[31:0]), and configuration data bits A[3:0] are routed to save register 72 (as configuration data bits DB[31:28]). Configuration data bits DB[27:0] have logic zero values. Also during State 0, configuration state machine 15 provides a control signal Y[7:0] having a value of "0000 0000" to 64-to-32 multiplexer 73, thereby causing multiplexer 73 to route configuration data bits A[35:4] to output register 74 (as configuration data bits D[31:0]. At the beginning of State 1, configuration state machine 15 asserts a clock signal that clocks output register 74, thereby loading configuration data bits A[35:4] into output register 74. The asserted clock signal also clocks save register 72, thereby loading configuration data bits A[3:0] and logic "0" values into save register 72. At this time, configuration data bits A[35:4] are then read from output register 74 to output data bus ODB having a width of 32 bits. 
In the described embodiment, the width of the output data bus ODB is selectable in the same manner as the width of the input data bus IDB. Thus, output data bus ODB can have a width of 32, 16 or 8 bits. Other widths are possible in other embodiments. Also during State 1, a second 36-bit configuration data value B[35:0] is applied to output multiplexer array 71. Configuration state machine 15 causes control signals X[7:0] to have a value of "0000 0010", thereby routing the configuration data bits B[35:8] to multiplexer 73 (as configuration data bits DA[27:0]), and configuration data bits B[7:0] to save register 72 (as configuration data bits DB[31:24 ]). At this time, configuration state machine 15 provides a control signal Y[7:0] having a value of "0000 0001" to 64-to-32 multiplexer 73, thereby causing multiplexer 73 to route configuration data bits A[3:0] to output register 74 (as configuration data bits D[31:28]) and to route configuration data bits B[35:8] (as configuration data bits D[27:0]) to output register 74. Note that configuration data bits A[3:0] were previously stored in save register 72 at the beginning of State 1. At the beginning of State 2, configuration state machine 15 asserts a clock signal that clocks output register 74, thereby loading configuration data bits A[3:0] and B[31:8] into output register 74. The asserted clock signal also clocks save register 72, thereby loading configuration data bits B[7:0] and logic "0" values into save register 72. Also during State 2, a third 36-bit configuration data value C[35:0] is applied to output multiplexer array 71. Configuration state machine 15 causes control signals X[7:0] to have a value of "0000 0100", thereby routing the configuration data bits C[35:12] to multiplexer 73 (as configuration data bits DA[23:0]), and configuration data bits C[11:0] to save register 72 (as configuration data bits DB[31:20]). At this time, configuration state machine 15 provides a control signal Y[7:0] having a value of "0000 0011", to 64-to-32 multiplexer 73, thereby causing multiplexer 73 to route configuration data bits B[7:0] to output register 74 (as configuration data bits D[31:24]) and to route configuration data bits C[35:12] (as configuration data bits D[23:0]) to output register 74. Processing proceeds in the above-described manner, with a new 36-bit configuration data value being routed by output multiplexer array 71 during each successive state. At the end of State 8, nine 32-bit configuration data values D[31:0] have been provided to output register 74. (See FIG. 21.) Advantageously, a large number of configuration data bits can be read back through circuitry 1700 in a relatively fast manner. Those having skill in the relevant arts of the invention will now perceive various modifications and additions which may be made as a result of the disclosure herein. Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents. |
A portable computing device synchronously offloads tasks from a first processing resource to an alternative processing resource. Offload requests are centralized and communicated to a dispatch controller. The request defines the alternative processing resource and the location of items in a common or shared memory related to a thread that is desired to be transferred or dispatched from the primary processing resource to the identified alternative processing resource. The dispatch controller, in response to the request, creates a task dispatch packet that provides the information required to switch the context of the thread that was previously executing on the primary processing resource to the alternative processing resource. The common or shared memory space is leveraged to provide desired performance. Results generated by the alternative processing resource are available in the shared memory space upon return to the primary processing resource. |
1.A computing device comprising:Main processing resource;An auxiliary processing resource configured to communicate with a distribution controller in a device execution environment, the distribution controller being configured to synchronously manage function calls from the primary processing resource;a shared memory space coupled to the primary processing resource and the secondary processing resource and accessible by the primary processing resource and the secondary processing resource, wherein the primary processing resource and the secondary processing resource are configured to A signal/wait interface is generated and responsive to the signal/wait interface.2.The computing device of claim 1, wherein the primary processing resource generates a request to specify the secondary processing resource in response to a task offload condition.3.The computing device of claim 2 wherein the primary processing resource suspends execution of the thread prior to generating the request.4.The computing device of claim 3 wherein said primary processing resource waits for a task completion signal from said distribution controller, and upon receipt of said task completion signal, said primary processing resource resumes said thread carried out.5.The computing device of claim 4, wherein the request directs the distribution controller to provide information that enables the auxiliary processing resource to execute the thread.6.The computing device of claim 4 wherein said task completion signal from said distribution controller is communicated to an operating system.7.The computing device of claim 1 further comprising:A global coordinator coupled to the primary processing resource and configured to receive the request and asynchronously generate a distribution command specifying the auxiliary processing resource in response to the task unloading condition.8.The computing device of claim 7, wherein the global coordinator executes a micro-scheduler capable of initiating the distribution command in response to the request.9.The computing device of claim 1 further comprising:A graphics processing unit specific controller coupled to the primary processing resource and configured to asynchronously receive a request in response to a task unload condition.10.The computing device of claim 9, wherein the graphics processing unit-specific controller executes a scheduler capable of initiating a distribution command in response to the request.11.The computing device of claim 1 further comprising:A digital signal processor configured with a real-time operating system to asynchronously receive requests responsive to task offload conditions.12.The computing device of claim 1 wherein the distribution controller is a hardware component.13.The computing device of claim 1 wherein the distribution controller is implemented in software.14.The computing device of claim 1 wherein the first set of one or more functions of the distribution controller is implemented with hardware elements and the remaining functionality of the distribution controller is implemented in software .15.A method for synchronous task distribution in a portable computing device, comprising:Configuring the portable computing device to have a primary processing resource, a secondary processing resource, and a shared memory space, wherein the shared memory space is accessible to the primary processing resource and the secondary processing resource;Detecting task unloading conditions;Suspending execution of a thread executing in the main processing 
resource;Generating a request from the portable computing device in response to the task offload condition;The request is transmitted to a distribution controller that identifies the auxiliary processing resource for execution of the thread.16.The method of claim 15 wherein said primary processing resource and said secondary processing resource are configured to generate a signal/wait interface and to respond to said signal/wait interface.17.The method of claim 15 wherein said primary processing resource waits for a task completion signal from said distribution controller, and upon receipt of said task completion signal, said primary processing resource resumes execution of said thread .18.The method of claim 15 wherein said main processing resource waits for a task completion signal from said distribution controller prior to signaling to said operating system that said thread is completed.19.The method of claim 15 wherein transmitting the request to the distribution controller comprises transmitting at least one of the primary processing resource and the shared memory space accessible by the auxiliary processing resource, wherein Information associated with the thread is currently stored in the at least one location.20.The method of claim 15 wherein transmitting the request to the distribution controller comprises using a graphics processing unit-specific controller coupled to the main processing resource And configured to asynchronously receive the request in response to a task offload condition.21.The method of claim 20 wherein said graphics processing unit specific controller executes a scheduler capable of initiating a dispatch command for said auxiliary processing resource.22.The method of claim 15, wherein the transmitting the request to the distribution controller comprises asynchronously receiving, by the digital signal processor configured with a real-time operating system, a response from the task unloading condition Said request for the primary processing resource.23.A computing device comprising:a first unit for processing a thread, the first unit including a mechanism for detecting a task unloading condition;Means for distributing the threads synchronously in response to the task unloading condition;A second unit for processing the thread in response to the means for distributing the thread synchronously.24.A computing device according to claim 23, wherein said first unit for processing said thread identifies said unit for distributing said thread synchronously for processing said second unit of said thread .25.The computing device of claim 24 wherein said first unit for processing said thread suspends execution of said thread prior to transmitting a request to said unit for synchronously distributing said thread.26.The computing device of claim 24 wherein said first unit for processing said thread waits for a task completion signal from said unit for synchronously distributing said thread prior to restoring execution of said thread .27.The computing device of claim 26, wherein the first unit for processing the thread forwards an indication to the operating system regarding receipt of the task completion signal.28.A non-transitory processor readable medium having processor instructions stored thereon that, when executed, direct the processor to perform functions including:Detecting task unloading conditions;Suspend execution of threads executing in the main processing resource;Generating a request in response to the task unloading condition;The request is transmitted to a 
distribution controller that identifies a secondary processing resource that is different from the primary processing resource for execution of the thread.29.The non-transitory processor readable medium of claim 28, wherein transmitting the request to the distribution controller comprises: specifying an application binary interface, the application binary interface directing where the auxiliary processing resource is located A thread-related entry in the shared memory space.30.The non-transitory processor readable medium of claim 29, wherein the application binary interface comprises a set of N registers, where N is an integer. |
System and method for synchronous task distribution in portable devicesBackground techniqueComputing devices are ubiquitous. Some computing devices are portable, such as smart phones, tablet devices, or laptop computers. In addition to the main functions of these devices, many devices include units that support peripheral functions. For example, a cellular phone can include: primary functions for implementing and supporting cellular telephone calls, as well as still cameras, cameras, global positioning system (GPS) navigation, web browsing, sending and receiving email, sending and receiving text messages, push-to-talk (push) -to-talk) Peripheral functions such as capabilities. As the capabilities of these portable computing devices increase, the amount of computing or processing power required, as well as the data storage capacity typically used to support such functions, also increases.Some conventional designs for handheld portable computing devices include multiple processors and/or processors with multiple cores to support various primary and peripheral functions desired for a particular computing device. These designs typically integrate analog, digital, and RF circuits or functional units on a single substrate and are commonly referred to as system on a chip (SoC). Consumers want improved battery life, size and weight for their laptops, tablets and smartphones. The ability to transfer processing work to components within the SoC is considered for both power management and user experience. The ability to remove power from certain resources can provide significant power savings when user requirements do not require the entire processing resources available on the SoC. The ability to transfer certain tasks to more efficient processing resources in processing the requested tasks can both save power and provide performance gains.However, the cost of managing the transfer of a task from one processing resource to another may prevent the task from being completely unloaded because there may not be enough work to compensate for the delay associated with managing the transfer. In addition, this transfer can be performed only when the time is allowed to manage the transfer, the task is completed, and the result is returned to the requesting party. That is to say, this transfer in the traditional computing model is managed asynchronously. While user mode queuing provides a potential solution for significantly reducing the latency associated with managing task transfers from one processing resource to another, the proposed model relies on the premise that all of these transfers are asynchronous. of.Therefore, there is a need for an improved mechanism for managing task transfer between processing resources that can be applied to situations that require solutions other than asynchronous solutions.Summary of the inventionExample embodiments of systems and methods are disclosed that configure a portable computing device to synchronously offload processing tasks from a first processing resource to an alternate processing resource. The task offload request centralizes resources from the host or main processing (which in one example arrangement is the central processing unit of the SoC). The disclosed systems and methods enable devices such as graphics processing units or digital signal processors to remain autonomous. 
Therefore, these components are allowed to remain separate from the central processing unit of the SoC.The host or primary processing resource generates a task offload request or request that is passed to the distribution controller. The request specifies an alternate processing resource and a location in the shared or shared memory that is related to a thread that is expected to be transferred or distributed from the primary processing resource to the identified alternate processing resource. Entries or thread related entries stored in shared memory may include code, data, or both code and data. The distribution controller creates a task distribution packet in response to the request, wherein the task distribution packet provides information needed to switch the context of the thread previously executed on the primary processing resource to the alternate processing resource. Use shared or shared memory space to provide the desired performance. The host or main processing resource waits for an indication from the distribution controller about the completion of the task before the execution of the recovery thread. When the main processing resource is returned, the result generated when the thread is executed in the alternate processing resource can be easily obtained in the shared or shared memory space. Alternatively, when no further instructions are to be processed, the primary processing resource transmits a notification to the operating system regarding the completion of the thread.An example embodiment of a computing device includes a primary processing resource, an auxiliary or an alternate processing resource configured to communicate with a distribution controller in a device execution environment. The distribution controller synchronously manages the function calls received from the primary processing resource. The shared or shared device space can be accessed by both the primary processing resource and the secondary processing resource. The primary processing resource and the secondary processing resource are configured to generate a corresponding signal and respond to the corresponding signal based on the signal/wait interface.An example embodiment of a method for synchronous task distribution in a portable computing device includes the steps of configuring the portable computing device to have a primary processing resource, a secondary processing resource, and a shared memory space, wherein the shared memory space is available to the primary processing resource and Auxiliary processing resource access; detecting a task unloading condition; suspending execution of a thread executing in the main processing resource; generating a request from the portable computing device in response to the task unloading condition; and transmitting the request to the distribution controller, the request identifier Auxiliary processing resource for the execution of this thread.Another example embodiment of a computing device includes a plurality of processing resources or units for processing threads, the first unit for processing threads including a mechanism for detecting task offload conditions on the portable computing device. The mechanism or unit for distributing the thread synchronously is in response to a task unload condition. 
A second or alternative processing unit for executing the thread is responsive to means for distributing the thread synchronously.Another example embodiment is a non-transitory processor readable medium having processor instructions and data stored therein, the processor instructions and data instructing a processor to perform various functions including: detecting a task offload condition; Suspending execution of a thread executing in the main processing resource; generating a request in response to the task unloading condition; and transmitting the request to the distribution controller, the request identifying a secondary processing different from the main processing resource for execution of the thread Resources.DRAWINGSIn the figures, like reference characters refer to the For reference numerals having an alphabetic character number such as "102A" or "102B", the alphabetic character number can distinguish between two similar components or elements present in the same figure. When it is intended that the reference numerals refer to all the parts of the figures having the same reference numerals, the letter character number of the reference numerals may be omitted.FIG. 1 is a schematic diagram showing an example embodiment of a computing device configured as a SoC.2 is a schematic diagram of an example embodiment of a subsystem for synchronously managing the distribution of tasks from primary processing resources to secondary processing resources in the SoC of FIG.3 is a schematic diagram showing an example embodiment of a computing environment that manages the distribution of tasks in the SoC of FIG.4 is a schematic diagram showing another example embodiment of a computing environment that manages the distribution of tasks in the SoC of FIG.5 is a schematic diagram showing a third example embodiment of a computing environment that manages the distribution of tasks in the SoC of FIG.6 is a schematic diagram showing an example embodiment of a task of user mode scheduling and the computing environment presented in FIG.Figure 7 is a schematic diagram showing the tasks of the coordinator management and the computing environment presented in Figure 4.FIG. 8 is a diagram showing the use of a register set to specify a distribution packet.9 is a flow diagram of an example embodiment of a method for synchronization task distribution in a SoC.Detailed waysThe word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily considered to be preferred or advantageous over other aspects.In this specification, the term "application" may also include files having executable content such as object code, scripts, bytecodes, markup language files, and patches. Moreover, an "application" as referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files or data values that need to be accessed.The term "content" may also include files having executable content such as object code, scripts, bytecodes, markup language files, and patches. 
Moreover, "content" as referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files or data values that need to be accessed.As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, not hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and a computing device can be a component. One or more components can reside within a process and/or execution thread and the components can be centralized on one computer and/or distributed between two or more computers. Moreover, these components can be executed by various computer readable media having various data structures stored thereon. The components may be in a local and/or remote process, for example based on signals having one or more data packets (eg, data from a component interacting with another component in the local system, the distributed system, and/or The communication is performed by signaling the data of a component that interacts with other systems over a network such as the Internet.In this specification, the term "portable computing device" ("PCD") is used to describe any device that operates on a limited capacity rechargeable power source (eg, a battery and/or capacitor). Although PCDs with rechargeable power supplies have been in use for decades, technological advances in rechargeable batteries with the advent of third-generation ("3G") and fourth-generation ("4G") wireless technologies have enabled Numerous PCDs with multiple capabilities. Thus, the PCD can be a cellular telephone, a satellite telephone, a pager, a PDA, a smart phone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop or tablet computer with a wireless connection, and the like.Although described with particular reference to operations within a PCD, the described systems and methods are applicable to any computing device having multiple processing resources, where autonomously and synchronously offloading tasks from one processing system to an alternate processing system may be useful. In other words, the computing systems and methods disclosed herein are applicable to desktop computers, server computers, or any electronic device having multiple processing resources.Reference will now be made to the example shown. An example embodiment of a non-limiting aspect of a portable computing device (PCD) supported by a SoC is shown and is generally labeled 100. The PCD 100 includes a system on chip 120, and the system on chip 120 includes a multi-core CPU 210. The multi-core CPU 210 includes a 0th core 215, a 1st or first core 216, and an Nth core 217. Each of the N cores is independent of each other and is configured to process instructions such as add, move data, branches, and the like. The multi-core CPU 210 executes software that manages the scheduling as indicated by the scheduler 214. Alternatively, multi-core CPU 210 or another portion of SoC 120 is arranged with a hardware (ie, circuitry) unit or set of hardware elements configured to implement a task scheduler.As shown in FIG. 
1, display controller 128 and touch screen controller 130 are coupled to multi-core CPU 210. In turn, display/touch screen 132 external to system on chip 120 is coupled to display controller 128 and touch screen controller 130. A video encoder 134 (eg, a progressive phase inversion (PAL) encoder, a sequential transfer color and storage (SECAM) encoder, or a National Television System Committee (NTSC) encoder) is coupled to the multi-core CPU 210. In addition, video amplifier 136 is coupled to video encoder 134 and display/touch screen 132. In addition, video port 138 is coupled to video amplifier 136. As depicted in FIG. 1, a universal serial bus (USB) controller 140 is coupled to the multi-core CPU 210. USB storage device 142 is coupled to USB controller 140. System memory 230 and Subscriber Identity Module (SIM) card interface 146 may also be coupled to multi-core CPU 210 by a connection 219 between multi-core CPU 210 and system memory 230, which is used to transfer data between these elements in a system on a chip. Two or more physical channels or pathways are formed. Further, as shown in FIG. 1, digital camera 148 can be coupled to multi-core CPU 210. In an exemplary aspect, digital camera 148 is a charge coupled device (CCD) camera or a complementary metal oxide semiconductor (CMOS) camera.As shown in FIG. 1, stereo audio codec 150 can be coupled to multi-core CPU 210. Additionally, audio amplifier 152 can be coupled to stereo audio codec 150. In an exemplary aspect, first stereo speaker 154 and second stereo speaker 156 are coupled to audio amplifier 152. FIG. 1 shows that microphone amplifier 158 can also be coupled to stereo audio codec 150. Additionally, the microphone 116 can be coupled to a microphone amplifier 158. In a particular aspect, a frequency modulation (FM) radio tuner 162 can be coupled to the stereo audio codec 150. In addition, FM antenna 164 is coupled to FM radio tuner 162. Additionally, stereo port 166 can be coupled to stereo audio codec 150.FIG. 1 also indicates that a radio frequency (RF) transceiver 168 is coupled to the multi-core CPU 210. The RF switch 170 can be coupled to an RF transceiver 168 and an RF antenna 172. As shown in FIG. 1, keyboard 174 is coupled to multi-core CPU 210. Additionally, a mono headset 176 with a microphone can be coupled to the multi-core CPU 210. Additionally, the vibration device 178 can be coupled to the multi-core CPU 210. FIG. 1 also shows that power supply 180 can be coupled to system on chip 120 via USB controller 140. In a particular aspect, power source 180 is a direct current (DC) power source that provides power to various components of PCD 100 that require power. Moreover, in a particular aspect, the power source 180 is a rechargeable DC battery or a DC power source, wherein the DC power source is derived from an alternating current (AC) to DC transformer connected to an AC power source (not shown).Figure 1 also indicates that PCD 100 can also include a network card 188 that can be used to access a data network, such as a local area network, a personal area network, any other network. Network card 188 may be a Bluetooth network card, a WiFi network card, a personal area network (PAN) card, or any other network card known in the art. Additionally, network card 188 can be incorporated into an integrated circuit. 
That is, the network card 188 can be a complete solution in the chip and may not be a separate network card 188.The SoC 120 is also arranged with a random access memory (RAM) 216, a distribution controller (DC) 212, a digital signal processor (DSP) 220, and a graphics processing unit (GPU) 225. In the illustrated embodiment, each of these elements is represented as a single hardware element or unit. However, it should be understood that, like multi-core CPU 210, RAM 216, DC 212, DSP 220, and GPU 225 may include multiple instances or copies of circuit components, such as arithmetic logic units or other computing elements designed for a particular task (eg, , circuits for pixel shading, geometric shading, vector processing, etc., as can be expected. Moreover, it should be understood that a first set of one or more functions associated with any of these elements (including DC 212) can be implemented in hardware or a combination of hardware and firmware, with one or The second or remaining set of functions may be implemented in software, where the software is executed by a suitable processor to execute instructions stored in the software.As depicted in FIG. 1, display/touch screen 132, video port 138, USB port 142, camera 148, first stereo speaker 154, second stereo speaker 156, microphone 116, FM antenna 164, stereo port 166, RF switch 170, The RF antenna 172, keyboard 174, mono headset 176, vibrator 178, and power source 180 are external to the system on chip 120.The RF transceiver 168 (which may include one or more modems) supports one or more of the following: Global System for Mobile Communications ("GSM"), Code Division Multiple Access ("CDMA"), wideband code division Address ("W-CDMA"), Time Division Synchronous Code Division Multiple Access ("TDSCDMA"), Long Term Evolution ("LTE"), and variants of LTE (such as, but not limited to, FDB/LTE and PDD/LTE wireless protocols).In the illustrated embodiment, a single instance of multi-core CPU 210 is depicted. However, it should be understood that any number of similarly configured multi-core CPUs can be included to support the various peripherals and functions associated with PCD 100. Alternatively, a single processor or multiple processors, each with a single arithmetic logic unit or core, may be deployed in the PCD 100 or other computing device to support various peripherals associated with the PCD 100 and Features as expected.The illustrated embodiment shows system memory 230 that is disposed within fully integrated system on chip 120. However, it should be understood that two or more vendor-provided memory modules having respective data storage capacities of M bytes may be disposed external to the system on chip 120. When disposed external to system on chip 120, respective memory modules supporting system memory 230 are coupled to the modified multi-channel memory bus (not shown), which includes suitable electrical connections for transferring data and power to the memory modules. CPU 210.In a particular aspect, one or more of the method steps described herein may be via a combination of hardware elements supported by data and processor instructions as stored in system memory 230 (eg, multi-core CPU 210, DSP 220) GPU 225) is implemented in which the data and processor instructions are retrieved by one or more cores or other hardware components of multi-core CPU 210 and cached in RAM 216, internal cache (not shown) as desired. Or in various registers (not shown) within the multi-core CPU 210. 
Those skilled in the art of designing CPUs, DSPs, and GPUs are familiar with a variety of techniques for managing and manipulating data and processing instructions to support various applications executing on a PCD.2 is a schematic diagram showing an example embodiment of a subsystem for synchronously managing the distribution of tasks from primary processing resources to secondary processing resources in the SoC 120 introduced in FIG. As shown, subsystem 200 includes volatile and non-volatile memory elements (e.g., RAM 216 and system memory 230) coupled to main processing resource 202 and auxiliary processing resources via bus 228. The main processing resource or host 202 includes the multi-core CPU 210 introduced in FIG. 1 and executes an operating system such as O/S 211. As is known, the O/S 211 acts as an intermediary between the application or program and the hardware in the PCD 100. Although the application code is executed by hardware such as the CPU 210, the application code will frequently interact with or be interrupted by the O/S 211 because the O/S 211 manages PCD resources and is other applications on the PCD 100 or The program provides a public service.In the default mode of operation, the main processing resource 202 is used to execute an application or program on the PCD 100. In an alternate mode of operation, the main processing resource 202 executes a run-time interface that dynamically determines one of the primary processing resource 202 and the available auxiliary processing resource 204 for work with a task or thread form. An optimal solution for partitioning between multiple auxiliary processing resources. Alternatively, each program or application can be compiled in such a way as to direct when the main processing resource 202 forwards certain threads or tasks to the auxiliary processing resource 204. However, if so arranged, the primary processing resource or host 202 responds to the detected unload condition by generating and forwarding a request to offload the task to the auxiliary processing resource 204, which in the illustrated example may be the DSP 220. Or one or more instances or copies of GPU 225. In other arrangements (not shown), portions of some threads may be processed using a Field Programmable Gate Array (FPGA), an Arithmetic Logic Unit (ALU), or other device. As indicated in FIG. 2, the offload request is communicated indirectly to the auxiliary processing resource 204. The first path or branch uses the scheduler 214 and the second branch or path uses the distribution controller 212 to communicate with the auxiliary processing resource 204.The asynchronous distribution request is generated by the primary processing resource 202 and passed to the scheduler 214. The scheduler 214 includes a coordinator component 250 for managing priorities and one or more additional inputs to determine when to forward requests to the distribution controller 212. The synchronized distribution request is transmitted to the distribution controller 212. Whether receiving the request indirectly or asynchronously from the scheduler 214 or receiving the request directly and synchronously from the main processing resource 202, the distribution controller 212 generates a distribution packet that provides assistance for the distribution of the packet information. Processing resource 204 performs all the information needed by the unloaded thread. 
The shared virtual memory space 240 within the RAM 216 is utilized to provide the context of the thread to the auxiliary processing resource 204, and once the thread is complete, the shared virtual memory space 240 is available for continued processing by the main processing resource 202.Responsive to one or more input signals received by primary processing resource 202 and/or one or more internally identified conditions (such as those generated within one or more applications or operating systems executing within primary processing resource 202) ), generate task uninstall conditions. When the task unload condition is recognized by the main processing resource 202, the task unload condition directs the main processing resource 202 to suspend execution threads desiring to branch to the auxiliary processing resource 204 and generate a low latency (eg, approximately 1 ns or less) signal/wait indicator . The example signaling construct does not need to include data and can include a set of instructions, such as sigresourcealloc_signal(), signal(sigresource), and wait(sigresource). The example signaling constructs are exposed to and/or extended to processing resources in PCD 100 and other elements used to support processing resources.The main processing resource 202 preferably suspends the execution thread before generating and transmitting a request to offload the thread to the auxiliary processing resource 204. The suspend prevents the thread from restarting on the main processing resource 202 until the task completion signal is returned to the main processing resource 202. The request issued by the main processing resource 202 is similar to the function call and includes information identifying the particular instance and type of the auxiliary processing resource 204 to be used to execute the unloading portion of the thread.The distribution of the offload threads is indirect because the main processing resource 202 is coupled to the auxiliary processing resource 204 via the distribution controller 212. Although shown as a single element, it should be understood that the distribution controller instance is for each thread that is unloaded or transferred from the primary processing resource 202 to the secondary processing resource 204. For example, if multi-core CPU 210 includes four processing cores and two processing cores in the processing core have separately identified task offload conditions, then at least two instances of distribution controller 212 will be used to generate distribution packets and to, for example, a DSP. Individually identified auxiliary processing resources, such as 220 and GPU 225, forward individual distribution packets. Distribution controller 212 can be implemented as a hardware component or as software executing on host processing resource 202 or host. In addition, distribution controller 212 may utilize some of the functions implemented in hardware elements such as adders, registers, and other devices, and other functions implemented in software by processing resources coupled to registers or other storage elements.Distribution groups provide control without dependencies. That is, everything that is submitted to and forwarded by the distribution control controller 212 is ready for execution. 
The distribution controller 212 generates a distribution packet that not only identifies the auxiliary processing resource 204, but also provides a single work item space (eg, NDRange) to the identified auxiliary processing resource, which may be one-dimensional, two-dimensional, or three-dimensional. For example, if it is desired to apply a filter to each pixel in a 960x640 image, the thread will identify 960x640 work items, each work item applying a filter to the pixels in the image, ie the work item (x, y) to the pixel (x , y) Apply the filter. Upon completion, the auxiliary processing resource 204 provides a task/thread completion signal to the distribution controller 212 that issued the distribution packet. In turn, distribution controller 212 forwards the same indication to primary processing resource 202. The main processing resource 202 can transfer the same content to the O/S 211 executing on one or more of the remaining cores of the multi-core CPU 210. As arranged, the distribution controller 212 ensures that the distribution request is issued to the auxiliary processing resource 204 available to the execution thread and is served by the auxiliary processing resource 204. In addition, distribution controller 212 can maintain a relationship between the set of requests and the distribution packets, which further dictates a one-to-one relationship between the particular thread and processing resources on PCD 100.As further shown in FIG. 2, a global coordinator or scheduler 214 is provided to receive the offload request and asynchronously generate a dispatch command to the distribution controller 212. The global coordinator or scheduler 214 provides a unit or mechanism for scheduling tasks that are offloaded by the main processing resource 202. The global coordinator or scheduler is adapted to direct the completion of tasks associated with applications executing on the PCD 100. The global coordinator 214 or scheduler is arranged with one or more instances or copies of the coordinator 250, wherein the coordinator 250 generates and issues a distribution command in response to one or more requests received from the main processing resource 202. Each coordinator 250 can execute or include a micro-scheduler configured to initiate a distribution command to the distribution controller 212.In an alternative or alternative arrangement, the DSP 220 is arranged with a real time operating system (RTOS) 221 to process the distribution packets in response to task offload conditions. The RTOS 221 services the received real-time requests with minimal buffer latency. Scheduler 222 provides a predictable execution mode for embedded systems within PCD 100. Scheduler 222 also provides an alternate unit or mechanism for scheduling tasks that are offloaded by primary processing resource 202. The RTOS 221 responds within a strictly defined time or deadline. In this alternative arrangement, DSP 220 issues a request for an offload task to other processing resources, such as GPU 225, by means of an instance or copy of distribution controller 212.As described, the first unit or mechanism for processing threads includes a primary processing resource or host 202. Auxiliary or alternative units or mechanisms for processing threads include one or more of DSP 220, GPU 225, ALU (not shown), or other circuitry or processor. A unit or mechanism for synchronously distributing threads previously executed within the main processing resource or host 202 includes a distribution controller 212. 
The first unit for processing the thread may be arranged to transmit the offload request directly to the distribution controller 212 or indirectly through the scheduler 214. In this regard, the scheduler 214 and one or more coordinators 250 functioning under the direction of the scheduler 214 provide means for asynchronously receiving requests from the first unit for processing threads. Scheduler 214 may also be considered a global coordinator or unit or mechanism for scheduling execution of tasks in a computer environment. As further described, GPU 225 provides a unit or mechanism for processing graphics commands.3 is a schematic diagram showing an example embodiment of a computing environment 300 that manages the distribution of tasks or threads from primary processing resources to secondary processing resources in the SoC 120 of FIG. Computing environment 300 includes a primary processing resource or host 202, an alternate processing environment 324, and a shared virtual memory 240. The main processing resource 202 includes a plurality of CPUs, computing elements, or cores. In the illustrated embodiment, main processing resource 202 includes CPU (0) 320, CPU (1) 321, CPU (2) 322, and CPU (3) 323. However, it should be understood that fewer computing elements, more computing elements, or a mixture of various computing elements may be included in the host or main processing resource 202 within the PCD 100.Processing environment 324 includes a set of distribution controllers (ie, DCs) having a one-to-one relationship with primary processing resources (ie, CPU (0) 320, CPU (1) 321, CPU (2) 322, and CPU (3) 323). (0) 330, DC (1) 331, DC (2) 332, and DC (3) 333). The workgroup scheduler 325 receives one or more distribution packets from various distribution controllers and forwards the information provided therein to the identified execution units of the auxiliary processing resources 204. In the illustrated embodiment, the auxiliary processing resource 204 is configured with an execution unit (0) 340 to an execution unit (N) 348, where N is an integer. Note that execution units 340-348 may be similarly arranged and associated with a single DSP or GPU, or the execution units may be different types of DSP-specific, several DSPs, GPUs, or a number of GPUs and / or the execution of a combination of these components. The integer N indicates that any desired number of execution units may be included in the auxiliary or alternate processing resource 204.In the illustrated embodiment, execution units 340 - 348 of auxiliary processing resource 204 are sub-elements of DSP 220 and GPU 225. However, the auxiliary processing resource 204 is not limited to this. The execution unit may be a sub-element such as an application specific integrated circuit (ASIC) or even other device such as a separate arithmetic logic unit distributed across the SoC 120.As further shown in FIG. 3, the overall processing flow through computing environment 300 is indicated by an arrow having a sequence identifier enclosed within a circle. For example, CPU (2) 322 generates a request (depicted by arrow 1) in response to the detected unload condition, which is transmitted to distribution controller DC (2) 332. In response, DC (2) 332 generates a distribution packet (depicted by arrow 2) that is forwarded to workgroup scheduler 325. In turn, the workgroup scheduler 325 forwards the information included in the distribution packet to the execution unit identified in the distribution packet, as indicated by arrow 3. 
As also identified in the distribution packet, execution unit (0) 340 is directed to use the information stored in the specified range 245 of shared virtual memory 240. This specified range 245 of virtual memory 240 includes the context of the thread being unloaded or distributed. Upon completion of the specified work (as also specified in the distribution packet), execution unit (0) 340 leaves a modified version of the information in the specified range 245 of shared virtual memory 240 for use by primary processing resource 202. Additional processing. In addition, execution unit (0) 340 sends an indication to work scheduler 325 that the thread or task has completed, as indicated by arrow 4. The workgroup scheduler 325 records the task/thread completion and sends the same indication to the distribution controller DC(2) 322 as indicated by arrow 5. In turn, as indicated by arrow 6, DC(2) 322 forwards an indication to the CPU (2) 322 that the task/thread has completed. The master or host processing resource 202 waits for a task completion signal from the distribution controller and, upon receiving the task completion signal, resumes execution of the suspended thread.6 is a schematic diagram showing an example embodiment of a user mode scheduling task and the computing environment 300 introduced in FIG. As illustrated in computing environment 300' of Figure 6, request 375 (which is used to offload threads transferred from CPU (2) 322 to DC (2) 332) is presented as a function call that includes the specified application binary interface. (ABI) and arguments passed to the specified distribution controller (ie, DC(2) 332). In the illustrated user mode scheduled task, request 375 (which is depicted by arrow 1 in FIG. 3) is replaced by arrow 1 and arrow 2. The subsequent processing sequence is the same as in FIG. 3 by processing environment 324, where DC(2) 332 interacts with workgroup scheduler 325 and workgroup scheduler 325 also directs execution unit (0) 340. Similarly, task completion is signaled in the same manner as in Figure 3, except that the task or thread completion indication from DC(2) 332 is depicted by arrow 7 and arrow 8, where arrow 7 and arrow 8 together indicate The task completion signal from DC(2) 332 terminates the "While Logic Part" of the function call represented in request 375, which notifies CPU (2) 322 that the unloaded task/thread has completed. As briefly described, CPU (2) 322 may continue to execute the thread using information in the specified range 245 of shared virtual memory 240 as modified by execution unit (0) 340.4 is a schematic diagram showing another example embodiment of a computing environment 400 that manages the distribution of tasks or threads in the SoC 120 of FIG. Computing environment 400 includes a primary processing resource or host 202, an alternate processing environment 420, and a shared virtual memory 240. The main processing resource 202 includes a plurality of CPUs, computing elements, or cores. In the illustrated embodiment, main processing resource 202 includes CPU (0) 320, CPU (1) 321, CPU (2) 322, and CPU (3) 323. However, it should be understood that fewer computing elements, more computing elements, or a mixture of various computing elements may be included in the host or main processing resource 202 within the PCD 100.Processing environment 420 includes a set of distribution controllers (ie, DC(0) 430, DC(1) 431, DC(2) 432, DC(3) 433, and DC(4) 434). 
Therefore, the CPU of the main processing resource 202 no longer has a one-to-one relationship with the distribution controllers (ie, DC(0) 430, DC(1) 431, DC(2) 432, and DC(4) 433). The workgroup scheduler 325 receives one or more distribution packets from various distribution controllers and forwards the information provided therein to the identified execution units of the auxiliary processing resources 204. In the illustrated embodiment, the auxiliary processing resource 204 is configured with an execution unit (0) 340 to an execution unit (N) 348, where N is an integer. Note that execution units 340-348 may be similarly arranged and associated with a single DSP or GPU, or the execution units may be different types of DSP-specific, several DSPs, GPUs, or a number of GPUs and / or the execution of a combination of these components. The integer N indicates that any desired number of execution units may be included in the auxiliary or alternate processing resource 204.In computing environment 400 shown in FIG. 4, scheduler 410 asynchronously receives an offload request from primary processing resource 202. Scheduler 410 includes multiple instances or replicas of a coordinator such as coordinator (0) 412 and coordinator (M) 418. Coordinators 412-418 (which may be implemented in hardware and/or software) to one of the distribution controllers (ie, DC(0) 430, DC(1) 431, DC(2) 432, and DC(4) 433) Or multiple instances or replicas synchronously unload thread requests. Therefore, in this arrangement, the distribution controller has a one-to-one relationship with the coordinator.As further shown in FIG. 4, the overall processing flow through computing environment 300 is indicated by an arrow having a sequence identifier enclosed within a circle. For example, CPU (1) 321 generates a request (depicted by arrow 1) in response to the detected unload condition, which is transmitted to scheduler 410. The scheduler 410 forwards the offload request to the coordinator (M) 418 in response to the current condition on the PCD 100 and one or more execution algorithms, and the coordinator (M) 418 instead (as shown by arrow 2) distributes The controller DC (4) 434 transmits the unload request. In response, DC (4) 434 generates an unloading packet (depicted by arrow 3) that is forwarded to workgroup scheduler 325. In turn, the workgroup scheduler 325 forwards the information included in the distribution packet to the execution unit identified in the distribution packet, as indicated by arrow 4. As identified in the distribution packet, execution unit (N) 348 is directed to use the information stored in the specified range 445 of shared virtual memory 240. This specified range 445 of virtual memory 240 includes the context of the thread being unloaded or distributed. Upon completion of the specified work (as also specified in the distribution packet), execution unit (M) 348 leaves a modified version of the information in the specified range 445 of shared virtual memory 240 for use by primary processing resource 202. Additional processing. In addition, execution unit (M) 348 sends an indication to work scheduler 325 that the thread or task has completed, as indicated by arrow 5. The workgroup scheduler 325 records the task/thread completion and sends the same indication to the distribution controller DC(4) 434 as indicated by arrow 6. 
In turn, as indicated by arrow 7, DC(4) 434 forwards an indication of thread/task completion to CPU (1) 321 .Coordinators 412-418 are global to the processing environment and can communicate with distribution controllers 430-434 that offload tasks or threads from primary processing resources to second or alternate processing resources as may be desired. Coordinators 412-418 may be directly executed by a micro-scheduler such as scheduler 410 and may be exposed to developers and programmers via one or more domain-specific languages. The coordinators 412-418 provide the ability to build and manage two levels or layers of scheduling within the PCD 100 when deployed with the described CPUs 320-323.7 is a schematic diagram showing the tasks of a coordinator implementation of user mode scheduling and the computing environment 400 introduced in FIG. In addition to the elements shown in FIG. 4, computing environment 400', and more specifically, alternative processing environment 720, includes RTOS 349, where RTOS 349 communicates with scheduler 410 via connection 405. Connection 405 is a two-way communication path that enables RTOS 349 and scheduler 410 to controllably execute offloaded tasks or threads. As illustrated by computing environment 400' in Figure 7, coordinator request 475 (which is used to offload threads transferred from CPU(1) 321 to DC(4) 434) is presented as a function call that includes the specified application The binary interface (ABI) and the arguments passed to the specified distribution controller (ie, DC(4) 434). In the illustrated coordinator-implemented task of the user mode schedule, the coordinator request 475 (which is depicted by arrow 2 in FIG. 3) is replaced by arrow 2 and arrow 3. The subsequent processing sequence is the same as that shown in FIG. 4, where DC(4) 434 interacts with workgroup scheduler 325 and workgroup scheduler 325 also directs execution unit (N) 348. Similarly, tasks or thread completions are signaled in the same manner as in Figure 4, except that the task or thread completion indication from DC(4) 434 is depicted by arrow 9 and arrow 10, where arrow 9 and arrow 10 Together, the task completion signal from DC(4) 434 is terminated to terminate the "While Logic Part" of the function call represented in coordinator request 475, which notifies CPU (1) 321 that the unloaded task/thread has completed. Thereafter, CPU (1) 321 may continue to execute threads and/or signal to O/S using information in the specified range 245 of shared virtual memory 240 as modified by execution unit (M) 348. The thread is complete.FIG. 5 is a schematic diagram showing a third example embodiment of a computing environment 500 that manages the distribution of tasks or threads in the SoC 120 of FIG. In this arrangement, the processing environment 520 is configured with a controller 534 specific to the graphics processing unit in place of the DC(X) instance. This arrangement allows the workgroup scheduler 325 to process graphics commands independently or simultaneously using non-graphical type threads. Moreover, this arrangement allows the workgroup scheduler 325 to prioritize one type of thread (eg, a graphics thread) than other threads. 
Note that although a graphics processing unit specific controller 534 is shown, the processing environment 520 is not limited thereto and may include any desired number of graphics processing unit specific controllers.As in other illustrated embodiments, each instance or copy of the graphics processing unit-specific controller 534 can be used to receive a request from the main processing resource 202 for an offloading task or thread. The request is signaled or signaled by the primary processing resource 202 in response to one or more signals or conditions. The graphics processing unit-specific controller 534 executes a scheduler (not shown) capable of initiating a distribution command responsive to the request.As further indicated in FIG. 5, the overall processing flow through computing environment 500 is indicated by an arrow having a sequence identifier enclosed within a circle. For example, CPU (3) 323 generates a request (depicted by arrow 1) in response to the detected unload condition, which is transmitted to controller 534, which is specific to the graphics processing unit. In response, the graphics processing unit-specific controller 534 generates a distribution packet (depicted by arrow 2) that is forwarded to the workgroup scheduler 325. In turn, the workgroup scheduler 325 forwards the information included in the distribution packet to the execution unit identified in the distribution packet, as indicated by arrow 3. As also identified in the distribution packet, execution unit (0) 540 is directed to use the information stored in the specified range 545 of shared virtual memory 240. This specified range 545 of virtual memory 240 includes the context of the thread being unloaded or distributed. Upon completion of the specified work (as also specified in the distribution packet), execution unit (0) 540 leaves a modified version of the information in the specified range 545 of shared virtual memory 240 for use by primary processing resource 202. Additional processing performed. In addition, execution unit (0) 540 sends an indication to work scheduler 325 that the thread or task has completed, as indicated by arrow 4. The workgroup scheduler 325 records the task/thread completion and sends the same indication to the graphics processing unit specific controller 534 as indicated by arrow 5. In turn, as indicated by arrow 6, the graphics processing unit-specific controller 534 forwards an indication to the CPU (323) 323 that the task/thread has completed. As described, the graphics processing unit-specific controller provides a unit or mechanism for scheduling graphics command processing by GPU 225.FIG. 8 is a diagram showing the use of a register set to specify a distribution packet. The distribution packet specifies an Application Binary Interface (ABI) 800 for communicating with the auxiliary processing resources. As indicated in Figure 8, the ABI 800 is a collection of registers arranged in a particular manner. For example, an ABI can include a collection of registers having a desired number. That is, the ABI can be specified by an integer number of registers for storing information used by the auxiliary processing resources. For example, register 801 includes a kernel address. Register 802 includes 2 bits for identifying the dimensions of the workspace, a plurality of reserved bits, and a size of the x-dimensional of the workspace. Register 803 includes bits that identify the size of the y-dimensional and z-dimensional dimensions of the workspace. 
Register 804 identifies the workgroup size for the x and y dimensions. Register 805 identifies the working size for the z-dimension and the working component segment size. Register 806 includes the completion value address, while register 807 and register 808 specify the individual arguments that can be passed to the auxiliary processing resource. It is possible that alternative registers can be used to optimize the independent variable transmission and various encoding techniques and/or processing rules can be implemented to block calls to the distribution controller 212.9 is a flow diagram of an example embodiment of a method 900 for synchronization task distribution in SoC 120. As shown, the method 900 begins at block 902, where the portable computing device 100 is configured with a primary processing resource and a secondary processing resource, the primary processing resource and the secondary processing resource shared memory space 240. In block 904, the main processing resource 202 or other detector or sensor disposed on the PCD 100 detects an unload condition. In response to the unload condition, the thread executing in the main processing resource 202 is suspended, as indicated in block 906. The main processing resource 202 generates a request in response to the unload condition and/or acknowledges the hang of the execution thread, as indicated in block 908. As further shown in block 910, the request is communicated to the distribution controller 212. The step of transmitting the request to the distribution controller 212 includes transmitting at least one location in the shared memory space (ie, the SVM 240) accessible by the primary processing resource 202 and the secondary processing resource 204, wherein the information associated with the thread is being stored in this. As described, the request identification is applicable to the auxiliary processing resource that continues to execute the previously suspended thread. As also described, the shared memory space is utilized such that auxiliary or alternate processing resources can easily use context information associated with the suspended thread and can be completed when further processing by the primary processing resource is required or desired It is simply passed back to the main processing resource.When the unloaded thread or task is still processing, "NO" exits decision block 912 as indicated by the marked arrow, and upon completion of the wait command, exits block 914 and repeats the query. Otherwise, as indicated in block 916, the distribution controller 212 is used to indicate the primary processing resource: task or thread completion. Thereafter, as indicated in block 918, the main processing resource 202 resumes the thread and repeats the functions of blocks 904 through 918 as desired.As shown and described, the main processing resource or host 202 provides a mechanism for processing threads. A thread is the smallest sequence of instructions that can be managed independently by an O/S or controller. As further described, one or more distribution controllers 212 or one or more graphics processing-specific controllers 334 provide threads for synchronously offloading or distributing the identities (eg, from being transmitted or transmitted from the main processing resource 202). The mechanism indicated in the request. 
As also shown and described, auxiliary or alternative processing resources, such as DSP, GPU, FPGA, ALU, etc., provide mechanisms for performing an unloading thread as directed by the distribution packet.As described, one or more non-transitory processors or computer readable media or media may have processor instructions stored thereon that, when executed, direct the processor to perform the desired functions of the operations : detecting a task unloading condition; suspending execution of a thread executing in the main processing resource; generating a request in response to the task unloading condition; transmitting the request to the distribution controller, the request identifying the execution of the thread, and the main processing resource Different auxiliary processing resources. The function of transmitting the request to the distribution controller can include specifying an application binary interface that directs the auxiliary processing resource to locate the thread-related entry currently stored in the shared memory space.Certain steps in the processes or process flows described in this specification naturally precede other steps to operate the invention as described. However, the systems and methods of the present invention are not limited to the order of the steps described, if such order or order does not alter the functionality of the systems and methods described above. That is, it is recognized that some steps may be performed before, after, or in parallel (substantially simultaneously) with other steps. In some instances, certain steps may be omitted or not performed without departing from the systems and methods described above. Further, words such as "thereafter", "the", "the", "the", "the", "the", and the like are not intended to limit the order of the steps. These terms are only used to guide the reader through the description of the exemplary methods.In addition, one of ordinary skill in the programming arts will be able to write computer code or identify appropriate hardware and/or circuitry to implement the disclosed elements or functions, without departing from the flowcharts and associated examples in the specification. . Thus, the disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary to obtain a sufficient understanding of how to implement and use the systems and methods of the present invention. The functions of the claimed processor-implemented processes are explained in more detail in the above description and in conjunction with the accompanying drawings, which illustrate various process flows.In one or more exemplary aspects as indicated above, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer readable medium (e.g., non-transitory processor readable medium). Computer readable media includes data storage media.A storage medium may be any available media that can be accessed by a computer or processor. Such computer readable media may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, disk storage or other magnetic storage device, or may be used for carrying or storing instructions or The desired program code in the form of a data structure and any other medium that can be accessed by a computer. 
As used herein, magnetic disks and optical disks include compact disk ("CD"), laser disk, optical disk, digital versatile disk ("DVD"), floppy disk, and Blu-ray disk, where the disk typically magnetically replicates data while the disk utilizes Laser to optically replicate data. Combinations of the above should also be included within the scope of non-transitory computer readable media.Accordingly, while the selected aspects have been illustrated and described in detail, it will be understood that the various embodiments of the invention may Replacement and change. |
An apparatus is provided which comprises: a comparator circuitry (e.g., auto-zero comparator) to having a first input, a second input, a third input; and an output; a first device (e.g., a low-side switch) coupled to the first and second inputs of the comparator; and a circuitry (e.g., a self-tuning logic) to generate a digital code which represents a comparator offset adjustment with reference to detection of current through a second device (e.g., an inductor), wherein the digital code (e.g., a multibit digital signal) is provided to the third input of the comparator circuitry. |
1.A device that includes:A comparator circuit, the comparator circuit having a first input, a second input, and a third input;A first device coupled to the first input and the second input of the comparator circuit; andA circuit for generating a digital code that represents a comparator offset adjustment, the comparator offset adjustment referring to the detection of the current flowing through the second device, wherein the digital code is provided to the comparator The third input of the circuit.2.The device of claim 1, wherein the comparator circuit comprises:A first AC coupling capacitor and a second AC coupling capacitor, the first AC coupling capacitor and the second AC coupling capacitor are coupled to the first input and the second via a first controllable switch and a second controllable switch, respectively enter;A capacitor coupled to one of the first input or the second input of the comparator circuit via a third controllable switch; andAn analog-to-digital converter (ADC) coupled to the third input of the comparator circuit, wherein the output of the ADC is switchably coupled to the terminal of the capacitor.3.The device of claim 2, wherein the comparator circuit includes a fourth input for receiving a common mode voltage, and wherein the common mode voltage is coupled to the first AC coupling capacitor and the second AC Coupling capacitor.4.The device of claim 1, wherein the comparator circuit is power gated.5.The apparatus of claim 1, wherein the first device is a part of a low-side switch of a DC-DC converter.6.The device, wherein the circuit includes:A first transistor, which is coupled to the second device;A second transistor, which is coupled in series with the first transistor;A buffer coupled to the first transistor and the second transistor; andA flip-flop coupled to the buffer, wherein the clock input of the flip-flop is controlled by a signal received by a low-side switch, wherein the first device is a part of the low-side switch, and wherein The output of the trigger is used to provide the detection of the current flowing through the second device.7.7. The device of claim 6, wherein the circuit includes a counter for counting up or down the value of the digital code according to the output of the flip-flop.8.The device of claim 6, wherein the gate terminal of the first transistor is coupled to a power supply node.9.The device of claim 6, wherein the gate terminal of the second transistor is coupled to ground.10.7. 
The device of claim 6, wherein the signal received by the low-side switch is delayed before being provided as a clock input of the flip-flop.11.The apparatus according to any one of claims 1 to 10, wherein the second device is an inductor.12.A device that includes:A high-voltage side switch, which is coupled to the first power rail;A low-voltage side switch, which is coupled in series with the high-side switch, wherein the low-voltage side switch is coupled to ground;An inductor, which is coupled to the high-side switch and the low-side switch;A comparator circuit having a first input, a second input, and a third input, wherein the first input and the second input are coupled to the low-side switch, whereinA digital code is provided to the third input to adjust the offset of the comparator circuit according to the current flowing through the inductor;A current detection circuit for detecting the current flowing through the inductor and providing the detection as an output; andA circuit for receiving the output of the current detection circuit and adjusting the digital code according to the output of the current detection circuit.13.The device of claim 12, wherein the current detection circuit comprises:A first transistor, which is coupled to the inductor;A second transistor, which is coupled in series with the first transistor;A buffer coupled to the first transistor and the second transistor; andA flip-flop, which is coupled to the buffer, wherein the clock input of the flip-flop is controlled by a signal received by the low-side switch.14.The device of claim 12, wherein the comparator circuit comprises:A first AC coupling capacitor and a second AC coupling capacitor, which are coupled to the first input and the second input via a first controllable switch and a second controllable switch, respectively ;A capacitor coupled to one of the first input or the second input of the comparator circuit via a third controllable switch; andAn analog-to-digital converter (ADC) coupled to the third input of the comparator circuit, wherein the output of the ADC is switchably coupled to the terminal of the capacitor.15.The device of claim 14, wherein the comparator circuit includes a fourth input for receiving a common mode voltage, and wherein the common mode voltage is coupled to the first AC coupling capacitor and the second AC Coupling capacitor.16.15. 
The device of any one of claims 12 to 15, wherein the comparator circuit is power gated.17.A system including:MemoryA processor coupled to the memory, wherein the processor includes a DC-DC converter, and the DC-DC converter includes the device according to claims 1 to 11; andA wireless interface, which is used to allow the processor to communicate with another device.18.A system including:MemoryA processor coupled to the memory, wherein the processor includes a DC-DC converter, and the DC-DC converter includes the device according to claims 12 to 16; andA wireless interface, which is used to allow the processor to communicate with another device.19.One method includes:Generating a digital code that represents an offset adjustment of the comparator, the offset adjustment of the comparator referring to the detection of the current flowing through the second device;Providing the digital code to the third input of the second device; andThe first device is coupled to the first input and the second input of the comparator.20.The method of claim 19, comprising:Coupling a first AC coupling capacitor and a second AC coupling capacitor to the first input and the second input;Coupling a capacitor to one of the first input or the second input of the comparator circuit; andCoupling an analog-to-digital converter (ADC) to the third input of the comparator circuit; andThe output of the ADC is switchably coupled to the terminal of the capacitor.21.The method of claim 20, comprising:Receiving a common mode voltage at the fourth input of the comparator; andThe common mode voltage is coupled to the first AC coupling capacitor and the second AC coupling capacitor.22.The method of claim 19, comprising: power gating the comparator.23.The method of claim 19, comprising: counting up or down the value of the digital code according to the output of the flip-flop.24.A device comprising means for performing the method according to any one of claims 19 to 23.25.One method includes:Providing a digital code to the third input of the comparator to adjust the offset of the comparator according to the current flowing through the inductor;Detecting the current flowing through the inductor;Provide the detected current as output;Receive the output; andAdjust the digital code according to the output. |
Self-tuning zero current detection circuitPriority statementThis application claims the priority of U.S. Patent Application No. 16/144,961 filed on September 27, 2018, entitled "Self-Tuning Zero Current Detection Circuit", the entire content of which is incorporated by reference. this.Background techniqueFully integrated voltage regulator (FIVR) with packaged embedded air core inductors or on-die solenoid inductors with planar cores ensures efficient power transmission and fine-grained in complex system-on-chip (SoC) Wide-range dynamic voltage and frequency scaling (DVFS), while providing fast transient response. It is expected that FIVR provides high conversion efficiency over a wide operating range of output voltage and load current (including light load to medium load), so as to maximize the overall energy efficiency of the SoC under different power states. Phase shedding and switch scaling have been used in continuous conduction mode (CCM) for high-frequency FIVR designs with pulse width modulation (PWM) control to maintain high efficiency at large load currents, and Pulse frequency modulation (PFM) and hysteresis control have been used to achieve high efficiency at light to medium loads.However, for high-speed DC-DC converters operating in discontinuous conduction mode (DCM), fast zero current detection (ZCD) is desired for efficient operation. Non-ideal conditions in ZCD-related analog circuits (such as delays and offsets in comparators) can have a significant impact on overall system efficiency. In modern digital complementary metal oxide semiconductor (CMOS) technology, it is becoming more and more difficult to design such high-performance analog circuits that cause high power, area overhead, and expensive trimming.Description of the drawingsAccording to the detailed description given below and the accompanying drawings of various embodiments of the present disclosure, the embodiments of the present disclosure will be more fully understood, however, the detailed description and accompanying drawings should not be construed as limiting the present disclosure to specific embodiments , But only for illustration and understanding.FIG. 1 illustrates a device of a DC-DC converter according to some embodiments of the present disclosure, the DC-DC converter has zero current detection (ZCD) and associated non-idealities related to (ZCD) Self-tuning logic.Figure 2 illustrates a set of curves showing DC-DC converter output and ZCD detection operation according to some embodiments.Figure 3 illustrates a device including a residual current detection circuit for ZCD detection according to some embodiments.Figure 4 illustrates a device including a comparator with offset calibration for ZCD detection according to some embodiments.FIG. 5 illustrates the device of the comparator of FIG. 4 according to some embodiments.FIG. 6 shows a set of curves showing waveforms for residual current detection with negative residual current according to some embodiments.FIG. 7 illustrates a set of curves showing waveforms for residual current detection with positive residual current according to some embodiments.Figure 8 illustrates a set of curves showing the waveform of a ZCD with a self-tuning loop according to some embodiments.FIG. 
9 illustrates a set of curves showing a self-tuning operation to obtain low current undershoot according to some embodiments.10A-10B illustrate curves showing measurement data of output voltage ripple in light-load PFM operation according to some embodiments, in which the curve of FIG. 10A shows a curve caused by the resonance effect of a power transmission network (PDN) Double trigger, and Figure 10B shows that the programmable off time is effective in preventing re-triggering.11A-11B illustrate curves showing the transient measurement waveforms of the reference step with and without automatic on-time adjustment, respectively, according to some embodiments.FIGS. 12A-12B illustrate curves respectively showing the transient waveforms of output current loading and unloading under high-speed on-ship load.Figures 13A-13D illustrate the measured efficiency data and load current for different output voltages, the efficiency and output voltage for constant and variable (automatic adjustment) on-time, and the inductor power consumption spectrum, respectively. Curve.FIG. 14 illustrates a smart device or a computer system or a system on chip (SoC) with a DC-DC converter according to some embodiments, the DC-DC converter has ZCD and a device for mitigating non-idealities related to ZCD.Detailed waysOne of the challenges for efficient DCM operation at high switching frequencies is to quickly and accurately detect the zero-crossing of the inductor current. The comparator of the DC-DC converter at both ends of the low-side n-type switch is usually used for DCM. However, compared with the tens of millivolt voltage drop across the low-side n-type switch, the low-power comparator (especially in the scaling process node) exhibits a larger random offset. This can reduce efficiency and create electromagnetic interference (EMI) or radio frequency interference (RFI) problems. The main challenge for compensating the delay of the comparator and gate driver is accurate and efficient zero current detection (ZCD). In addition, when the DC-DC converter is to operate at a high switching frequency (for example, 100 MHz or higher), the challenge is to design a sufficiently fast comparator. In a low voltage DC-DC converter, the voltage signal at the input of the comparator can be very small (for example, tens of millivolts). In this way, even small offsets and delays in the comparator can lead to large residual inductor currents.Since the current in the inductor (for example, 2.5nH) can experience a large negative drift in a short period of time, the conversion efficiency is strongly controlled by the accuracy and speed of the ZCD. Although small ZCD comparator offsets and delays are desired, it is also necessary to minimize its total power consumption in order to achieve high conversion efficiency under light loads (for example, less than 10 mA).The delay of the conventional comparator causes a significant negative inductor current, which results in significant power consumption. At higher switching frequencies (for example, 100MHz or higher), this challenge becomes more severe. In modern digital CMOS process technology nodes, comparator offset can also significantly cause power consumption, because the device mismatch is more serious in modern deep sub-micron CMOS process technology nodes. 
Although some designs deliberately introduce an offset in the comparator design to compensate for the delay, the introduction of such a one-time deliberate offset in modern CMOS technology nodes has become a challenge because it cannot be accurately controlled across process, voltage, and temperature changes. Offset. Additionally, the random process mismatch can be larger than the offset required to compensate for the delay, which eliminates the benefits of this method.Some embodiments use a self-tuning mechanism to compensate for offset and delay. In some embodiments, a controlled negative offset is added to the ZCD comparator to compensate for the comparator offset and loop delay. In some embodiments, after the ZCD comparator has triggered, the circuit is used to detect the residual current. Then, the detected information is used to increase or decrease the comparator offset by incrementing or decrementing the register value. In this way, the non-idealities in the ZCD process (for example, comparator delay, loop delay, random offset, etc.) are alleviated, allowing fast and accurate operation of the DC-DC converter. In addition, without affecting the efficiency of the system, the design challenge of analog comparator design is significantly relaxed. Other technical effects will be apparent from various drawings and embodiments.In the following description, many details are discussed to provide a more thorough description of the embodiments of the present disclosure. However, it will be obvious to those skilled in the art that the embodiments of the present disclosure can be practiced without these specific details. In other instances, well-known structures and devices are shown in the form of block diagrams rather than in detail to avoid obscuring the embodiments of the present disclosure.Note that in the corresponding drawings of the embodiment, the signals are represented by lines. Some lines may be thicker to indicate more constituent signal paths, and/or may have arrows at one or more ends to indicate the main information flow direction. Such instructions are not intended to be limiting. Rather, these lines together with one or more exemplary embodiments are used to facilitate an easier understanding of the circuit or logic unit. Any signal represented as indicated by design needs or preferences may actually include one or more signals that can propagate in any direction and can be implemented with any suitable type of signal scheme.The term "device" can generally refer to a device according to the context in which the term is used. For example, a device may refer to a stack of layers or structures, a single structure or layer, the connection of various structures having active and/or passive elements, and the like. Generally, the device is a three-dimensional structure with a plane along the x-y direction in an x-y-z rectangular coordinate system and a height along the z direction. 
The plane of the device may also be the plane of the equipment including the device.Throughout the specification and in the claims, the term "connection" means a direct connection, such as an electrical, mechanical, or magnetic connection between connected things, without any intermediate devices.The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between connected things, or an indirect connection through one or more passive or active intermediate devices.The term "adjacent" here generally refers to the location of a thing adjacent to another thing (for example, close to or with one or more things in between) or adjacent to another thing (for example, abutting against it).The term "circuit" or "module" may refer to one or more passive and/or active components arranged to cooperate with each other to provide a desired function.The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meanings of "a", "a" and "the" include plural designations. The meaning of "in" includes "in" and "on".The term "scaling" generally refers to converting a design (schematics and layout) from one process technology to another, and then reducing the layout area. The term "scaling" generally also refers to shrinking the layout and devices within the same technology node. The term "scaling" can also refer to adjusting (e.g., slowing down or accelerating-ie, reducing or enlarging, respectively) the frequency of a signal relative to another parameter (e.g., power supply level).The terms "substantially", "close", "approximately", "almost" and "approximately" generally mean within +/- 10% of the target value. For example, unless otherwise specified in the clear context in which it is used, the terms "substantially equal," "approximately equal," and "approximately equal" mean that there are only incidental changes between the things described. In the art, this change is usually no more than +/-10% of the predetermined target value.Unless otherwise specified, the use of ordinal adjectives "first", "second", and "third" to describe common objects only indicates that different instances of the same object are being referenced, not intended to imply that the objects described in this way are in time , Space, sorting or any other way must be in the given order.For the purposes of this disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of this disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or ( A, B and C).The terms "left", "right", "front", "rear", "top", "bottom", "above", "below", etc. (if any) in the specification and claims are used for descriptive purposes , Not necessarily used to describe permanent relative position. For example, the terms "above", "below", "front", "rear", "top", "bottom", "above", "below", and "up" as used herein refer to a component within a device The relative position of, structure or material with respect to other reference components, structures or materials, where this physical relationship is worth noting. These terms are used herein for descriptive purposes only, and are mainly in the context of the z-axis of the device, and therefore may be relative to the orientation of the device. 
Therefore, if the device is oriented upside down with respect to the context of the provided drawings, the first material "above" the second material in the context of the drawings provided herein may also be "below" the second material. In the context of materials, a material disposed above or below another material may be in direct contact or may have one or more intermediate materials. Moreover, one material disposed between the two materials may directly contact the two layers or may have one or more intermediate layers. Instead, the first material "on" the second material is in direct contact with the second material. In the context of component assembly, a similar distinction will be made.The term "between" may be used in the context of the z-axis, x-axis, or y-axis of the device. The material between the two other materials can be in contact with one or both of these materials, or can be separated from both of these two other materials by one or more intermediate materials. Therefore, the material between the two other materials can be in contact with either of these two other materials, or can be coupled to the two other materials through an intermediate material. The device between two other devices may be directly connected to one or two of these devices, or may be separated from both of these two other devices by one or more intermediate devices.Here, the term "back end" generally refers to the portion of the die opposite to the "front end", and where the IC (Integrated Circuit) package is coupled to the IC die bump. For example, high-level metal layers (e.g., metal layer 6 and above in a ten-metal stacked die) and corresponding vias closer to the die package are considered to be part of the back end of the die. In contrast, the term "front end" generally refers to the part of the die that includes the following items: the active area (e.g., where transistors are made), and lower-level metal layers and closer to the active area (e.g., in a ten-metal stacked die) The metal layer 5 and below) corresponding through holes.For the purpose of this embodiment, the transistors in the various circuits and logic blocks described herein are metal oxide semiconductor (MOS) transistors or derivatives thereof, where the MOS transistor includes a drain, a source, a gate, and a body terminal. Transistor and/or MOS transistor derivatives also include Tri-Gate and FinFET transistors, all-around gate cylindrical transistors, tunnel FETs (TFETs), square wiring or rectangular strip transistors, ferroelectric FETs (FeFET) or other transistors that implement transistor functions Devices (e.g., carbon nanotubes or spintronic devices). That is, the symmetric source and drain terminals of the MOSFET are the same terminal and can be used interchangeably here. On the other hand, TFET devices have asymmetric source and drain terminals. Those skilled in the art will understand that other transistors may be used without departing from the scope of the present disclosure, for example, bipolar junction transistors—BJT PNP/NPN, BiCMOS, CMOS, etc.It should be pointed out that those elements in each figure that have the same reference numerals (or names) as the elements of any other figure can operate or function in any manner similar to the described manner, but are not limited thereto.Although various embodiments are discussed here with reference to a step-down DC-DC converter, the embodiments are not limited thereto. 
For example, the embodiments can be used for boost converters, low dropout regulators, and other types of regulators.FIG. 1 illustrates an apparatus 100 of a DC-DC converter according to some embodiments of the present disclosure, the DC-DC converter has zero current detection (ZCD) and associated non-idealities related to (ZCD) The self-tuning logic. In some embodiments, the device 100 includes an input power rail Vin, an output power rail Vout, a high-side switch 101, a low-side switch 102, a ZCD comparator 103, a self-tuning logic 104, a residual current detection circuit 105, a load 106, an inductor L, load capacitance CL, voltage divider with resistive devices R1 and R2 and capacitive device C1, comparator 108, pulse train finite machine (PFM) logic 107 (or pulse code modulation logic), digital Ton logic 109, Tuning current source Is, tunable capacitor C2. The resistive devices can be R1 and R2 that can be implemented using passive resistors, or active devices (such as transistors operating in a linear region). Passive resistors can be located at the back end of the die, and transistors can be located at the front end of the die. The capacitive device here can be implemented using passive capacitors (e.g., metal capacitors) or active devices (e.g., transistors operating as capacitors). In some embodiments, the capacitive device is a hybrid device in that it includes a metal capacitor and a transistor-based capacitor. The metal capacitor can be located at the back end of the die, and the transistor can be located at the front end of the die.The high-side and low-side switches 101 and 102 are driven by a pulse modulation train of pulses PDrv and NDrv generated by the PFM logic 107. The PFM logic 107 may include delay lines, level shifters, registers, and combinational logic. In some embodiments, the high-side and low-side switches 101 and 102 further include bias transistors coupled in series with the switching transistors of the high-side and low-side switches 101 and 102, respectively. These bias transistors are biased by, for example, Vin/2, Vcc, or Vcc/2. The node Vx coupling the high-side and low-side switches 101 and 102 is coupled to the inductor L, which is coupled to the load capacitor CL, the output power rail Vout, the voltage divider, and the load 106. The output power rail Vout provides a regulated output power Vout for the load 106 (eg, processor core, cache, I/O circuit, or any integrated on-chip or off-chip circuit).The output Vo,div of the voltage divider is compared with the reference voltage Vref by the comparator 108. The reference voltage Vref can be converted from a programmable digital code by a digital-to-analog converter (DAC). The output of the comparator 108 is the Up/Dn indicator, which increases or decreases the pulse width of PDrv and/or NDrv and/or the switching frequency of PDrv and/or NDrv to adjust Vout until Vo,div and Vref are substantially the same.In various embodiments, for ZCD, a ZCD comparator 103 is provided, which detects zero current by comparing the voltages V1 and V2 across the low-side switch 102. The output of the comparator 103 is Cmp_out, which is used to turn on/off the low-side switch 102 when detecting the residual inductor current. 
The ZCD comparator 103 includes circuits for power gating, auto-zeroing, and digital self-tuning to reduce power consumption while maintaining high accuracy and speed.Another challenge for high-frequency DCM operation stems from the resonance effect of the distributed output power transmission network (PDN) and ceramic capacitors on the package. This may cause undesired re-triggering of the PFM pulses for PDrv and NDrv, resulting in significantly higher output voltage ripple on the Vx node under light load conditions. To suppress re-triggering, some embodiments use a programmable forced off time (generated by PFM logic 107), which prevents triggering of a new pulse within a certain period of time after the end of the previous pulse. In one example, a turn-off time of about 1 ns is sufficient to prevent double pulses as shown by curves 1000 and 1020 in FIGS. 10A-10B, respectively.Referring again to Figure 1, the constant on-time DCM operation causes a large change in the inductor peak current over a wide output voltage range (for example, the range of 0.7 to 1.2V), thereby reducing the peak inductor current far exceeding the rated target value Conversion efficiency at low output voltage. For example, under a 1.2V, 500mA maximum output load, an on-time of 7.5ns limits the inductor peak current to 1.2A. At 0.7V output voltage, the same turn-on time increases the inductor peak current to 2.7A, which affects efficiency. Some embodiments use a digitally controlled on-time Ton generator 109 that utilizes digital input/output voltage commands available from the SoC power management unit to calculate the correct on-time for a specific operating load range and inductor peak current target. These digital input/output voltage commands include Vout_code (for example, a digital code indicating the output voltage Vout), Vin_code (for example, a digital code indicating the input voltage Vin), and a peak current code Ipeak_code (for example, a digital code indicating the peak current on node Vx). Code).In addition to using an auto-zeroing comparator, a sensor is added that can detect the residual current in the inductor based on the overshoot or undershoot on the switch node Vx after the low-side switch 102 is turned off. Then, use this information to increment or decrement the register value. This register controls the negative offset of the comparator through the DAC, which pre-bias the capacitor that is subsequently connected to the AC coupling capacitor. Through charge sharing, the voltage on the AC coupling capacitor is changed, thereby effectively introducing offset. The offset is now a function of the pre-bias voltage that can be controlled by the DAC.Figure 2 illustrates a set of curves 200 showing DC-DC converter output and ZCD detection operation according to some embodiments. Here, the waveform 201 illustrates the inductor current IL, the waveform 202 illustrates the switch output voltage Vx, and the waveform 203 illustrates the ZCD mechanism. When the inductor current drops below zero, the ZCD mechanism should be activated. In the absence of a self-tuning mechanism, the comparator delay of the ZCD comparator causes a delay in turning on the ZCD circuit loop, resulting in power consumption. In various embodiments, the comparator delay is reduced or eliminated through a self-tuning mechanism that allows the ZCD comparator to turn on when the inductor current just drops below zero. In this way, the power consumption derived from the negative inductor current is reduced. 
Once the inductor current is positive, the ZCD mechanism is disabled.FIG. 3 illustrates an apparatus 300 including a residual current detection circuit for ZCD detection according to some embodiments. The device 300 includes p-type transistors MP1 and MP2 of the high-side switch 101, n-type transistors MN1 and MN2 of the low-side switch 102, capacitors C1 and C2, and a residual current detection circuit 105. In some embodiments, the residual current detection circuit 105 includes the following devices coupled together as shown: n-type transistors MNr1 and MNr2, delay buffer 301, inverter 302, buffer 303, and sequence coupled together in series. Logic (eg, flip-flop) 304.In order to allow accurate tuning of the offset of the comparator, the residual current detection circuit 105 reliably detects the residual current in the inductor L. The residual current detection circuit 105 observes the switch node (Vx) voltage at the correct point in time (for example, when Vx rises above zero). In some embodiments, the transistor MNr1 is a protection device that limits the voltage to the digital power supply (for example, Vcc or Vin/2). When the converter input voltage Vin is higher than the digital Vcc, the protection device is used to protect other circuits in the residual current detection circuit 105. The output Vx2 is a digital signal that can be fed to the sequence unit 304. In order to sample the signal at the correct time, a delayed version of the low-side switch gate signal NDrv is used. For example, NDrv is delayed by the buffer 301 and inverted by the inverter 302 to sample the buffered output Vx2. These delay buffers 301 are tuned in corners simulation to ensure reliable detection across process, voltage, and temperature (PVT). The sampled output Vx_detect is received by the self-tuning logic 104. In some embodiments, the self-tuning logic 104 applies the Vx_detect output to increment or decrement the offset code. For example, the up/down counter of the self-tuning logic 104 is incremented or decremented to update the offset code as the output value of the counter. The operation of the residual current detection circuit 105 will be described with reference to FIGS. 6-7.Figure 4 illustrates a device 400 including a comparator with offset calibration for ZCD detection according to some embodiments. In some embodiments, the device 400 includes AC coupling capacitors AC_Cap1, AC_cap2, switches sw1, sw2, sw3, sw4, sw5, sw6, sw7, sw8, and sw9, a power gated comparator 401, and a digital-to-analog converter (DAC) 402 And capacitive device Ctrim. AC coupling capacitors are coupled between nodes n1 and n3 and n3 and n4, respectively. The common mode voltage Vcm is provided to the nodes n3 and n4 via the switches sw4 and sw6, and the ground or 0V is provided to the nodes n1 and n2 via the switches sw3 and sw4. The input to the comparator 103 is In+ (e.g., V1) and In- (e.g., V2), and the output is Comp_out.Here, the "Z" and "en" signals are derived from the high-side switching signal PDrv and the low-side switching signal NDrv. When the high-side switch 101 is turned on (for example, Z and en become high), the current pulse starts, and after a certain time has passed, the high-side switch 101 is turned off and the low-side switch 102 is turned on (for example, Z becomes low). , En remains high). After the ZCD comparator 103 triggers the output (for example, Cmp_out goes low), the low-side switch 102 is turned off (for example, en goes low). 
According to some embodiments, both the high-side switch 101 and the low-side switch remain off until a new inductor current pulse begins.The switches sw1, sw2, and sw8 can be controlled by Zb (the inverse of Z), and the switches sw3, sw4, sw5, sw6, sw7, and sw9 can be controlled by the zero signal Z. Zb is generated by inverter 403. When Z is high, this causes switches sw1, sw2, and sw8 to close, and switches sw3, sw4, sw5, sw6, sw7, and sw9 to open. When Zb (the inverse of Z) is high, this closes sw3, sw4, sw5, sw6, sw7, and sw9, and opens sw1, sw2, and sw8.In some embodiments, the ZCD comparator 103 is power gated between DCM pulses to save bias current. This power gating can be performed by an enable signal (en) generated by the self-tuning logic 104. According to some embodiments, during this idle period (or gating period), the input bias branch remains on to enable rapid transition to the active state. The input bias branch provides common mode voltage Vcm to nodes n3 and n4. Once the inductor current pulse is started, the device 400 enters the auto-zero mode, and the high-side power switch 101 is turned on at the same time. In the auto-zeroing mode, the internal compensation network for the comparator 401 is activated to allow stable operation in feedback, and the offset voltage is sampled on the offset storage capacitor Ctrim at the input. After the high-side switch 101 is turned off, the compensation network and the feedback connection are disabled, and the comparator enters the comparison mode.In some embodiments, the capacitor Ctrim is precharged to a numerically controlled voltage by a capacitive DAC (eg, a 5-bit DAC). The DAC 402 receives an offset code (eg, a multi-bit signal) from the self-tuning logic 104. The offset code is based on the output Cmp_out of the comparator 401. For example, the offset code is the output of the up/down counter of the self-tuning logic 104, and the self-tuning logic 104 uses the Vx_detect signal to increment or decrement its up/down counter. During the transition of Vx from high to low side, Ctrim is connected to the offset storage capacitor AC_Cap2 to introduce a small controllable offset. The offset is controlled by an additional loop that uses the residual current detector 105 to adjust the comparator trip point, thereby correcting for any residual offset and circuit delay. As shown in Figure 9, the measurement results show that after the offset register is reset, the loop converges to a very low residual current.FIG. 5 illustrates a device 500 (e.g., 401) of the comparator of FIG. 4 according to some embodiments. The device 500 includes p-type transistors MP1c, MP2c, and MP3c; n-type transistors MN1en, MN1c, MN2en, MN2c, MN3c, and MN4c; a feedback compensation resistor Rfb and a switch swc1 that can be controlled by Z, and are coupled together as shown. During the power gating or idle period, the enable en is low, which cuts off the current path through the comparator 500. Although the embodiment shows the device 500 as the comparator 401, other low-offset comparators capable of power gating can also be used.FIG. 6 illustrates a set of curves 600 showing waveforms for residual current detection with negative residual current according to some embodiments. FIG. 7 illustrates a set of curves 700 showing waveforms for residual current detection with positive residual current according to some embodiments. As shown in FIGS. 
6-7, after the transistor MN1 of the low-side switch 101 is turned off (for example, NDrv becomes low), the switch node voltage (Vx) changes depending on whether there is a negative residual current or a positive residual current in the inductor L Is high or low. Even a small residual current causes a large voltage swing on Vx, which can be easily detected. Another advantage of using this mechanism for detection is that it is a direct indicator of residual current and does not suffer from further measurement errors like other current sensing methods. Although Vx_detect is a one-bit signal (indicating positive or negative current), in some embodiments, a multi-bit Vx_detect signal can be used to determine how negative or positive the inductor current is.Figure 8 illustrates a set of curves 800 showing the waveform of a ZCD with a self-tuning loop according to some embodiments. FIG. 8 shows the simulation result of the full DC-DC converter in operation with the ZCD self-tuning loop of some embodiments enabled. It can be seen from the inductor current waveform that the initial comparator delay causes a significant undershoot in the inductor current IL, which may be undesirable. In the subsequent cycle, the self-tuning loop slowly tunes the comparator offset (Offset_code) to achieve a residual inductor current close to zero. According to various embodiments, due to residual current detection, all offsets and delays in the loop can be compensated, and very small steady-state errors can be achieved.FIG. 9 illustrates a set of curves 900 showing a self-tuning operation to obtain low current undershoot, according to some embodiments. Curve 901 shows the ripple in Vx as a function of time. After self-dressing (e.g., offset compensation), an enlarged version of the moire is shown in curve 902. As seen in the enlarged version, the smooth Vx transition represents low residual current after self-tuning (for example, no diode conduction is seen in this example). Curve 903 illustrates the undershoot of devices using various embodiments after self-trimming.10A-10B illustrate curves 1000 and 1020 respectively showing measurement data of output voltage ripple in light-load PFM operation according to some embodiments, wherein the curve 1000 of FIG. 10A shows that due to the power transmission network (PDN ) Double triggering caused by resonance effect, and Figure 10B 1020 shows that the programmable off time is effective in preventing re-triggering.11A-11B illustrate curves 1100 and 1120 respectively showing the transient measurement waveforms of the reference step with and without automatic on-time adjustment, according to some embodiments. Measurements show that, contrary to the constant on-time scheme, before and after the voltage conversion, the inductor peak current remains approximately constant after the output voltage changes according to a constant pulse frequency.12A-12B illustrate curves 1200 and 1220 respectively showing the output current loading and unloading transient waveforms under a high-speed on-ship load (for example, a rise time of 50 ps). Compared with a constant on-time implementation, the conversion efficiency measurement results over a wide output voltage range demonstrate the effectiveness of the digitally controlled variable on-time scheme of various embodiments. The transient performance of the FIVR control loop for reference voltage steps and load current transients is measured. 
A fast FIVR response to 200mA load and unload transients enabled by a high-speed comparator was demonstrated using an on-chip load with a turn-on time of less than ns.Figures 13A-13D illustrate curves showing measured efficiency data and load current for different output voltages, efficiency and output voltage for constant and variable (automatic adjustment) on-time, and inductor power consumption spectra, respectively 1300, 1320, 1330 and 1340. Figures 13A-13D show FIVR conversion efficiency measurement results and main loss components under a wide range of output voltage and load current. In this example, at a load current of 500mA, efficiencies of 88%, 82%, and 75% are achieved for output voltages of 1.2V, 1V, and 0.8V, respectively. Due to the low power consumption of the 33uA PFM controller, the efficiency is quite stable under a load of 5mA-500mA, but the efficiency drops significantly below 5mA. As shown by the measured inductor current and the spectral components of the inductor's AC resistance characteristics, this efficiency drop is mainly caused by the low quality factor hollow core embedded in the coreless ultra-thin package, which becomes the main loss component under light load conditions Caused by AC resistance loss in the inductor. For larger loads, the greater part of the inductor current spectrum is located at DC, thus reducing inductor losses. Under light load, the DC component becomes smaller, and the AC resistance loss dominates.FIG. 14 illustrates a smart device or a computer system or a system on chip (SoC) with a DC-DC converter according to some embodiments, the DC-DC converter has ZCD and a device for mitigating non-idealities related to ZCD. FIG. 14 illustrates a smart device or a computer system or a system on chip (SoC) with a DC-DC converter according to some embodiments, the DC-DC converter has ZCD and a device for mitigating non-idealities related to ZCD. Figure 14 illustrates a block diagram of an embodiment of a mobile device in which a flat interface connector can be used. In some embodiments, the computing device 1600 represents a mobile computing device (such as a computing tablet, mobile phone, or smart phone), a wireless-enabled e-reader, or other wireless mobile device. It will be understood that some components are shown in general, and not all components of such a device are shown in the computing device 1600.In some embodiments, according to some of the embodiments discussed, the computing device 1600 includes a processor 1610 having a DC-DC converter with ZCD and equipment for mitigating ZCD-related non-idealities . According to some embodiments, other blocks of the computing device 1600 may also include a DC-DC converter with a ZCD and a device for mitigating ZCD-related non-idealities. Various embodiments of the present disclosure may also include a network interface within 1670, such as a wireless interface, so that system embodiments may be incorporated into wireless devices (e.g., cellular phones or personal digital assistants).In one embodiment, the processor 1610 can include one or more physical devices, such as a microprocessor, an application processor, a microcontroller, a programmable logic device, or other processing modules. The processing operations performed by the processor 1610 include execution of an operating platform or operating system on which applications and/or device functions are executed. 
The processing operations include operations related to I/O (input/output) performed by a human user or with other devices, operations related to power management, and/or operations related to connecting the computing device 1600 to another device. The processing operations may also include operations related to audio I/O and/or display I/O.In one embodiment, the computing device 1600 includes an audio subsystem 1620, which represents hardware (eg, audio hardware and audio circuits) and software (eg, drivers, codecs) components associated with providing audio functions to the computing device . Audio functions can include speaker and/or headphone output and microphone input. In some embodiments, the audio subsystem 1620 includes device and/or machine executable instructions to avoid self-hearing according to some embodiments. The device for such functions can be integrated into the computing device 1600 or connected to the computing device 1600. In one embodiment, the user interacts with the computing device 1600 by providing audio commands that are received and processed by the processor 1610.The display subsystem 1630 represents hardware (e.g., display device) and software (e.g., driver) components that provide visual and/or tactile displays for the user to interact with the computing device 1600. The display subsystem 1630 includes a display interface 1632, and the display interface 1632 includes a specific screen or hardware device for providing a display to the user. In one embodiment, the display interface 1632 includes logic separate from the processor 1610 to perform at least some processing related to display. In one embodiment, the display subsystem 1630 includes a touch screen (or touch pad) device that provides both output and input to the user.The I/O controller 1640 represents hardware devices and software components related to the interaction with the user. The I/O controller 1640 is operable to manage hardware that is part of the audio subsystem 1620 and/or the display subsystem 1630. Additionally, the I/O controller 1640 exemplifies a connection point for an additional device connected to the computing device 1600, and the user can interact with the system through the additional device. For example, devices that can be attached to the computing device 1600 may include a microphone device, a speaker or stereo system, a video system or other display device, a keyboard or keypad device, or other I/O devices for use with specific applications (such as , Card reader or other device).As mentioned above, the I/O controller 1640 can interact with the audio subsystem 1620 and/or the display subsystem 1630. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of the computing device 1600. Additionally, audio output can be provided instead of display output, or in addition to display output. In another example, if the display subsystem 1630 includes a touch screen, the display device also serves as an input device, which can be at least partially managed by the I/O controller 1640. There can also be additional buttons or switches on the computing device 1600 to provide I/O functions managed by the I/O controller 1640.In one embodiment, the I/O controller 1640 manages devices such as accelerometers, cameras, light sensors, or other environmental sensors, or other hardware that can be included in the computing device 1600. 
The input can be part of direct user interaction, as well as providing environmental input to the system to affect its operation (such as filtering noise, adjusting the display for brightness detection, applying flash or other features to the camera).In one embodiment, computing device 1600 includes power management 1650 that manages battery power usage, battery charging, and features related to power saving operations. The memory subsystem 1660 includes a memory device for storing information in the computing device 1600. The memory can include non-volatile (in the case of interruption of power to the memory device, the state does not change) and/or volatile (in the case of interruption of power to the memory device, the state is indeterminate) memory device . The memory subsystem 1660 can store application data, user data, music, photos, documents or other data, as well as system data (whether long-term or temporary) related to the execution of applications and functions of the computing device 1600.The elements of an embodiment are also provided as a machine-readable medium (e.g., memory 1660) for storing computer-executable instructions (e.g., instructions for implementing any of the other processes discussed herein). The machine-readable medium (for example, the memory 1660) may include, but is not limited to, flash memory, optical disk, CD-ROM, DVD ROM, RAM, EPROM, EEPROM, magnetic or optical card, phase change memory (PCM) or suitable for storing electronic or computer Other types of machine-readable media that can execute instructions. For example, an embodiment of the present disclosure may be downloaded as a computer program (for example, BIOS), and the computer program may be sent from a remote computer (for example, a server) to a remote computer (for example, a server) via a communication link (for example, a modem or a network connection) through a data signal. The requesting computer (for example, the client) transmits.The connection 1670 includes hardware devices (for example, wireless and/or wired connectors and communication hardware) and software components (for example, drivers, protocol stacks) for enabling the computing device 1600 to communicate with external devices. The computing device 1600 may be a separate device, such as other computing devices, wireless access points or base stations, and peripheral devices such as headsets, printers, or other devices.Connection 1670 may include many different types of connections. In summary, the computing device 1600 is exemplified with a cellular connection 1672 and a wireless connection 1674. Cellular connection 1672 generally refers to a cellular network connection provided by a wireless operator, such as a cellular network connection provided via: GSM (Global System for Mobile Communications) or variants or derivatives, CDMA (Code Division Multiple Access) or variants or derivatives Devices, TDM (Time Division Multiplexing) or variants or derivatives or other cellular service standards. The wireless connection (or wireless interface) 1674 refers to a non-cellular wireless connection, and can include a personal area network (such as Bluetooth, near field, etc.), a local area network (such as Wi-Fi), and/or a wide area network (such as WiMax) or Other wireless communications.The peripheral connection 1680 includes hardware interfaces and connectors, and software components (for example, drivers, protocol stacks) for making peripheral connections. 
It should be understood that the computing device 1600 can be a peripheral device to other computing devices ("to" 1682), and can also have a peripheral device connected to it ("from" 1684). The computing device 1600 generally has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on the computing device 1600. Additionally, the docking connector can allow the computing device 1600 to connect to certain peripheral devices that allow the computing device 1600 to control, for example, the output of content to audiovisual or other systems.In addition to proprietary docking connectors or other proprietary connection hardware, the computing device 1600 can also make peripheral connections 1680 via common or standards-based connectors. Common types can include Universal Serial Bus (USB) connectors (which can include any of many different hardware interfaces), DisplayPort (display port) including MiniDisplayPort (mini display port, MDP), high-definition multimedia interface (HDMI) ), Firewire (Firewire) or other types.Reference in the specification to "embodiments," "one embodiment," "some embodiments," or "other embodiments" means that specific features, structures, or characteristics described in conjunction with these embodiments are included in at least some of the embodiments , But not necessarily in all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" do not necessarily all refer to the same embodiments. If the specification mentions that "may", "may" or "can" include a component, feature, structure or characteristic, it is not required to include the specific component, characteristic, structure or characteristic. If the specification or claims refer to "a" or "an" element, that does not mean that there is only one of these elements. If the specification or claims refer to "additional" elements, it does not exclude the presence of more than one additional element.In addition, specific features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, wherever specific features, structures, functions, or characteristics associated with the first embodiment and the second embodiment are not mutually exclusive, the first embodiment can be combined with the second embodiment.Although the present disclosure has been described in conjunction with specific embodiments of the present disclosure, in view of the foregoing description, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art. For example, other memory architectures such as dynamic RAM (DRAM) can use the discussed embodiments. The embodiments of the present disclosure are intended to include all such alternatives, modifications, and variations as falling within the broad scope of the appended claims.In addition, for simplicity of illustration and discussion, and in order not to obscure the present disclosure, well-known power/ground connections to integrated circuit system (IC) chips and other components may or may not be shown in the presented figures. 
Further, in order to avoid obscuring the present disclosure, and in addition, in view of the fact that the arrangement may be shown in block diagram form: the details about the implementation of such block diagram arrangement highly depend on the platform within which the present disclosure will be implemented (ie, such The details should be well within the field of vision of those skilled in the art). In the case of elaborating specific details (for example, a circuit) in order to describe exemplary embodiments of the present disclosure, it should be obvious to those skilled in the art that it is possible to utilize variations of these specific details without these specific details Come to practice this disclosure. The description will therefore be considered illustrative rather than restrictive.Provides a description abstract that will allow the reader to ascertain the nature and points of the technical disclosure. The abstract of the specification is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The appended claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. |
In one embodiment, a device containing sensitive information may be placed in a data security mode. In such a data security mode, certain activities may trigger the partial or full erasure of the sensitive date before the data can be retrieved by an unauthorized user. In one embodiment, the data security mode may be a park mode in which unauthorized physical movement of the device triggers the partial or full erasure of the sensitive data stored in a nonvolatile memory before the data can be retrieved by an unauthorized user. In another aspect of the present description, the earths magnetic field may be used to detect movement of a device in the park mode, and may be used to power the erasure of sensitive data as the device is moved relative to the earths magnetic field. Other aspects are described herein. |
What is claimed is: 1. An apparatus, comprising:a memory configured to store sensitive information in at least a portion of the memory;a detector configured to detect a security event;a selector input configured to input a security mode selection; anda controller coupled to the detector, memory and selector input, said controller configured to receive a security mode selection, and to protect sensitive information stored as data in the at least a portion of the memory, including said controller configured to:place the apparatus carrying the memory in a security mode in response to a received security mode selection; andin response to said detector detecting a first security event while the controller is in the security mode, change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information by reading said portion of said memory. 2. The apparatus of claim 1 wherein said memory is a nonvolatile memory and said detector is a motion detector configured to detect motion of the apparatus wherein said detecting a first security event includes detecting motion of the apparatus carrying said nonvolatile memory. 3. The apparatus of claim 2 wherein the motion detector includes a coil configured to detect motion by generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field wherein said detecting a first security event includes generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field. 4. The apparatus of claim 3 wherein said controller includes a switch configured to direct said generated current to said controller, and wherein said controller is configured to use said generated current to change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information.5. The apparatus of claim 4 wherein said first security mode is a park security mode wherein said controller is configured to:place the apparatus carrying the memory in the park security mode in response to a received park security mode selection; andin response to said motion detector detecting motion of the apparatus carrying said nonvolatile memory while the controller is in the park security mode, change bits of said data of said sensitive information when said apparatus is detected to be in motion while in said park security mode.6. The apparatus of claim 5 wherein said controller is configured to enable said switch when said apparatus is placed in the park security mode, so that said generated current is directed to said controller so that so that bits of said data of said sensitive information are changed by said generated current when said apparatus is in motion while in said park mode. 7. The apparatus of claim 6 wherein the selector input is configured to input a second mode selection other than said park mode, wherein said controller isconfigured to disable said switch when said apparatus is placed in the second mode which disables said directing said generated current to said controller so that any current generated by motion of the coil through the earth's magnetic field when the apparatus is in the second mode is disabled from changing bits of said data of said sensitive information when said apparatus is in motion while in said second mode. 8. 
A computing system for use with a display, comprising: a memory configured to store sensitive information in at least a portion of the memory; a processor configured to write data in and read data from the memory; a video controller configured to display information represented by data in the memory; a detector configured to detect a security event;a selector input configured to input a security mode selection; and a controller coupled to the detector, memory and selector input, said controller configured to receive a security mode selection, and to protect sensitive information stored as data in the at least a portion of the memory, including said controller configured to:place the apparatus carrying the memory in a security mode in response to a received security mode selection; andin response to said detector detecting a first security event while the controller is in the security mode, change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information by reading said portion of said memory. 9. The system of claim 8 wherein said memory is a nonvolatile memory and said detector is a motion detector configured to detect motion of the apparatus wherein said detecting a first security event includes detecting motion of the apparatus carrying said nonvolatile memory. 10. The system of claim 9 wherein the motion detector includes a coil configured to detect motion by generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field wherein said detecting a first security event includes generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field. 11. The system of claim 10 wherein said controller includes a switch configured to direct said generated current to said controller, and wherein said controller is configured to use said generated current to change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information.The system of claim 11 wherein said first security mode is a park security mode wherein said controller is configured to:place the apparatus carrying the memory in the park security mode in response to a received park security mode selection; andin response to said motion detector detecting motion of the apparatus carrying said nonvolatile memory while the controller is in the park security mode, change bits of said data of said sensitive information when said apparatus is detected to be in motion while in said park security mode.13. The system of claim 12 wherein said controller is configured to enable said switch when said apparatus is placed in the park security mode, so that said generated current is directed to said controller so that so that bits of said data of said sensitive information are changed by said generated current when said apparatus is in motion while in said park mode. 14. 
The system of claim 13 wherein the selector input is configured to input a second mode selection other than said park mode, wherein said controller isconfigured to disable said switch when said apparatus is placed in the second mode which disables said directing said generated current to said controller so that any current generated by motion of the coil through the earth's magnetic field when the apparatus is in the second mode is disabled from changing bits of said data of said sensitive information when said apparatus is in motion while in said second mode. 15. A method, comprising:protecting sensitive information stored as data in at least a portion of a memory, said protecting including:selectively placing an apparatus carrying the memory in a security mode; detecting a first event while in the security mode; andin response to said first event detecting, changing bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information by reading said portion of said memory. 16. The method of claim 15 wherein said memory is a nonvolatile memory and wherein said detecting a first event includes detecting motion of the apparatus carrying said nonvolatile memory. 17. The method of claim 16 wherein the motion detecting includes generating a current in a coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field.18. The method of claim 17 wherein said changing bits of said data including directing said generated current to a controller, said controller using said generated current to change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information. 19. The method of claim 16 wherein the placing an apparatus carrying the memory in a security mode includes selectively placing the apparatus in a park security mode, wherein said detecting the first event includes detecting whether the apparatus is in the park security mode, and detecting motion of the apparatus carrying said nonvolatile memory when the apparatus is in the park security mode so that bits of said data of said sensitive information are changed when said apparatus is detected to be in motion while in said park mode. 20. The method of claim 18 further comprising selectively placing the apparatus in a park security mode which enables said directing said generated current to said controller so that motion of the coil through the earth's magnetic field when the apparatus is in the park mode, generates current which is directed to said controller so that bits of said data of said sensitive information are changed by said controller using current generated when said apparatus is in motion while in said park mode. 21. The method of claim 20 further comprising selectively placing the apparatus in a second mode other than said park mode, which disables said directing said generated current to said controller so that any current generated by motion of the coil through the earth's magnetic field when the apparatus is in the second mode is disabled from changing bits of said data of said sensitive information when said apparatus is in motion while in said second mode. |
SECURITY MODE DATA PROTECTIONTECHNICAL FIELDCertain embodiments of the present invention relate generally to nonvolatile memory.BACKGROUNDIn a nonvolatile memory, the data stored in the memory is retained.Accordingly, nonvolatile memory retains data during stand by and even power down conditions. Thus, nonvolatile memory may be used to store and retain data in a variety of devices including portable devices which may lack an internal power source. However, such data retention may not be appropriate for storing sensitive data such as passwords and personal keys, for example, particularly in portable devices which may be stolen or otherwise more readily accessed by unauthorized users.One approach for protecting sensitive data has been to program the operating system of the device to store sensitive data in volatile memory. Accordingly, once the device enters the power down condition, removal of power from the volatile memory typically destroys the data in the volatile memory including any sensitive data stored in the volatile memory.Another approach has been to provide for long range wireless remote control of devices such as cellular telephones, for example, which may be lost or otherwise no longer in the possession of the owner. Such remote control features may permit the rightful owner of the cellular telephone to remotely disable the device or erase sensitive data stored in the memory of the telephone. BRIEF DESCRIPTION OF THE DRAWINGSEmbodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.FIG. 1 depicts a high-level block diagram illustrating selected aspects of a system employing data security in accordance with an embodiment of the present disclosure.FIG. 2 depicts a basic architecture of a memory employing data security in accordance with an embodiment of the present disclosure.FIG. 3 depicts a device having a memory employing data security in accordance with an embodiment of the present disclosure.FIG. 4 depicts one example of operations for data security in a memory in accordance with an embodiment of the present disclosure.DESCRIPTION OF EMBODIMENTSIn the description that follows, like components have been given the same reference numerals, regardless of whether they are shown in different embodiments. To illustrate an embodiment(s) of the present disclosure in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form. Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.In accordance with the present description, techniques including a sensitive information security circuit are provided for enhancing security of sensitive information stored in memory. In one embodiment, at least a portion of a nonvolatile memory of a device may be automatically erased in response to a detected event such as unauthorized movement of the device, for example. It is recognized herein that it may be appropriate to automatically erase sensitive data stored in nonvolatile memory of a device in response to certain events to prevent or inhibit unauthorized access to the sensitive data which may have been stored in the device. 
It is further recognized that such sensitive data erasure may be triggered by events in addition to or instead of unauthorized movement, depending upon the particular application.As used herein, the term "erase" refers to resetting or otherwise changing bits stored in memory to eliminate or increase the difficulty of unauthorized recovery of sensitive data stored in the memory. Thus, bits of sensitive data may be erased by resetting bits from their current state to a logical zero or in some embodiments, by resetting bits from their current state to a logical one. In other embodiments, bits of sensitive data may be erased by randomly flipping states of bits of the sensitive data from their current state to the opposite state. It is appreciated that sensitive data stored in memory may be erased using other bit state changing techniques.It is further appreciated that preserving the security of sensitive information stored in various devices is of growing concern as the number of devices containing sensitive information proliferates. Sensitive information may include passwords, account numbers, or other information of a business, financial or personal nature. In addition, devices containing such information are becoming increasingly small and portable and therefore more vulnerable to being stolen. Sensitive information stored in a memory of a device in the possession of an unauthorized person may be extracted and used or otherwise disseminated by the unauthorized person.Moreover, small form factor devices such as credit cards, identity cards and key cards, for example, may be particularly vulnerable to data breaches. A larger form factor device such as a cellular telephone typically has a battery or other active power source to power security protection. For example, a cellular telephone may have the capability of permitting the owner of the cellular telephone to remotely instruct the cellular telephone to destroy sensitive data in the event the telephone becomes lost or stolen before the information is compromised. By comparison, small form factor devices frequently lack costly long range wireless connections and active power sources for such security features.In one aspect of the present description, a device containing sensitive information may be placed in a data security mode. In such a data security mode, certain activities may trigger the partial or full erasure of the sensitive data before the data can be retrieved by an unauthorized user.In one embodiment, the data security mode may be a "park" mode in which unauthorized physical movement of the device triggers the partial or full erasure of the sensitive data stored in a nonvolatile memory before the data can be retrieved by an unauthorized user. It is appreciated herein that unauthorized access to sensitive data in a device often begins with the device being taken by an unauthorized user and moving the device to another location to open the device to retrieve the sensitive data. In accordance with the present description, once such unauthorized movement begins while the device is in the park mode, erasure of sensitive data by the sensitive information security circuit begins and continues in response to continued movement in the park mode. 
Conversely, upon disabling the park mode of the device, the device may be freely moved by the user without causing the erasure of data.In another aspect of the present description, the earth's magnetic field may be used to detect movement of a device in the park mode, and may be used to power the erasure of sensitive data as the device is moved relative to the earth's magnetic field. As a result, techniques for enhancing security of sensitive information stored in memory as described herein may be utilized by a variety of devices including small form factor devices which may lack an internal power source, for example. It is appreciated that other types of motion detectors may be utilized, depending upon the particular application.Turning to the figures, FIG. 1 is a high-level block diagram illustrating selected aspects of a system implemented, according to an embodiment of the present disclosure. System 10 may represent any of a number of electronic and/or computing devices, that may include a memory device. Such electronic and/or computing devices may include large form computing devices and small form computing devices such as a mainframe, server, personal computer, workstation, telephony device, network appliance, virtualization device, storage controller, portable or mobile devices (e.g., laptops, netbooks, tablet computers, personal digital assistant (PDAs), portable media players, portable gaming devices, digital cameras, mobile phones, smartphones, feature phones, etc.), credit cards, identity cards, key cards or component (e.g. system on a chip, processor, bridge, memory controller, memory, etc.). In alternative embodiments, system 10 may include more elements, fewer elements, and/or different elements. Moreover, although system 10 may be depicted as comprising separate elements, it will be appreciated that such elements may be integrated on to one platform, such as systems on a chip (SoCs).In the illustrative example, system 10 comprises a processor 20 such as a microprocessor or other logic device, a memory controller 30, a memory 40 and peripheral components 50 which may include a sensitive information security circuit in accordance with the present description. The peripheral components 50 may also include, for example, a video controller, input device, output device, storage, network adapter, etc.. The processor 20 may optionally include a cache 25 that may be part of a memory hierarchy to store instructions and data, and the system memory 40 may also be part of the memory hierarchy. Communication between the processor 20 and the memory 40 may be facilitated by the memory controller (or chipset) 30, which may also facilitate in communicating with the peripheral components 50.Storage of the peripheral components 50 may be, for example, nonvolatile storage, such as solid-state drives, magnetic disk drives, optical disk drives, a tape drive, flash memory, etc. The storage may comprise an internal storage device or an attached or network accessible storage. The processor 20 is configured to write data in and read data from the memory 40. Programs in the storage are loaded into the memory and executed by the processor. A network controller or adapter enables communication with a network, such as an Ethernet, a Fiber Channel Arbitrated Loop, etc. 
Further, the architecture may, in certain embodiments, include a video controller configured to render information on a display monitor, where the video controller may be embodied on a video card or integrated on integrated circuit components mounted on a motherboard or other substrate. An input device is used to provide user input to the processor, and may include a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, input pins, sockets, or any other activation or input mechanism known in the art. An output device is capable of rendering information transmitted from the processor, or other component, such as a display monitor, printer, storage, output pins, sockets, etc. The network adapter may embodied on a network card, such as a Peripheral Component Interconnect (PCI) card, PCI-express, or some other I/O card, or on integrated circuit components mounted on a motherboard or other substrate.One or more of the components of the device 10 may be omitted, depending upon the particular application. For example, a network router may lack a video controller, or wireless input/output devices, for example. In another example, small form factor devices such as credit cards, for example, may lack many of the components discussed above and may be limited primarily to logic and memory as well as a sensitive information security circuit as described herein.Any one or more of the memory devices 25, 40, and the other devices 10, 20,30, 50 may include a sensitive information security circuit in accordance with the present description. FIG. 2 shows an example of a memory 56 having a sensitive information security circuit 58 in accordance with one embodiment of the present description. The memory 56 includes an array 60 of rows and columns of bitcells 64 of a nonvolatile memory such as, for example, a Spin Transfer Torque Random Access Memory (STTRAM) which is a type of magnetoresistive Random Access Memory (MRAM). It is appreciated that the memory 56 may be other types of MRAM memory or other types of nonvolatile memory such as single or multi- threshold level NAND flash memory, NOR flash memory, single or multilevel phase change memory (PCM, PRAM), byte addressable three-dimensional (3D) cross-point memory, resistive memory, nanowire memory, ferroelectric transistor memory (F- RAM, FeTRAM), thermal-assisted switching memory (TAS), millipede memory, floating junction gate memory (F JG RAM), battery-backed RAM, memristor-based memory, or a combination of any of the above, or may be a volatile memory such as a DRAM memory, for example.The memory 56 may also include a row decoder, a timer device and I/O devices. Bits of the same memory word may be separated from each other for efficient I/O design. A multiplexer (MUX) may be used to connect each column to the required circuitry during a READ operation. Another MUX may be used to connect each column to a write driver during a WRITE operation. A control circuit 68 performs read operations, write operations and utilizes the security circuit 58 to perform sensitive information security operations to the bitcells 64 as explained below. The control circuit 68 is configured to perform the described operations using appropriate hardware, software or firmware, or various combinations thereof.In one embodiment, a portion 80 of the memory 56 is a subarray of bitcells 64 containing sensitive information. In this example, the operating system of the device has designated the subarray 80 for storing sensitive information. 
The size and location of the subarray 80 may vary, depending upon the particular application. At least a portion of the bits stored in the subarray 80 may be automatically erased in response to a detected event such as unauthorized movement of the device, for example.In this embodiment, the sensitive information security circuit 58 includes a security event detector 82 which detects a security event such as unauthorized movement of the device, for example. In response to detection of the security event, a security circuit logic circuit 84 of the sensitive information security circuit 58 commences erasing at least a portion of the bits stored in the subarray 80 containing the sensitive information, if the device has been placed in a data security mode as represented by a data security mode signal. An example of one such data security mode is a "park" mode in which detection of motion by the detector 82 results in erasure of at least some sensitive information stored in the subarray 80.Accordingly, one example of a suitable security event detector is a motion detector which detects motion of the memory 56 which may be unauthorized motion as indicated by the state of the data security mode signal. It is appreciated that a security event detector 82 in accordance with the present description may detect other types of security events. For example, in a large form factor device having an internal power source, the device entering a power on or power off mode may represent a security event. In such applications, the security event detector 82 may detect the device entering a power on or power off mode. In response, the security circuit logic circuit 84 of the sensitive information security circuit 58 commences erasing at least a portion of the bits stored in the subarray 80 containing the sensitive information, if the device has been placed in a data security mode as represented by a data security mode signal.In some embodiments, such as a small form factor device such as a credit card or key card, for example, the device may lack an internal power source such as a battery to power logic circuitry of the device. Accordingly, in these embodiments, the sensitive information security circuit 58 may optionally include a security circuit power source 86 which powers the security operations of the sensitive information security circuit 58. In one embodiment, the security circuit power source 86 may be an active source of power such as a battery or external line power. In other embodiments, the security circuit power source 86 may be a passive power source. One example of a passive power source of the security circuit power source 86 may include a coil which generates power by electromagnetic induction in response to relative motion of the device with respect to the earth's magnetic fields. Another example, is an internal antennae which may provide power in response to an externally provided RF signal received by the internal antenna. For example, an RFID circuit may be excited with a wireless RF signal provided externally from the device. Yet another example is a photo-voltaic array which generates electricity in response to solar or other radiation. It is appreciated that other active and passive power sources may be provided for the security circuit 58, depending upon the particular application.Although the security circuit logic 84, security event detector 82 and the security circuit power source 86 of the security circuit 58 are depicted separately in the schematic diagram of FIG. 
2, it is appreciated that one or more of these functions may be combined so as to be provided by a single device. For example, FIG. 3 shows a small form factor device 100 having a sensitive information security circuit 58 in accordance with one embodiment of the present description. In this example, the sensitive information security circuit 58 includes security circuit logic 84 similar to the security circuit logic 84 discussed above in connection with FIG. 2. Here, the functions of the security event detector 82 and the security circuit power source 86 of FIG. 2 are provided by a combined device which includes a multi-turn coil 130 embedded in a plastic substrate 140 of the device 100 which may be a credit card or key card, for example. It is appreciated that the substrate 140 may be made of any suitable material, depending upon the particular application.In accordance with one aspect of the present description, the earth's magnetic field is utilized to provide for data security. In the embodiment of FIG. 3, the coil 130 is placed around the device 100 to detect motion and to generate electric current. As the device 100 is moved, the earth's magnetic field inside the coil 130 changes, causing current to flow through the coil 130. In accordance with the present description, this earth's magnetic field generated current may be used to both signal a security event and to provide the power to erase data in a memory such as the nonvolatile memory subarray 60. Sensitive data may be erased in its entirety by a security circuit bit erasure logic 140, or selected bits may be erased to change the information partially. In this embodiment, the coil 130 functions as a motion detector to detect unauthorized motion of the device 100 as a security event. It is appreciated that other types of motion detectors may be utilized, depending upon the particular application. For example, gyro sensors may be utilized as motion detectors.The amount of current generated by the coil 130 is a function of the size of the coil, the number of turns of the coil and the change in the earth's magnetic field passing through the coil 130 as a result of motion of the device 100. In one example, for a credit card size form factor of the device 100, the coil 130 may be formed of a wire having a thickness of approximately 1 mm, for example, and may have, in this example, approximately three turns. The current generated by such a coil 130 in the device 100 may be calculated to be approximately 1 mA in one full turn of the coil 130 as the device 100 is moved by a person carrying the device 100.In accordance with the present description, such a quantity of current generated using the earth's magnetic field is sufficient not only to provide a signal indicating movement of the device 100, but also to erase some or all of the bits of sensitive data. In this example, the current generated by motion of the coil 130 through the earth's magnetic field is enough to erase on average 10-20 bits every 10 ns as the motion of the device continues. It is appreciated that the amount of current generated, and the number of bits which may be erased utilizing that generated current, will vary, depending upon the particular application. In another aspect of the present description, the device 100 has an input 150 by which the user may selectively place the device 100 in the park mode in which the output of the coil 130 is coupled by a switch 154 to the security circuit bit erasure logic 140. 
The device may detect whether it is in a security mode such as the park mode by the state of the switch 154. Thus, in the park mode, current generated by the coil 130 in response to motion of the device 100, is directed by the switch 154 to the security circuit bit erasure logic 140 to signal the unauthorized motion of the device 100 in the park mode and to provide the power to erase bits of the array 80. The input 150 may be any suitable input device such as a touch sensitive area of the device 100, for example.The input 150 may also be used to selectively disable the park mode or otherwise release the device 100 from the park mode. When in the second "nonpark" security mode, the coil 130 is disabled by the switch 154 and removed from the security circuit 58. As a result, the security circuit bit erasure logic 140 is disabled and the device 100 may be freely moved without initiating the erasure of data.Security codes or patterns known to the authorized user may be programmed into the device 100 to ensure that the device 100 is not inadvertently switched to the park mode by the authorized user and is not released from the park mode by anunauthorized user.In one embodiment in which the sensitive data is stored in a subarray of the memory, the portion of bits which are erased to destroy or at least obfuscate sensitive information may be randomly distributed over the subarray. Such a random distribution of erased bits of sensitive data is believed to enhance prevention of unauthorized recovery of the sensitive data. It is recognized that random distribution of erased bits of sensitive data may be achieved in a variety of techniques, depending upon the particular application.For example, it is recognized that physical characteristics of individual bitcells of an array of bitcells in a memory may vary from bitcell to bitcell as a result of variations encountered in typical fabrication processes. One such physical characteristic which may randomly vary from bitcell to bitcell is the level of write current at which a particular bitcell may be changed from one state to another. Thus, a percentage of the bitcells of a subarray may be changed with a relatively weak write current. Such bitcells referred to herein as "weak bitcells" may also be changed relatively quickly as compared to other bitcells of the array. As a consequence, "weak bit" bitcells which may be changed relatively quickly with a relatively weak write current may be randomly distributed over a subarray. By applying the relatively weak write current to the subarray over a relatively short period of time, the weak bit bitcells may be changed. Conversely, those "strong bit" bitcells which may be changed upon application of a relatively strong write current over a relatively long period of time may remain unchanged in the presence of the weak write current.However, the changing of the randomly distributed weak bit bitcells may be sufficient to render unauthorized recovery of the sensitive data of the subarray as a whole sufficiently impractical notwithstanding that the bits of the strong bitcells may remain unchanged. 
In this manner, write current and write time for sensitive data erasure may be correspondingly reduced to a level lower than that utilized to ensure erasure of all bitcells including strong bit bitcells.In another aspect of the present description, random distribution of erased bits to protect against unauthorized recovery of sensitive data may be achieved by an onboard randomization circuit of the security circuit bit erasure logic 140. In response to detection of a security event such unauthorized motion of the device 100 in the park mode, the randomization circuit may randomly select bits of the sensitive data to be erased. It is appreciated that in some embodiments, erasure of bits of sensitive data may occur automatically in response to detection of a security related event. In other embodiments, sensitive data erasure may be triggered manually by the authorized user.It is further appreciated that a device such as the device 100 may contain different tiers of sensitive data such that sensitive data stored in the subarrays 80, 160, 162, and 164, for example, may have varying degrees of sensitivity. Thus, the sensitive data stored in the subarray 80 may be most sensitive, the sensitive data stored in the subarray 164 may be the least sensitive, and the sensitive data stored in the subarrays 160 and 162 may be more sensitive than the sensitive data of the subarray 164 but less sensitive than the sensitive data of the subarray 80.In yet another aspect of the present description, upon detection of a security event such as unauthorized motion of the device 100 while placed in the park mode, the security circuit bit erasure logic 140 may initiate erasure of bits of the most sensitive data such as that stored in in the subarray 80 first. Upon completion of erasure of a sufficient number of bits of the subarray 80, the security circuit bit erasure logic 140 may initiate erasure of bits of the next most sensitive data of the different tiers of sensitive data such as that stored in in the subarray 160, for example. Upon completion of erasure of a sufficient number of bits of the subarrays 80, 160, 162, the security circuit bit erasure logic 140 may initiate erasure of bits of the least sensitive data of the subarray 164, for example.FIG. 4 shows one example of operations of a device such as a microprocessor controlled device 10 of FIG. 1 in which the device is placed (block 410) in a security mode such as a park security mode, for example. In this security mode, a security related event is detected (block 420). As previously mentioned, one example of such a security related event may be unauthorized motion of the device when placed in a park mode. The coil 130 is an example of a motion detector utilizing the earth's magnetic field.Upon detection of a security related event, at least a portion of the bits representing sensitive data stored in a subarray may be erased (block 430). As previously mentioned, the coil 130 is an example of a power source utilizing the earth's magnetic field to generate current to erase bits of sensitive data as the device is moved. 
Upon erasure of some or all of the sensitive information stored in the subarray, it is believed that unauthorized recovery of the sensitive information is prevented or rendered more difficult as to be impractical in many applications.ExamplesThe following examples pertain to further embodiments.Example 1 is an apparatus, comprising:a memory configured to store sensitive information in at least a portion of the memory;a detector configured to detect a security event;a selector input configured to input a security mode selection; anda controller coupled to the detector, memory and selector input, said controller configured to receive a security mode selection, and to protect sensitive information stored as data in the at least a portion of the memory, including said controller configured to: place the apparatus carrying the memory in a security mode in response to a received security mode selection; andin response to said detector detecting a first security event while the controller is in the security mode, change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information by reading said portion of said memory.In Example 2, the subject matter of Examples 1-7 (excluding the present Example) can optionally include that said memory is a nonvolatile memory and said detector is a motion detector configured to detect motion of the apparatus wherein said detecting a first security event includes detecting motion of the apparatus carrying said nonvolatile memory.In Example 3, the subject matter of Examples 1-7 (excluding the present Example) can optionally include that the motion detector includes a coil configured to detect motion by generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field wherein said detecting a first security event includes generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field.In Example 4, the subject matter of Examples 1-7 (excluding the present Example) can optionally include that said controller includes a switch configured to direct said generated current to said controller, and wherein said controller is configured to use said generated current to change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information.In Example 5, the subject matter of Examples 1-7 (excluding the present Example) can optionally include that said first security mode is a park security mode wherein said controller is configured to:place the apparatus carrying the memory in the park security mode in response to a received park security mode selection; andin response to said motion detector detecting motion of the apparatus carrying said nonvolatile memory while the controller is in the park security mode, change bits of said data of said sensitive information when said apparatus is detected to be in motion while in said park security mode.In Example 6, the subject matter of Examples 1-7 (excluding the present Example) can optionally include that said controller is configured to enable said switch when said apparatus is placed in the park security mode, so that said generated current is directed to said controller so that so that bits of said data of said sensitive information are changed by said generated current when said apparatus is in motion while in said park mode.In Example 7, the subject matter of Examples 1-7 (excluding the 
presentExample) can optionally include that the selector input is configured to input a second mode selection other than said park mode, wherein said controller is configured to disable said switch when said apparatus is placed in the second mode which disables said directing said generated current to said controller so that any current generated by motion of the coil through the earth's magnetic field when the apparatus is in the second mode is disabled from changing bits of said data of said sensitive information when said apparatus is in motion while in said second mode.Example 8 is a computing system for use with a display, comprising:a memory configured to store sensitive information in at least a portion of the memory;a processor configured to write data in and read data from the memory;a video controller configured to display information represented by data in the memory;a detector configured to detect a security event;a selector input configured to input a security mode selection; anda controller coupled to the detector, memory and selector input, said controller configured to receive a security mode selection, and to protect sensitive information stored as data in the at least a portion of the memory, including said controller configured to:place the apparatus carrying the memory in a security mode in response to a received security mode selection; andin response to said detector detecting a first security event while the controller is in the security mode, change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information by reading said portion of said memory.In Example 9, the subject matter of Examples 8-14 (excluding the present Example) can optionally include that said memory is a nonvolatile memory and said detector is a motion detector configured to detect motion of the apparatus wherein said detecting a first security event includes detecting motion of the apparatus carrying said nonvolatile memory.In Example 10, the subject matter of Examples 8-14 (excluding the present Example) can optionally include that the motion detector includes a coil configured to detect motion by generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field wherein said detecting a first security event includes generating a current in the coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field.In Example 11, the subject matter of Examples 8-14 (excluding the present Example) can optionally include that said controller includes a switch configured to direct said generated current to said controller, and wherein said controller is configured to use said generated current to change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information.In Example 12, the subject matter of Examples 8-14 (excluding the present Example) can optionally include that said first security mode is a park security mode wherein said controller is configured to:place the apparatus carrying the memory in the park security mode in response to a received park security mode selection; andin response to said motion detector detecting motion of the apparatus carrying said nonvolatile memory while the controller is in the park security mode, change bits of said data of said sensitive information when said apparatus is detected to be in motion while in said park security mode.In Example 13, the subject matter 
of Examples 8-14 (excluding the present Example) can optionally include that said controller is configured to enable said switch when said apparatus is placed in the park security mode, so that said generated current is directed to said controller so that so that bits of said data of said sensitive information are changed by said generated current when said apparatus is in motion while in said park mode.In Example 14, the subject matter of Examples 8-14 (excluding the present Example) can optionally include that selector input is configured to input a second mode selection other than said park mode, wherein said controller is configured to disable said switch when said apparatus is placed in the second mode which disables said directing said generated current to said controller so that any current generated by motion of the coil through the earth's magnetic field when the apparatus is in the second mode is disabled from changing bits of said data of said sensitive information when said apparatus is in motion while in said second mode.Example 15 is a method, comprising:protecting sensitive information stored as data in at least a portion of a memory, said protecting including:selectively placing an apparatus carrying the memory in a security mode; detecting a first event while in the security mode; andin response to said first event detecting, changing bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information by reading said portion of said memory.In Example 16, the subject matter of Examples 15-21 (excluding the present Example) can optionally include that said memory is a nonvolatile memory and wherein said detecting a first event includes detecting motion of the apparatus carrying said nonvolatile memory.In Example 17, the subject matter of Examples 15-21 (excluding the present Example) can optionally include that the motion detecting includes generating a current in a coil by electromagnetic induction caused by motion of the coil through the earth's magnetic field.In Example 18, the subject matter of Examples 15-21 (excluding the presentExample) can optionally include that said changing bits of said data including directing said generated current to a controller, said controller using said generated current to change bits of said data of said sensitive information to prevent recovery of at least a portion of said sensitive information.In Example 19, the subject matter of Examples 15-21 (excluding the presentExample) can optionally include that the placing an apparatus carrying the memory in a security mode includes selectively placing the apparatus in a park security mode, wherein said detecting the first event includes detecting whether the apparatus is in the park security mode, and detecting motion of the apparatus carrying said nonvolatile memory when the apparatus is in the park security mode so that bits of said data of said sensitive information are changed when said apparatus is detected to be in motion while in said park mode. 
In Example 20, the subject matter of Examples 15-21 (excluding the present Example) can optionally include selectively placing the apparatus in a park security mode which enables said directing said generated current to said controller so that motion of the coil through the earth's magnetic field when the apparatus is in the park mode, generates current which is directed to said controller so that bits of said data of said sensitive information are changed by said controller using current generated when said apparatus is in motion while in said park mode.In Example 21, the subject matter of Examples 15-21 (excluding the present Example) can optionally include selectively placing the apparatus in a second mode other than said park mode, which disables said directing said generated current to said controller so that any current generated by motion of the coil through the earth's magnetic field when the apparatus is in the second mode is disabled from changing bits of said data of said sensitive information when said apparatus is in motion while in said second mode.Example 22 is directed to an apparatus comprising means to perform a method as described in any preceding Example.The described operations may be implemented as a method, apparatus or computer program product using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as computer program code maintained in a "computer readable storage medium", where a processor may read and execute the code from the computer storage readable medium. The computer readable storage medium includes at least one of electronic circuitry, storage materials, inorganic materials, organic materials, biological materials, a casing, a housing, a coating, and hardware. A computer readable storage medium may comprise, but is not limited to, a magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and nonvolatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), Solid State Devices (SSD), etc. The code implementing the described operations may further be implemented in hardware logic implemented in a hardware device (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.). Still further, the code implementing the described operations may be implemented in "transmission signals", where transmission signals may propagate through space or through a transmission media, such as an optical fiber, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The program code embedded on a computer readable storage medium may be transmitted astransmission signals from a transmitting station or computer to a receiving station or computer. A computer readable storage medium is not comprised solely of transmissions signals. Those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present description, and that the article of manufacture may comprise suitable information bearing medium known in the art. 
Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present description, and that the article of manufacture may comprise any tangible information bearing medium known in the art.In certain applications, a device in accordance with the present description, may be embodied in a computer system including a video controller to render information to display on a monitor or other display coupled to the computer system, a device driver and a network controller, such as a computer system comprising a desktop, workstation, server, mainframe, laptop, handheld computer, etc.Alternatively, the device embodiments may be embodied in a computing device that does not include, for example, a video controller, such as a switch, router, etc., or does not include a network controller, for example.The illustrated logic of figures may show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified or removed. Moreover, operations may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.The foregoing description of various embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. |
Technologies for accessing memory devices of a memory module device includes receiving a memory read request form a host and reading, in response to the memory read request, a rank of active non-volatile memory devices of the memory module device while contemporaneously accessing a volatile memory device of the memory module device. The volatile memory device shares data lines of a data bus of the memory module device with a spare non-volatile memory device associated with the rank of active non-volatile memory devices. During write operations, each of the rank of active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices are written to facilitate proper wear leveling of the non-volatile memory devices. The spare non-volatile memory device may replace a failed non-volatile memory devices of the rank of active non-volatile memory devices. In such an event, the volatile memory device is no longer contemporaneously accessed during read operations of the rank of active non-volatile memory devices. |
WHAT IS CLAIMED IS:1. A memory module device for accessing memory, the memory module device comprising:a rank of active non- volatile memory devices;a spare non-volatile memory device associated with the rank of active nonvolatile memory devices;at least one volatile memory device; anda memory controller communicatively coupled to (i) the rank of active nonvolatile memory devices via corresponding data bus lines of a data bus and (ii) to the spare nonvolatile memory device and the volatile memory device by the same set of data bus lines of the data bus, and wherein the memory controller is to:receive a memory read request from a host;read, via the data bus, the rank of active non-volatile memory devices in response to the memory read request; andaccess, via the set of data bus lines, the volatile memory device contemporaneously with the read of the rank of active non-volatile memory devices.2. The memory module device of claim 1, wherein to read the rank of active non-volatile memory devices comprises to read the rank of active non- volatile memory devices while the spare non-volatile memory device associated with the rank of active non- volatile memory devices is not read.3. The memory module device of claim 1, wherein to access the volatile memory device comprises to read or write to the volatile memory device of the memory module device.4. The memory module device of claim 1, wherein the memory controller is further to:set an assignable identification value of each of the active non- volatile memory devices of the rank of active non-volatile memory devices to a common value; andset an assignable identification value of the volatile memory device to a unique value.5. The memory module device of claim 4, wherein to read the rank of active non-volatile memory devices comprises to set a selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non- volatile memory devices to the common value.6. The memory module device of claim 5, wherein to set the selection identification value of each of the active non-volatile memory devices and the spare nonvolatile memory device associated with the rank of active non-volatile memory devices to the common value causes each of the active non-volatile memory devices to respond to a read command and the spare non-volatile memory device to ignore the read command.7. The memory module device of claim 5, wherein to set the selection identification value comprises to set the selection identification value of each of the active nonvolatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value if the last memory access command issued to the rank of active non-volatile memory devices was a write command.8. The memory module device of claim 4, wherein to:set the assignable identification value of each of the active non-volatile memory devices comprises to write the common value to a mode register of each of the active nonvolatile memory devices, andset the assignable identification value of the volatile memory device comprises to write the unique value to a mode register of the volatile memory device.9. 
The memory module device of claim 1, wherein the memory controller is further to:receive a memory write request from a host; andwrite to the rank of active non-volatile memory devices and the spare nonvolatile memory device associated with the rank of active non-volatile memory devices in response to the memory write request.10. The memory module device of claim 9, wherein each of the active nonvolatile memory devices and the spare non-volatile memory device has a master identification value that is the same value, andwherein to write to the rank of active non-volatile memory devices and the spare non-volatile memory device comprises to set a selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non- volatile memory devices to the master identification value.11. The memory module device of claim 10, wherein to set the selection identification value of each of the active non-volatile memory devices and the spare nonvolatile memory device to the master identification value causes each of the active non-volatile memory devices and the spare non-volatile memory device to respond to the write command.12. The memory module device of claim 1, wherein the memory controller is further to:detect a failed non-volatile memory device of the rank of active non-volatile memory devices;migrate data from the failed non-volatile memory device to the spare nonvolatile memory device associated with the rank of active non-volatile memory devices; and respond to future memory read requests to read the spare non-volatile memory device and each of the active non-volatile memory devices of the rank of active non-volatile memory devices except for the failed non- volatile memory device,wherein to read the spare non-volatile memory device comprises to not access the volatile memory device contemporaneously with the reading of the spare non-volatile memory device.13. A method for accessing memory devices of a memory module device, the method comprising:receiving, by a memory controller of the memory module device, a memory read request from a host;reading, by the memory controller and via a data bus, a rank of active nonvolatile memory devices of the memory module device in response to the memory read request; and accessing, by the memory controller, a volatile memory device of the memory module device contemporaneously with the reading of the rank of active non-volatile memory devices using a set of data bus lines of the data bus communicatively coupled to both the volatile memory device and a spare non- volatile memory device associated with the rank of active non- volatile memory devices.14. The method of claim 13, wherein reading the rank of active non-volatile memory devices comprises reading the rank of active non-volatile memory devices while not reading the spare non-volatile memory device associated with the rank of active non- volatile memory devices.15. The method of claim 13, wherein accessing the volatile memory device comprises reading from or writing to the volatile memory device of the memory module device.16. The method of claim 13, further comprising:setting an assignable identification value of each of the active non-volatile memory devices of the rank of active non-volatile memory devices to a common value; and setting an assignable identification value of the volatile memory device to a unique value.17. 
The method of claim 16, wherein reading the rank of active non-volatile memory devices comprises setting a selection identification value of each of the active nonvolatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value.18. The method of claim 17, wherein setting the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value causes each of the active non-volatile memory devices to respond to a read command and the spare non-volatile memory device to ignore the read command.19. The method of claim 17, wherein setting the selection identification value comprises setting the selection identification value of each of the active non-volatile memory devices and the spare non- volatile memory device associated with the rank of active non- volatile memory devices to the common value if the last memory access command issued to the rank of active non-volatile memory devices was a write command.20. The method of claim 16, wherein:setting the assignable identification value of each of the active non- volatile memory devices comprises writing the common value to a mode register of each of the active non-volatile memory devices, andsetting the assignable identification value of the volatile memory device comprises writing the unique value to a mode register of the volatile memory device.21. The method of claim 13, further comprising:receiving, by the memory controller, a memory write request from a host; and writing, by the memory controller, to the rank of active non-volatile memory devices and the spare non- volatile memory device associated with the rank of active nonvolatile memory devices in response to the memory write request.22. The method of claim 21, wherein each of the active non- volatile memory devices and the spare non-volatile memory device has a master identification value that is the same value, andwherein writing to the rank of active non- volatile memory devices and the spare non-volatile memory device comprises setting a selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non- volatile memory devices to the master identification value.23. The method of claim 13, further comprising:detecting, by the memory controller, a failed non-volatile memory device of the rank of active non-volatile memory devices;migrating, by the memory controller, data from the failed non-volatile memory device to the spare non-volatile memory device associated with the rank of active non-volatile memory devices; andresponding, by the memory controller, to future memory read requests by reading the spare non-volatile memory device and each of the active non-volatile memory devices of the rank of active non-volatile memory devices except for the failed non-volatile memory device, wherein reading the spare non-volatile memory device comprises not accessing the volatile memory device contemporaneously with the reading of the spare non-volatile memory device.24. One or more machine -readable storage media comprising a plurality of instructions stored thereon that, when executed, cause a memory controller of a memory module device to perform the method of any of claims 13-23.25. 
A memory module device for accessing memory, the memory module device comprising means for performing the method of any of claims 13-23. |
TECHNOLOGIES FOR CONTEMPORANEOUS ACCESS OF NON- VOLATILE AND VOLATILE MEMORY IN A MEMORY DEVICECROSS-REFERENCE TO APPLICATION[0001] The present application claims priority to U.S. Utility Patent Application SerialNo. 14/975,160, entitled "TECHNOLOGIES FOR CONTEMPORANEOUS ACCESS OF NON- VOLATILE AND VOLATILE MEMORY IN A MEMORY DEVICE," which was filed on December 18, 2015.BACKGROUND[0002] Memory devices, such as memory integrated circuits, are used to store data.Memory devices may be embodied as non-volatile memory in which the data is stored in a persistent manner or as volatile memory in which the data is stored until removal of power from the memory device. Oftentimes, memory devices form a sub-component of a larger computing system or electrical device. For example, memory devices may be incorporated in computers, solid state drives, portable memory systems, and/or the like.[0003] Memory module devices provide larger memory capacity by incorporating multiple memory devices into a single package, board, or component. Memory module devices may include non-volatile memory devices and/or volatile memory devices in a single module. The various memory devices of a memory module device may be arranged into multiple groups or ranks of memory devices to provide a larger address space and overall memory capacity for the memory module device.BRIEF DESCRIPTION OF THE DRAWINGS[0004] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.[0005] FIG. 1 is a simplified block diagram of a memory module device for contemporaneous access of non-volatile and volatile memory;[0006] FIG. 2 is a simplified block diagram of various interconnections of memory device components of the memory module device of FIG. 1;[0007] FIG. 3 is a simplified block diagram of an environment that may be established by the memory module device of FIG. 1; [0008] FIG. 4 is a simplified flow diagram of at least one embodiment of a method for initialization that may be executed by the memory module device of FIGS. 1-3;[0009] FIG. 5 is a simplified flow diagram of at least one embodiment of a method for contemporaneous access of non-volatile and volatile memory that may be executed by the memory module device of FIGS. 1-3;[0010] FIG. 6 is a simplified flow diagram of at least one embodiment of a method for handling a failure of a non-volatile memory device that may be executed by the memory module device of FIGS. 1-3; and[0011] FIG. 7 is a simplified block diagram of at least one embodiment of a computing device including the memory module device of FIGS. 1-3.DETAILED DESCRIPTION OF THE DRAWINGS[0012] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. 
It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.[0013] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).[0014] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine- readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).[0015] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.[0016] As shown in FIG. 1, an illustrative memory module device 100 includes a memory controller 102, a non-volatile memory 110, and a volatile memory 120. The nonvolatile memory 110 includes an active memory array 112 and spare memory 114. The active memory array 112 includes multiple memory devices that are "active," i.e., they are presently used by the memory controller 102 to actively store and retrieve data. Conversely, the spare memory 114 includes a number of memory devices that are "inactive," i.e., they are not presently used by the memory controller 102 to actively store and retrieve data (although some data may be written to the spare memory devices of the spare memory 114 for wear leveling purposes). 
As discussed in more detail below, should a memory device of the active memory array 112 fail, a corresponding memory device of the spare memory 114 may be used in its place.[0017] The volatile memory 120 is used by the memory controller 102 to store various data during operation of the memory module device 100, such as metadata associated with the non-volatile memory 110. For example, in the illustrative embodiment, the memory controller 102 temporarily stores and manages a logical-to-physical indirection table 122 in the volatile memory 120 during operation of the memory module device 100. Illustratively, the logical-to- physical indirection table 122 correlates logical addresses associated with the non-volatile memory 110 to the corresponding physical addresses of the non- volatile memory 110. Of course, the memory controller 102 may store and access additional data in the volatile memory 120 in other embodiments. As such, during operation of the memory module device 100, the memory controller 102 may periodically, continually, and/or responsively access the volatile memory 120. [0018] An illustrative embodiment of the active memory array 112, the spare memory114, and the volatile memory 120 of the memory module device 100 is shown in FIG. 2. The active memory array 112 includes multiple active non- volatile memory devices 202 arranged in individual columns or ranks. Each rank of active non-volatile memory devices 202 is communicatively coupled to the memory controller 102 via a data bus 210. That is, each active non-volatile memory device 202 of a particular rank is communicatively coupled to the memory controller 102 via a corresponding set of data bus links (e.g., the top tier non-volatile memory devices 202 is communicatively coupled to the memory controller 102 via data bus links DQ[7:0]). Of course, the memory module device 100 also includes additional command interconnections (e.g., column select lines) between the memory controller 102 and each memory device of the non- volatile memory 110 and volatile memory 120, only some of which are shown in FIG. 2 for clarity of that figure. Additionally, although the illustrative active memory array 112 of FIG. 2 includes four ranks of ten active non-volatile memory devices 202 each, the active memory array 112 may include additional or fewer ranks of greater or fewer active non-volatile memory devices 202 in other embodiments depending on, for example, the memory capacity of the memory module device 100.[0019] The illustrative spare memory 114 of the memory module device 100 includes a spare non- volatile memory device 204 associated with each rank of active non-volatile memory devices 202. For example, as shown in FIG. 2, the spare memory 114 includes four spare nonvolatile memory devices 204, one for each of the illustrative four ranks of active non-volatile memory devices 202. Similar to the active non-volatile memory device 202, the spare nonvolatile memory devices 204 of the spare memory 114 are communicatively coupled to the memory controller 102 via a set of data bus lines 212 of the data bus 210 (e.g., illustratively, data lines DQ[87:80].)[0020] The illustrative volatile memory 120 includes a number of volatile memory devices 206, which store various data during operation of the memory module device 100 as discussed above. For example, as shown in FIG. 2, the volatile memory 120 includes two volatile memory devices 206, but may include additional or fewer volatile memory devices 206 in other embodiments. 
Each of the volatile memory devices 206 is communicatively coupled to the memory controller 102 via the same set of data bus lines 212 as the spare non-volatile memory devices 204 of the spare memory 114. As such, during operation, the memory controller 102 can access either the volatile memory devices 206 or the spare non- volatile memory devices, but not both, at a particular point in time. [0021] As discussed above, in use, the memory controller 102 may periodically and/or repeatedly access the volatile memory 120. However, to reduce latency of memory accesses to the volatile memory 120 and lower overall power consumption of the memory module device 100, the memory controller 102 is configured to access the volatile memory 120 contemporaneously with reads from the active memory array 112 of the non-volatile memory 110. To do so, the memory controller 102 selects only the active non- volatile memory devices 202 of the addressed rank, and not the spare non- volatile memory device 204 associated with the addressed rank, for each read operation. As such, because the spare non-volatile memory devices 204 are not being accessed during the read operation, the data bus lines 212 coupled to both the spare non- volatile memory devices 204 of the spare memory 114 and the volatile memory devices 206 of the volatile memory 120 are used to access (e.g., read and/or write) the volatile memory devices 206 during the read operation of the active non- volatile memory devices 202. By accessing the volatile memory devices 206 during the read operation of the active non- volatile memory devices 202, the latency of volatile memory accesses may be reduced. Additionally, because the spare non-volatile memory devices 204 are not accessed during the read operations of the active non-volatile memory devices 202, the overall power consumption of the memory module device 100 may be reduced. To facilitate proper wear leveling across the various memory devices 202, 204, the memory controller 102 is configured to write to each of the active non- volatile memory devices 202 of a selected rank and the spare non- volatile memory device 204 associated with the selected rank during a write operation.[0022] Referring back to FIG. 1, the memory controller 102 of the memory module device 100 may be embodied as any type of control device, circuitry, or collection of hardware devices capable of reading, writing, and managing the non- volatile memory 110 and the volatile memory 120. In the illustrative embodiment, the memory controller 102 includes a processor 104, a local memory 106, and a host interface 108. Of course, the memory controller 102 may include additional devices, circuits, and/or components commonly found in a memory controller of a memory module device in other embodiments.[0023] The processor 104 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 104 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the local memory 106 may be embodied as any type of volatile and/or non-volatile memory or data storage capable of performing the functions described herein. In the illustrative embodiment, the local memory 106 stores firmware and/or other instructions executable by the processor 104 to perform the described functions of the memory controller 102. 
In some embodiments, the processor 104 and the local memory 106 may form a portion of a System-on-a-Chip (SoC) and be incorporated, along with other components of the memory controller 102, onto a single integrated circuit chip.[0024] The host interface 108 may also be embodied as any type of hardware processor, processing circuitry, input/output circuitry, and/or collection of components capable of facilitating communication of the memory module device 100 with a host device or service (e.g., a host application). That is, the host interface 108 embodies or establishes an interface for accessing data stored on the memory module device 100 (e.g., stored in the non-volatile memory 110 or the volatile memory 120). To do so, the host interface 108 may be configured to utilize any suitable communication protocol and/or technology to facilitate communications with the memory module device 100.[0025] In the illustrative embodiment, the memory module device 100 is embodied as a non-volatile dual in-line memory module (NVDIMM), but may be embodied as any other type of memory module capable of performing the functions described herein in other embodiments. Additionally, each of the active non-volatile memory devices 202 and the spare non-volatile memory devices 204 of the non-volatile memory 110 are illustratively embodied as bit- addressable, write-in -place non-volatile memory devices, such as three-dimensional (3D) crosspoint memory or other types of bit addressable, write-in-place non-volatile memory, such as ferroelectric random-access memory (FeTRAM), nanowire -based non-volatile memory, phase change memory (PCM), memory that incorporates memristor technology, Magnetoresistive random-access memory (MRAM) or Spin Transfer Torque (STT)-MRAM. Similarly, the volatile memory devices 206 of the volatile memory 120 are illustratively embodied as dynamic random-access memory (DRAM) devices, but may be embodied as other types of volatile memory devices and/or memory technologies capable of storing data while the memory module device 100 is operational such as DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (in development by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WI02 (Wide 170 2 (WideI02), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2), currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications. [0026] Referring now to FIG. 3, in use, the memory module device 100 may establish an environment 300. The illustrative environment 300 includes an initialization module 302, a non-volatile memory access module 304, a volatile memory access module 306, and a failure management module 308. Of course, the environment 300 may include additional or other modules in other embodiments. Each of the modules and other components of the environment 300 may be embodied as firmware, software, hardware, or a combination thereof. For example the various modules, logic, and other components of the environment 300 may form a portion of, or otherwise be established by, the memory controller 102 or other hardware components of the memory module device 100. 
As such, in some embodiments, any one or more of the modules of the environment 300 may be embodied as a circuit or collection of electrical devices (e.g., an initialization circuit 302, a non-volatile memory access circuit 304, a volatile memory access circuit 306, a failure management circuit 308, etc.).[0027] The initialization module 302 is configured to initialize the various non-volatile memory devices 202, 204 of the non- volatile memory 110 by setting an assignable identification value of each of the memory devices 202, 204. That is, each of the illustrative active non-volatile memory devices 202 and the spare non-volatile memory devices 204 includes an assignable identification value, a master identification value, and a selection identification value. The assignable identification value is assignable during operation of the memory module device 100 and typically resets upon each power cycle of the memory module device 100. The master identification value may be set to a default value by the manufacturer of the memory module device 100 and is persistent across power cycles. Additionally, the master identification value overrides the assignable identification value of the corresponding non-volatile memory device 202, 204. The selection identification value is also settable during operation of the memory module device 100 and determines which non- volatile memory device 202, 204 will respond to a memory access command (e.g., a read or write command). That is, each non- volatile memory device 202, 204 will respond to a memory access command from the memory controller 102 if its corresponding selection identification value matches its assignable identification value. However, if the selection identification value matches its master identification value, the corresponding non-volatile memory device 202, 204 will respond to a memory access command regardless of its present assignable identification value (i.e., the master identification value overrides the assignable identification value).[0028] As discussed above, during an initialization process of the memory module device 100 (see, e.g., method 400 of FIG. 4), the initialization module 302 sets the assignable identification value of the active non-volatile memory devices 202 and the spare non-volatile memory devices 204. In the illustrative embodiment, the initialization module 302 sets the assignable identification value of each of the active non-volatile memory devices 202 to a common value and sets the assignable identification value of each of the spare non- volatile memory devices 204 to a unique value (i.e., a value different from the common value of the active non- volatile memory devices 202). The master identification value of each of the active non-volatile memory devices 202 and the spare non- volatile memory devices 204 may be left at the default value or otherwise set to the same value (but different from the common value). In this way, each of the active non-volatile memory devices 202 of a particular rank may be accessed, without accessing the associated spare non-volatile memory device 204, by setting the selection identification value of each of the non-volatile memory devices 202, 204 to the common value. Because the assignable identification value of the associated spare non-volatile memory device 204 is not equal to the common value, the associated spare non- volatile memory device 204 will not respond to any memory access command under such conditions. 
Alternatively, each of the active non- volatile memory devices 202 of a particular rank and the associated spare non-volatile memory device 204 may be accessed by setting the selection identification value of the non-volatile memory devices 202, 204 to the value of the master identification value.[0029] The initialization module 302 may be configured to set the assignable identification values automatically or autonomously in some embodiments. Additionally or alternatively, in some embodiments, the initialization module 302 may provide a user interface to a host 350 (e.g., a host application or device) to facilitate user-customization of the assignable identification values of the non-volatile memory devices 202, 204.[0030] The non-volatile memory access module 304 is configured to access the nonvolatile memory 110 based on access requests received from the host 350. To do so, the illustrative non- volatile memory access module 304 includes a read access module 310 and a write access module 312. The read access module 310 is configured to respond to read requests from the host 350 by setting the selection identification value of each of the non-volatile memory devices 202, 204 to the value of the assignable identification value of the active nonvolatile memory devices 202 (i.e., the common value). The read access module 310 may subsequently issue a read command to the non- volatile memory 110 to read the data contents from the addressed rank of active non-volatile memory devices 202. However, because the assignable identification value of the associated spare non-volatile memory device 204 is different, the associated spare non-volatile memory device 204 will not respond to the read command. [0031] Similar to the read access module 310, the write access module 312 is configured to respond to write requests from the host 350. To do so, the write access module 312 sets the selection identification value of each of the non- volatile memory devices 202, 204 to the value of the master identification value of the non-volatile memory devices 202, 204. The write access module 312 may subsequently issue a write command to write data to each of the addressed rank of active non- volatile memory device 202, as well as the associated spare non- volatile memory device 204. In this way, a common wear leveling of the non-volatile memory devices 202, 204 is maintained.[0032] The volatile memory access module 306 is configured to access the volatile memory 120 during read accesses to the active memory array 112. That is, the volatile memory access module 306 is configured to access the volatile memory devices 206 when the read access module 310 accesses the addressed rank of active non- volatile memory devices 202. Because the associated spare non-volatile memory device 204 does not respond to the read commands as discussed above, the data bus lines 212, which are communicatively coupled to each of the spare non-volatile memory devices 204 and the volatile memory devices 206, are free to be used to access the volatile memory devices 206. Of course, the volatile memory access module 306 may also access the volatile memory 120 at other times during which the non-volatile memory is not being accessed.[0033] The failure management module 308 is configured to detect and respond to a failure of one or more of the active non-volatile memory devices 202. 
If a failed non-volatile memory device 202 of a particular rank is detected, the failure management module 308 is configured to migrate the data stored on the failed non-volatile memory device 202 to the associated spare non-volatile memory device 204. Of course, after a spare non-volatile memory device 204 is configured to replace a failed non-volatile memory device 202, the data bus lines 212 may not be used for the contemporaneous access of the volatile memory 120. As such, the failure management module 308 may subsequently disable the contemporaneous volatile memory access feature of the memory module device 100. For example, the failure management module 308 may instruct the non-volatile memory access module to use the master identification value for all future read and write accesses to the non-volatile memory 110.[0034] Referring now to FIG. 4, in use, the memory controller 102 of the memory module device 100 may execute a method 400 for initializing the non- volatile memory 110. The method 400 begins with block 402 in which the memory controller 102 determines whether to initialize the non-volatile memory 110. In some embodiments, the memory controller 102 may be configured to initialize the non-volatile memory 110 upon each power-up cycle. Additionally or alternatively, as discussed above, the memory controller 102 may provide a user interface to the host 350 to facilitate user customization of the initialization of the non-volatile memory 110.[0035] If the memory controller 102 determines to initialize the non-volatile memory110, the method 400 advances to block 404 in which the memory controller 102 sets the assignable identification values of the non-volatile memory devices 202, 204. For example, in block 406, the memory controller 102 sets the assignable identification value of each active non-volatile memory device 202 to a common value. Additionally, in block 408, the memory controller 102 sets the assignable identification value of each spare non-volatile memory device 204 to a unique value (i.e., a value different form the common value).[0036] Subsequently, in block 410, the memory controller 102 enables contemporaneous access of the volatile memory 120 during read operations of the volatile memory 120. For example, in some embodiments, the memory controller 102 may set a flag or bit of an associated register to indicate that contemporaneous access is enabled. In some embodiments, the flag or bit may be embodied as an unused or "spare" bit of an internal register of the memory controller 102. After the non- volatile memory 110 has been initialized in block 404 and 410, the memory controller 102 notifies the host that the initializations has been completed and the memory module device 100 is ready to receive memory access requests in block 412.[0037] Referring now to FIG. 5, in use, the memory controller 102 of the memory module device 100 may execute a method 500 for contemporaneously accessing the volatile memory 120 during read operations of the non- volatile memory 110. The method 500 begins with block 502 in which the memory controller 102 determines whether a memory access request has been received from the host 350. If so, the method 500 advances to block 504 in which the memory controller 102 determines whether contemporaneous access of the volatile memory 120 is enabled. 
As discussed above, in some embodiments, a bit or flag may be set to provide an indication that contemporaneous access of the volatile memory 120 during read operations of the non-volatile memory 110 is enabled. If the contemporaneous access of the volatile memory 120 is not enabled, the method 500 advances to block 506 in which the memory controller 102 performs a standard memory access based on the memory access request, and the method 500 subsequently loops back to block 502 in which the memory controller 102 continues to monitor for memory access requests from the host 350.[0038] If, however, contemporaneous access of the volatile memory 120 is enabled, the method 500 advances to block 508. In block 508, the memory controller 102 determines whether the received memory access request is a read request. If so, the method 500 advances to block 510 in which the memory controller 102 determines whether the last request was also read request (i.e., whether the last access command issued by the memory controller 102 to the non-volatile memory 110 was a read command). If not, the method 500 advances to block 512 in which the memory controller 102 sets the selection identification value of each of the nonvolatile memory devices 202, 204 to the common value to which each of the assignable identification values of the active non-volatile memory devices 202 were previously set. In this way, the memory controller 102 sets the selection identification value of the non-volatile memory devices 202, 204 only if the most previous memory access was a write access.[0039] The method 500 subsequently advances to blocks 514 and 516. In block 514, the memory controller 102 performs a read operation on the non-volatile memory 110 by issuing a read command to the non-volatile memory devices 202, 204. Because the selection identification value matches the assignable identification value of the active non- volatile memory devices 202 of the addressed rank, each of the active non-volatile memory devices 202 respond to the issued read command. Conversely, because the selection identification value does not match the assignable identification value of spare non- volatile memory device 204 associated with the addressed rank, the spare non-volatile memory device 204 does not respond to the read command. As such, the data bus lines 212 are available to perform access operations with the volatile memory 120. Accordingly, in block 516 and contemporaneously with the read operation performed in block 514, the memory controller 102 performs any pending access requests to the volatile memory 120. For example, the memory controller 102 may read from and/or write to the volatile memory devices 206 in block 516 while reading from the active non-volatile memory devices 202 in block 514. The method 500 subsequently loops back to block 502 in which the memory controller 102 continues to monitor for memory access requests from the host 350.[0040] Referring back to block 508, if the received memory access request is not a read request, the method 500 advances to block 518. In 518, the memory controller 102 determines whether the memory access request is a write request. If so, the method 500 advances to block 520 in which in which the memory controller 102 determines whether the last request was write request (i.e., whether the last access command issued by the memory controller 102 to the nonvolatile memory 110 was a write command). 
If not, the method 500 advances to block 522 in which the memory controller 102 sets the selection identification value of each of the nonvolatile memory devices 202, 204 to the value of the master identification value of each of the non-volatile memory devices 202, 204. In this way, the memory controller 102 sets the selection identification value of the non-volatile memory devices 202, 204 only if the most previous memory access was a read access.[0041] The method 500 subsequently advances to block 524 in which the memory controller 102 performs a write operation to each of the non-volatile memory devices 202, 204 by issuing a write command. That is, because the selection identification of each of the nonvolatile memory devices 202, 204 matches the master identification, each of the non-volatile memory devices 202, 204 responds to the write command regardless of their individual assignable identification value. The method 500 subsequently loops back to block 502 in which the memory controller 102 continues to monitor for memory access requests from the host 350.[0042] Referring now to FIG. 6, in use, the memory controller 102 may also execute a method 600 for handling a failure of an active non-volatile memory device 202. The method 600 begins with block 602 in which the memory controller 102 determines whether an active non-volatile memory device 202 has failed. To do so, the memory controller 102 may utilize any suitable method and/or mechanism to determine such a device failure. If the memory controller 102 determines that an active non-volatile memory device 202 has failed, the method 600 advances to block 604. In block 604, the memory controller 102 migrates data from the failed active non-volatile memory device 202 to the associated spare non- volatile memory device 204. In block 606, the memory controller 102 sets the master identification value of the failed non-volatile memory device 202 to a unique value such that the failed non-volatile memory device 202 will not respond to future read or write commands from the memory controller 102. Additionally, in block 608, the memory controller 102 performs further read and write operations using the master identification value of the non-volatile memory devices 202, 204. To do so, as shown in block 610, the memory controller 102 may set the selection identification of each non-volatile memory device 202, 204 to the value of the master identification value prior to issuing the corresponding read or write command. In this way, the memory controller 102 may disable the contemporaneous access of the volatile memory 120 during read operations of the non-volatile memory 110 in response to failure of an active nonvolatile memory device 202.[0043] Referring now to FIG. 7, in some embodiments, the memory module device 100 may be incorporated in, or form a portion of, a computing device 700. The computing device 700 may be embodied as any type of computing device in which the memory module device 100 may be used. For example, the computing device 700 may be embodied as a smart phone, a tablet computer, a notebook, a laptop computer, a netbook, an Ultrabook™, a wearable computing device, a pair of smart glasses, a head-mounted computing device, a cellular phone, a desktop computer, a smart device, a personal digital assistant, a mobile Internet device, a server, a data storage device, and/or any other computing/communication device. As shown in FIG. 
7, the illustrative computing device 700 includes a processor 710, an input/output ("I/O") subsystem 712, and a main memory 714. Of course, the computing device 700 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 714, or portions thereof, may be incorporated in the processor 710 in some embodiments.[0044] The processor 710 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 710 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 714 may be embodied as any type of volatile or non- volatile memory or data storage capable of performing the functions described herein. In operation, the memory 714 may store various data and software used during operation of the computing device 700 such as operating systems, applications, programs, libraries, and drivers. The memory 714 is communicatively coupled to the processor 710 via the I/O subsystem 712, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 710, the memory 714, and other components of the computing device 700. For example, the I/O subsystem 712 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations.[0045] In the illustrative embodiment, the main memory 714 includes the memory module device 100. However, in other embodiments, the memory module device 100 may form a portion of another device of the computing device 700. For example, in some embodiments, the computing device 700 may include a solid state drive 720 and/or other peripheral devices 730. In such embodiments, the memory module device 100 may be included in, or otherwise form a portion of, the solid state drive 720. Of course, in other embodiments, the memory module device 100 may be included in or form a portion of other components of the computing device 700.[0046] Reference to memory devices can apply to different memory types, and in particular, any memory that has a bank group architecture. Memory devices generally refer to volatile memory technologies. Volatile memory is memory whose state (and therefore the data stored on it) is indeterminate if power is interrupted to the device. Nonvolatile memory refers to memory whose state is determinate even if power is interrupted to the device. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. 
One example of dynamic volatile memory includes DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM).EXAMPLES[0047] Illustrative examples of the technologies disclosed herein are provided below.An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0048] Example 1 includes a memory module device for accessing memory, the memory module device comprising a rank of active non-volatile memory devices; a spare nonvolatile memory device associated with the rank of active non- volatile memory devices; at least one volatile memory device; and a memory controller communicatively coupled to (i) the rank of active non-volatile memory devices via corresponding data bus lines of a data bus and (ii) to the spare non-volatile memory device and the volatile memory device by the same set of data bus lines of the data bus, and wherein the memory controller is to receive a memory read request from a host; read, via the data bus, the rank of active non-volatile memory devices in response to the memory read request; and access, via the set of data bus lines, the volatile memory device contemporaneously with the read of the rank of active non- volatile memory devices.[0049] Example 2 includes the subject matter of Example 1, and wherein to read the rank of active non-volatile memory devices comprises to read the rank of active non-volatile memory devices while the spare non- volatile memory device associated with the rank of active non-volatile memory devices is not read.[0050] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to access the volatile memory device comprises to read or write to the volatile memory device of the memory module device.[0051] Example 4 includes the subject matter of any of Examples 1-3, and wherein the memory controller is further to set an assignable identification value of each of the active nonvolatile memory devices of the rank of active non-volatile memory devices to a common value; and set an assignable identification value of the volatile memory device to a unique value. 
[0052] Example 5 includes the subject matter of any of Examples 1-4, and wherein to read the rank of active non-volatile memory devices comprises to set a selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value.[0053] Example 6 includes the subject matter of any of Examples 1-5, and wherein to set the selection identification value of each of the active non-volatile memory devices and the spare non- volatile memory device associated with the rank of active non-volatile memory devices to the common value causes each of the active non-volatile memory devices to respond to a read command and the spare non- volatile memory device to ignore the read command.[0054] Example 7 includes the subject matter of any of Examples 1-6, and wherein to set the selection identification value comprises to set the selection identification value of each of the active non- volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value if the last memory access command issued to the rank of active non-volatile memory devices was a write command.[0055] Example 8 includes the subject matter of any of Examples 1-7, and wherein to set the assignable identification value of each of the active non- volatile memory devices comprises to write the common value to a mode register of each of the active non-volatile memory devices, and set the assignable identification value of the volatile memory device comprises to write the unique value to a mode register of the volatile memory device.[0056] Example 9 includes the subject matter of any of Examples 1-8, and wherein the memory controller is further to receive a memory write request from a host; and write to the rank of active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices in response to the memory write request.[0057] Example 10 includes the subject matter of any of Examples 1-9, and wherein writing to the rank of active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices comprises to not access the volatile memory device contemporaneously with the writing to the rank of active nonvolatile memory devices.[0058] Example 11 includes the subject matter of any of Examples 1-10, and wherein each of the active non-volatile memory devices and the spare non- volatile memory device has a master identification value that is the same value, and wherein to write to the rank of active non-volatile memory devices and the spare non-volatile memory device comprises to set a selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the master identification value.[0059] Example 12 includes the subject matter of any of Examples 1-11, and wherein to set the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device to the master identification value causes each of the active non-volatile memory devices and the spare non-volatile memory device to respond to the write command.[0060] Example 13 includes the subject matter of any of Examples 1-12, and wherein to set the selection identification value comprises to 
set a selection identification value of each of the active non- volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the master identification value if the last memory access command issued to the rank of active non-volatile memory devices was a read command.[0061] Example 14 includes the subject matter of any of Examples 1-13, and wherein the memory controller is further to detect a failed non-volatile memory device of the rank of active non-volatile memory devices; migrate data from the failed non-volatile memory device to the spare non-volatile memory device associated with the rank of active non-volatile memory devices; and respond to future memory read requests to read the spare non-volatile memory device and each of the active non-volatile memory devices of the rank of active non-volatile memory devices except for the failed non-volatile memory device, wherein to read the spare non-volatile memory device comprises to not access the volatile memory device contemporaneously with the reading of the spare non-volatile memory device.[0062] Example 15 includes the subject matter of any of Examples 1-14, and wherein the memory controller is further to set a master identification value of the failed non-volatile memory device to a unique value to cause the failed non-volatile memory device to not respond to future read or write commands form the memory controller.[0063] Example 16 includes the subject matter of any of Examples 1-15, and further including a plurality of ranks of active non-volatile memory devices; and a plurality of spare non-volatile memory devices, wherein each spare non- volatile memory device is associated with a corresponding rank of active non- volatile memory devices.[0064] Example 17 includes a method for accessing memory devices of a memory module device, the method comprising receiving, by a memory controller of the memory module device, a memory read request from a host; reading, by the memory controller and via a data bus, a rank of active non-volatile memory devices of the memory module device in response to the memory read request; and accessing, by the memory controller, a volatile memory device of the memory module device contemporaneously with the reading of the rank of active non-volatile memory devices using a set of data bus lines of the data bus communicatively coupled to both the volatile memory device and a spare non-volatile memory device associated with the rank of active non-volatile memory devices.[0065] Example 18 includes the subject matter of Example 17, and wherein reading the rank of active non-volatile memory devices comprises reading the rank of active non-volatile memory devices while not reading the spare non-volatile memory device associated with the rank of active non-volatile memory devices.[0066] Example 19 includes the subject matter of any of Examples 17 and 18, and wherein accessing the volatile memory device comprises reading from or writing to the volatile memory device of the memory module device.[0067] Example 20 includes the subject matter of any of Examples 17-19, and further including setting an assignable identification value of each of the active non- volatile memory devices of the rank of active non-volatile memory devices to a common value; and setting an assignable identification value of the volatile memory device to a unique value.[0068] Example 21 includes the subject matter of any of Examples 17-20, and wherein reading the rank of active 
non-volatile memory devices comprises setting a selection identification value of each of the active non-volatile memory devices and the spare nonvolatile memory device associated with the rank of active non-volatile memory devices to the common value.[0069] Example 22 includes the subject matter of any of Examples 17-21, and wherein setting the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value causes each of the active non-volatile memory devices to respond to a read command and the spare non- volatile memory device to ignore the read command.[0070] Example 23 includes the subject matter of any of Examples 17-22, and wherein setting the selection identification value comprises setting the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value if the last memory access command issued to the rank of active non-volatile memory devices was a write command. [0071] Example 24 includes the subject matter of any of Examples 17-23, and wherein setting the assignable identification value of each of the active non- volatile memory devices comprises writing the common value to a mode register of each of the active non-volatile memory devices, and setting the assignable identification value of the volatile memory device comprises writing the unique value to a mode register of the volatile memory device.[0072] Example 25 includes the subject matter of any of Examples 17-24, and further including receiving, by the memory controller, a memory write request from a host; and writing, by the memory controller, to the rank of active non-volatile memory devices and the spare non- volatile memory device associated with the rank of active non-volatile memory devices in response to the memory write request.[0073] Example 26 includes the subject matter of any of Examples 17-25, and wherein writing to the rank of active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices comprises not accessing the volatile memory device contemporaneously with the writing to the rank of active nonvolatile memory devices.[0074] Example 27 includes the subject matter of any of Examples 17-26, and wherein each of the active non-volatile memory devices and the spare non- volatile memory device has a master identification value that is the same value, and wherein writing to the rank of active nonvolatile memory devices and the spare non-volatile memory device comprises setting a selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the master identification value.[0075] Example 28 includes the subject matter of any of Examples 17-27, and wherein setting the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device to the master identification value causes each of the active non- volatile memory devices and the spare non-volatile memory device to respond to the write command.[0076] Example 29 includes the subject matter of any of Examples 17-28, and wherein setting the selection identification value comprises setting a selection identification value of each of the 
active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the master identification value if the last memory access command issued to the rank of active non-volatile memory devices was a read command. [0077] Example 30 includes the subject matter of any of Examples 17-29, and further including detecting, by the memory controller, a failed non-volatile memory device of the rank of active non-volatile memory devices; migrating, by the memory controller, data from the failed non-volatile memory device to the spare non- volatile memory device associated with the rank of active non-volatile memory devices; and responding, by the memory controller, to future memory read requests by reading the spare non-volatile memory device and each of the active non-volatile memory devices of the rank of active non-volatile memory devices except for the failed non-volatile memory device, wherein reading the spare non-volatile memory device comprises not accessing the volatile memory device contemporaneously with the reading of the spare non-volatile memory device.[0078] Example 31 includes the subject matter of any of Examples 17-30, and further including setting, by the memory controller, a master identification value of the failed nonvolatile memory device to a unique value to cause the failed non-volatile memory device to not respond to future read or write commands form the memory controller.[0079] Example 32 includes one or more machine -readable storage media comprising a plurality of instructions stored thereon that, when executed, cause a memory controller of a memory module device to perform the method of any of Examples 17-31.[0080] Example 33 includes a memory module device for accessing memory, the memory module device comprising means for receiving a memory read request from a host; means for reading, via a data bus, a rank of active non-volatile memory devices of the memory module device in response to the memory read request; and means for accessing a volatile memory device of the memory module device contemporaneously with the reading of the rank of active non-volatile memory devices using a set of data bus lines of the data bus communicatively coupled to both the volatile memory device and a spare non-volatile memory device associated with the rank of active non-volatile memory devices.[0081] Example 34 includes the subject matter of Example 33, and wherein the means for reading the rank of active non-volatile memory devices comprises means for reading the rank of active non-volatile memory devices while not reading the spare non-volatile memory device associated with the rank of active non-volatile memory devices.[0082] Example 35 includes the subject matter of any of Examples 33 and 34, and wherein the means for accessing the volatile memory device comprises means for reading from or writing to the volatile memory device of the memory module device.[0083] Example 36 includes the subject matter of any of Examples 33-35, and further including means for setting an assignable identification value of each of the active non-volatile memory devices of the rank of active non-volatile memory devices to a common value; and means for setting an assignable identification value of the volatile memory device to a unique value.[0084] Example 37 includes the subject matter of any of Examples 33-36, and wherein the means for reading the rank of active non-volatile memory devices comprises means for setting a selection 
identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value.[0085] Example 38 includes the subject matter of any of Examples 33-37, and wherein the means for setting the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices to the common value causes each of the active non-volatile memory devices to respond to a read command and the spare non-volatile memory device to ignore the read command.[0086] Example 39 includes the subject matter of any of Examples 33-38, and wherein the means for setting the selection identification value comprises means for setting the selection identification value of each of the active non-volatile memory devices and the spare nonvolatile memory device associated with the rank of active non-volatile memory devices to the common value if the last memory access command issued to the rank of active non-volatile memory devices was a write command.[0087] Example 40 includes the subject matter of any of Examples 33-39, and wherein the means for setting the assignable identification value of each of the active non-volatile memory devices comprises means for writing the common value to a mode register of each of the active non-volatile memory devices, and the means for setting the assignable identification value of the volatile memory device comprises means for writing the unique value to a mode register of the volatile memory device.[0088] Example 41 includes the subject matter of any of Examples 33-40, and further including means for receiving a memory write request from a host; and means for writing to the rank of active non-volatile memory devices and the spare non-volatile memory device associated with the rank of active non-volatile memory devices in response to the memory write request.[0089] Example 42 includes the subject matter of any of Examples 33-41, and wherein the means for writing to the rank of active non-volatile memory devices and the spare nonvolatile memory device associated with the rank of active non-volatile memory devices comprises means for not accessing the volatile memory device contemporaneously with the writing to the rank of active non-volatile memory devices.[0090] Example 43 includes the subject matter of any of Examples 33-42, and wherein each of the active non-volatile memory devices and the spare non- volatile memory device has a master identification value that is the same value, and wherein the means for writing to the rank of active non-volatile memory devices and the spare non-volatile memory device comprises means for setting a selection identification value of each of the active non-volatile memory devices and the spare non- volatile memory device associated with the rank of active nonvolatile memory devices to the master identification value.[0091] Example 44 includes the subject matter of any of Examples 33-43, and wherein the means for setting the selection identification value of each of the active non-volatile memory devices and the spare non-volatile memory device to the master identification value causes each of the active non-volatile memory devices and the spare non-volatile memory device to respond to the write command.[0092] Example 45 includes the subject matter of any of Examples 33-44, and wherein the means for setting the selection identification value 
comprises means for setting a selection identification value of each of the active non-volatile memory devices and the spare nonvolatile memory device associated with the rank of active non-volatile memory devices to the master identification value if the last memory access command issued to the rank of active nonvolatile memory devices was a read command.[0093] Example 46 includes the subject matter of any of Examples 33-45, and further including means for detecting a failed non-volatile memory device of the rank of active nonvolatile memory devices; means for migrating data from the failed non-volatile memory device to the spare non-volatile memory device associated with the rank of active non-volatile memory devices; and means for responding to future memory read requests by reading the spare nonvolatile memory device and each of the active non-volatile memory devices of the rank of active non- volatile memory devices except for the failed non- volatile memory device, wherein the means for reading the spare non-volatile memory device comprises means for not accessing the volatile memory device contemporaneously with the reading of the spare non-volatile memory device.[0094] Example 47 includes the subject matter of any of Examples 33-46, and further including means for setting a master identification value of the failed non-volatile memory device to a unique value to cause the failed non-volatile memory device to not respond to future read or write commands form the memory controller. |
A semiconductor device comprises an array of magnetic cell structures each comprising a magnetic tunnel junction over an electrode on a substrate. Each of the magnetic tunnel junctions includes a magnetic material over the substrate, a first tunnel barrier material over the magnetic material, a second tunnel barrier material over the annealed first tunnel barrier material, and another magnetic material over the second tunnel barrier material. Each magnetic tunnel junction is configured to exhibit a tunnel magnetoresistance greater than or equal to about 180% at a resistance area product of less than about 8 ohm m2. The semiconductor device also includes another electrode over the another magnetic material. Semiconductor devices including the magnetic tunnel junctions, methods of forming the magnetic tunnel junctions, and methods of forming semiconductor devices including the magnetic tunnel junctions are disclosed. |
CLAIMS What is claimed is: 1. A method of forming a semiconductor device, the method comprising:forming a magnetic material over an electrode on a substrate;forming a first tunnel barrier material over the magnetic material;annealing the magnetic material and the first tunnel barrier material;forming a second tunnel barrier material over the annealed first tunnel barrier material;forming another magnetic material over the second tunnel barrier material; andforming another electrode over the another magnetic material. 2. The method of claim 1, further comprising forming an array of memory cells over the substrate, each memory cell comprising the magnetic material, the first tunnel barrier material, the second tunnel barrier material, and the another magnetic material. 3. The method of claim 1, wherein forming a first tunnel barrier material over the magnetic material comprises forming at least one of magnesium oxide, aluminum oxide, titanium dioxide, tantalum oxide, and ruthenium oxide over the magnetic material. 4. The method of claim 1, wherein forming a first tunnel barrier material comprises forming magnesium oxide by sputter deposition. 5. The method of claim 1, wherein annealing the magnetic material and the first tunnel barrier material comprises crystallizing the magnetic material and the first tunnel barrier material to exhibit the same crystal structure. 6. The method claim 1, wherein forming a second tunnel barrier material over the annealed first tunnel barrier material comprises forming the second tunnel barrier material at a higher temperature than the first tunnel barrier material.7. The method of claim 1, wherein forming a second tunnel barrier material over the annealed first tunnel barrier material comprises forming the second tunnel barrier material comprising the same material as the first tunnel barrier material. 8. The method of any one of claims 1 through 7, wherein forming a first tunnel barrier material comprises forming the first tunnel barrier material to a thickness between about 1.0 and about 1.5 times a thickness of the second tunnel barrier material. 9. The method of any one of claims 1 through 7, wherein forming a first tunnel barrier material over the magnetic material comprises forming the first tunnel barrier material at a temperature between about 20°C and about 25°C. 10. The method of any one of claims 1 through 7, wherein annealing the magnetic material and the first tunnel barrier material comprises exposing the magnetic material and the first tunnel barrier material to a temperature between about 300ºC and about 600°C. 11. The method of any one of claims 1 through 7, wherein annealing the magnetic material and the first tunnel barrier material comprises annealing the magnetic material and the first tunnel barrier material prior to forming the second tunnel barrier material over the first tunnel barrier material. 12. The method of any one of claims 1 through 7, wherein forming a second tunnel barrier material over the annealed first tunnel barrier material comprises forming the second tunnel barrier material at a temperature between about 300ºC and about 600°C. 13. The method of any one of claims 1 through 7, wherein:forming a first tunnel barrier material comprises forming the first tunnel barrier material at a temperature between about 0°C and about 25°C; andforming a second tunnel barrier material comprises forming the second tunnel barrier material at a temperature between about 300ºC and about 600°C.14. 
A semiconductor device, comprising:an array of magnetic cell structures, each magnetic cell structure comprising a magnetic tunnel junction over an electrode on a substrate, each magnetic tunnel junction comprising: a magnetic material over the substrate;a first tunnel barrier material over the magnetic material;a second tunnel barrier material over the first tunnel barrier material; andanother magnetic material over the second tunnel barrier material, each magnetic tunnel junction configured to exhibit a tunnel magnetoresistance of between about 180% and about 600% at a resistance area product of less than about 8 ohm μm2; andanother electrode over the another magnetic material. 15. The semiconductor device of claim 14, wherein the first tunnel barrier material over the magnetic material comprises magnesium oxide. 16. The semiconductor device of claim 14, wherein each of the first tunnel barrier material and the second tunnel barrier material comprises magnesium oxide. 17. The semiconductor device of any one of claims 14 through 16, wherein each magnetic tunnel junction is configured to exhibit a tunnel magnetoresistance of between about 180% and about 205% at a resistance area product of between about 4 ohm μm2and about 8 ohm μm2. 18. The semiconductor device of any one of claims 14 through 16, wherein each magnetic tunnel junction is configured to exhibit a tunnel magnetoresistance of between about 180% and about 300% at a resistance area product of between about 6 ohm μm2and about 7 ohm μm2. 19. The semiconductor device of any one of claims 14 through 16, wherein a ratio of a thickness of the first tunnel barrier material to a thickness of the second tunnel barrier material is between about 1.0 and about 1.5.20. The semiconductor device of any one of claims 14 through 16, wherein the first tunnel barrier material exhibits a higher density than the second tunnel barrier material. |
SEMICONDUCTOR DEVICES, MAGNETIC TUNNEL JUNCTIONS, AND METHODS OF FABRICATION THEREOF PRIORITY CLAIMThis application claims the benefit of the filing date of United States PatentApplication Serial No.14/597,903, filed January 15, 2015, for“Semiconductor Devices, Magnetic Tunnel Junctions, and Methods of Fabrication Thereof.” TECHNICAL FIELDEmbodiments disclosed herein relate to semiconductor devices including magnetic memory cells having a magnetic tunnel junction and methods of forming such devices and magnetic tunnel junctions. More specifically, embodiments disclosed herein relate to magnetic tunnel junctions exhibiting a low resistance area product at a high tunnel magnetoresistance, semiconductor devices including the magnetic tunnel junctions, and methods of forming the magnetic tunnel junctions and semiconductor devices. BACKGROUNDMagnetic Random Access Memory (MRAM) is a non-volatile memory technology based on magnetoresistance. One type of MRAM is spin torque transfer MRAM(STT-MRAM), in which a magnetic cell core includes a magnetic tunnel junction (“MTJ”) sub-structure with at least two magnetic regions, for example, a“fixed region” and a“free region,” with a non-magnetic region (e.g., a tunnel barrier material) between. The free region and the fixed region may exhibit magnetic orientations that are either horizontally oriented (“in-plane”) or perpendicularly oriented (“out-of-plane”) relative to the thickness of the regions. The fixed region includes a magnetic material that has a substantially fixed (e.g., a non-switchable) magnetic orientation. The free region, on the other hand, includes a magnetic material that has a magnetic orientation that may be switched, during operation of the cell, between a“parallel” configuration and an“anti-parallel” configuration. In the parallel configuration, the magnetic orientations of the fixed region and the free region are directed in the same direction (e.g., north and north, east and east, south and south, or west and west, respectively). In the“anti-parallel” configuration, the magnetic orientations of the fixed region and the free region are directed in opposite directions (e.g., north and south, east and west, south and north, or west and east, respectively). In the parallel configuration, the STT-MRAM cell exhibits a lower electrical resistance across the magnetoresistive elements (e.g., the fixed region and free region), defining a“0” logic state of the MRAM cell. In the anti-parallel configuration, the STT-MRAM cell exhibits a higher electrical resistance across the magnetoresistive elements, defining a“1” logic state of the STT-MRAM cell.Switching of the magnetic orientation of the free region may be accomplished by passing a programming current through the magnetic cell core, including the fixed and free regions. The fixed region polarizes the electron spin of the programming current, and torque is created as the spin-polarized current passes through the core. The spin-polarized electron current exerts the torque on the free region. When the torque of the spin-polarized electron current is greater than a critical switching current density (Jc) of the free region, the direction of the magnetic orientation of the free region is switched. Thus, the programming current can be used to alter the electrical resistance across the magnetic regions. The resulting high or low electrical resistance states across the magnetoresistive elements enable the write and read operations of the MRAM cell. 
After switching the magnetic orientation of the free region to achieve the one of the parallel configuration and the anti-parallel configuration associated with a desired logic state, the magnetic orientation of the free region is usually desired to be maintained, during a“storage” stage, until the MRAM cell is to be rewritten to a different configuration (i.e., to a different logic state).Switching of the magnetic orientation of the free region of a magnetic memory cell including a MTJ may be affected by the tunnel magnetoresistance (“TMR”) and the resistance area product (“RA”) of the cell. The TMR of a MTJ is a function of the resistance between a top electrode and a bottom electrode, between which the MTJ is disposed, in the high electrical resistance state and the low electrical resistance state. Specifically, the TMR measures the difference between a cell’s electrical resistance in the anti-parallel configuration (Rap) and its electrical resistance in the parallel configuration (Rp) to Rp(i.e., TMR = (Rap– Rp)/Rp). Thus, the TMR is equivalent to the change in resistance observed by changing the magnetic state of the free layer. Generally, a MTJ with a homogeneous crystal structure (e.g., a bcc (001) crystal structure), having few structural defects in the microstructure of its magnetic material, has a higher TMR than a MTJ with structural defects. A cell with high TMR may have a high read-out signal, which may speed the reading of the MRAM cell during operation. A higher TMR is preferred for reliable read operation as it will generate a larger signal difference between the on and off states of the cell. In other words, the higher the TMR, the more sensitive the device, and the easier to distinguish logic states of an associated memory cell.Another significant characteristic of a magnetic memory cell core includes the RA. The RA of a magnetic memory cell is an indication of the voltage used to switch the magnetic orientation of the free region during programming (e.g., the threshold switching voltage). An increase in the RA of a magnetic memory cell may degrade the performance of the cell by utilizing a higher threshold switching voltage, reducing the usable life of the cell. The RA may be decreased by decreasing a thickness of the tunnel barrier material. However, decreasing the thickness of the tunnel barrier material may also decrease the TMR. Thus, although a high TMR and a low RA are desired, in general, an increase in the TMR of a MTJ is obtained at the expense of a higher RA. A conventional MTJ exhibits a TMR of less than about 120% at an RA of greater than about 4 ohm μm2.Efforts to increase the TMR of a MTJ while maintaining a low RA include attempts to reduce structural defects in the crystal structure of the MTJ. For example, a magnesium oxide tunnel barrier material may be formed at elevated temperatures to produce the tunnel barrier material having stoichiometric proportions and minimal oxygen vacancies or interstitial oxygen. However, the elevated temperatures may undesirably cause an underlying magnetic material to crystalize in an undesired crystal orientation. A mismatch in crystal orientation of the magnetic material and the tunnel barrier material undesirably increases the RA and decreases the TMR of the MTJ. The increase in the RA increases the voltage required to switch the magnetic orientation of the free region during programming, increases the junction resistance, and increases the threshold switching voltage of the device. 
A decrease in the TMR reduces the effective spin-polarization of the electrons as they pass through the MTJ, reducing tunneling through the MTJ.Alternatively, the tunnel barrier material may be formed at lower temperatures.However, when the tunnel barrier material is formed at lower temperatures, defects, such as oxygen vacancies and interstitial oxygen atoms, within the tunnel barrier material increase. The atomic defects in the tunnel barrier material may degrade device performance by causing electrons to scatter as they travel through the MTJ and reducing the TMR of the MTJ. BRIEF DESCRIPTION OF THE DRAWINGS FIG.1 is a simplified cross-sectional view of a magnetic cell structure according to an embodiment of the disclosure; FIG.2 is a simplified cross-sectional view of a magnetic material including alternating portions of a magnetic material and a conductive material;FIG.3 is a simplified cross-sectional view of a magnetic cell structure according to another embodiment of the disclosure;FIG.4 is a simplified cross-sectional view of a magnetic cell structure according to an embodiment of the present disclosure, wherein the fixed region and the free region exhibit in-plane magnetic orientations;FIGS.5A through FIG.5C are simplified cross-sectional views illustrating different process stages for an embodiment of a method for forming the magnetic cell structure of FIG.1;FIG.6 is a schematic of an STT-MRAM system including a memory cell having a magnetic cell structure according to an embodiment of the disclosure;FIG.7 is a simplified block diagram of a semiconductor device including memory cells having a magnetic cell structure according to an embodiment of the present disclosure;FIG.8 is a simplified block diagram of a system implemented according to one or more embodiments of the present disclosure;FIG.9 is a graphical representation comparing the TMR vs. the RA of magnetic tunnel junctions formed according to embodiments of the present disclosure and magnetic tunnel junctions formed by conventional methods; andFIG.10 is a graphical representation of the TMR vs. the RA of magnetic tunnel junctions formed according to embodiments of the present disclosure. MODE(S) FOR CARRYING OUT THE INVENTION The illustrations included herewith are not meant to be actual views of any particular systems or semiconductor structures, but are merely idealized representations that are employed to describe embodiments described herein. Elements and features common between figures may retain the same numerical designation.The following description provides specific details, such as material types, material thicknesses, and processing conditions in order to provide a thorough description of embodiments described herein. However, a person of ordinary skill in the art will understand that the embodiments disclosed herein may be practiced without employing these specific details. Indeed, the embodiments may be practiced in conjunction with conventional fabrication techniques employed in the semiconductor industry. In addition, the description provided herein does not describe a complete process flow for manufacturing semiconductor devices, magnetic tunnel junctions, or magnetic memory cells, and the semiconductor devices, magnetic tunnel junctions, and magnetic memory cells described below do not form a complete semiconductor device, magnetic tunnel junction, or magnetic memory cell. Only those process acts and structures necessary to understand the embodiments described herein are described in detail below. 
Additional acts to form a complete semiconductor device and a magnetic memory cell including the semiconductor device may be performed by conventional techniques.According to some embodiments, a semiconductor device may include a magnetic cell structure comprising a MTJ. The MTJ may include a tunnel barrier material disposed between adjacent magnetic materials. A magnetic material may overlie a substrate and the tunnel barrier material may overlie the magnetic material. Another magnetic material may overlie the tunnel barrier material. The tunnel barrier material may exhibit the same crystal orientation as the adjacent magnetic materials. The semiconductor device including the MTJ may exhibit a high TMR, such as greater than about 180%, at a low RA, such as at less than about 8 ohm μm2.The tunnel barrier material according to embodiments of the disclosure may include at least two portions. A first portion of the tunnel barrier material may be formed over the magnetic material at a first temperature. The first portion of the tunnel barrier material and the magnetic material may be annealed to crystallize the magnetic material and orient the crystal structure of the magnetic material in alignment with the crystal structure of the first portion of the tunnel barrier material. The magnetic material and the first portion of the tunnel barrier material may be annealed at a temperature between about 300°C and about 600ºC for an amount of time sufficient to crystallize the magnetic material. After annealing, a second portion of the tunnel barrier material may be formed over the first portion at a second temperature, which is higher than the first temperature at which the first portion of the tunnel barrier material is formed. The tunnel barrier material including the first portion and the second portion may exhibit a higher TMR, such as greater than about 180%, at a low RA, such as at less than about 8 ohm μm2, than a conventional tunnel barrier material. The tunnel barrier material may also be thicker and exhibit the higher TMR than a conventional tunnel barrier material, while maintaining the low RA. In some embodiments, the RA of the tunnel barrier material is between about 4 ohm μm2and about 8 ohm μm2and the TMR of the tunnel barrier material is between about 180% and about 205%. Referring to FIG.1, a magnetic memory cell 100 including a magnetic cell core 101 according to some embodiments is illustrated. The magnetic cell core 101 may include a magnetic tunnel junction 150 and may be disposed between a lower electrode 104 and an upper electrode 126 over a substrate 102. The MTJ 150 may include a magnetic region and another magnetic region, for example, a“free region 110” and a“fixed region” 140, respectively. A tunnel barrier material 130 may be disposed between the free region 110 and the fixed region 140.The substrate 102 may include a base material or other construction upon which components, such as those within memory cells, are formed. The substrate 102 may be a semiconductor substrate, a base semiconductor material on a supporting substrate, a metal electrode, or a semiconductor substrate having one or more materials, structures, or regions formed thereon. The substrate 102 may be a conventional silicon substrate or other bulk substrate including semiconductor material. 
As used herein, the term“bulk substrate” means and includes not only silicon wafers, but also silicon-on-insulator (“SOI”) substrates, such as silicon-on-sapphire (“SOS”) substrates or silicon-on-glass (“SOG”) substrates, epitaxial layers of silicon on a base semiconductor foundation, or other semiconductor or optoelectronic materials, such as silicon-germanium (Si1-xGex, where x is, for example, a mole fraction between 0.2 and 0.8), germanium (Ge), gallium arsenide (GaAs), gallium nitride (GaN), or indium phosphide (InP), among others. Furthermore, when reference is made to a“substrate” in the following description, previous process stages may have been utilized to form materials, regions, or junctions in the base semiconductor structure or foundation.The lower electrode 104 may overlie the substrate 102. The lower electrode 104 may include a metal such as copper, tungsten, platinum, palladium, titanium, tantalum, nickel, titanium nitride (TiN), tantalum nitride (TaN), tungsten nitride (WN), polysilicon, a metal silicide, a metal alloy, or combinations thereof.One or more lower intermediary regions 106 may, optionally, be disposed under the magnetic regions (e.g., the free region 110 and the fixed region 140). The lower intermediary region 106, if included, may be configured to inhibit diffusion of species between the lower electrode 104 and materials overlying the lower electrode 104. The lower intermediary region 106 may include a conductive material such as one or more of copper, tantalum, titanium, tungsten, ruthenium, tantalum nitride, and titanium nitride.A seed material 108 may overlie the lower intermediary region 106, if present, or the lower electrode 104 if the lower intermediary region 106 is not present. The seed material 108 may include tantalum, platinum, ruthenium, iron, nickel, cobalt, chromium, titanium, zirconium, vanadium, copper, zinc, rhodium, silver, hafnium, tungsten, iridium, tantalum nitride, and combinations thereof. By way of non-limiting example, the seed material 108 may include tungsten and at least one of iron, cobalt, nickel, or another suitable material. In other embodiments, the seed material 108 may include iron and cobalt and may further include at least one transition element, such as tantalum, platinum, ruthenium, nickel, chromium, titanium, zirconium, vanadium, copper, zinc, rhodium, silver, hafnium, and tungsten. In yet other embodiments, the seed material 108 may include at least one of hafnium, zirconium, and tantalum and at least one of iron, cobalt, and nickel, such as FeHf. The seed material 108 may be a homogeneous composition of the seed material 108 or may include distinct portions of one or more of tantalum, platinum, ruthenium, iron, nickel, cobalt, chromium, titanium, zirconium, vanadium, copper, zinc, rhodium, silver, hafnium, tungsten, and iridium adjacent to a distinct portion of another of tantalum, platinum, ruthenium, iron, nickel, cobalt, chromium, titanium, zirconium, vanadium, copper, zinc, rhodium, silver, hafnium, tungsten, and iridium.The free region 110 may overlie the seed material 108. In some embodiments, the free region 110 directly overlies and contacts the seed material 108. The free region 110 may include a magnetic material exhibiting a switchable magnetic orientation, indicated by arrows 109, during use and operation of the magnetic memory cell 100. 
The switchable magnetic orientation may be switched between a parallel configuration and an anti-parallel configuration by the application of a current or applied field to the magnetic memory cell 100.In some embodiments, the free region 110 may be a conventional free region. In other embodiments, the free region 110 may include alternating portions of a magnetic material and a conductive material. However, the free region 110 is not so limited and may include other suitable magnetic materials that exhibit a switchable magnetic orientation.In some embodiments, the free region 110 may include a ferromagnetic material including at least one of cobalt (Co) and iron (Fe) (e.g., CoxFey, wherein x = 10 to 80 and y = 10 to 80) and, in some embodiments, also boron (B) (e.g., CoxFeyBz, wherein x = 10 to 80, y = 10 to 80, and z = 0 to 50). Thus, the free region 110 may include at least one of Co, Fe, and B (e.g., a CoFeB material, a CoFe material, a FeB material, a CoB material, etc.). As used herein, the term“CoFeB material” means and includes a material comprising cobalt, iron, and boron (e.g., CoxFeyBz, wherein x = 10 to 80, y = 10 to 80, and z = 0 to 50). A CoFeB material may or may not exhibit magnetism, depending on its configuration (e.g., its thickness). In other embodiments, the free region 110 may alternatively or additionally include nickel (Ni) (e.g., an NiB material). In some embodiments, the free region 110 may be substantially free of boron and may include, for example, CoFe. The CoFe may be formed as CoFeB and the boron may be diffused out of the free region 110 after formation thereof, or the CoFe may be formed (e.g., deposited) as CoFe, without any boron.The free region 110 may be homogeneous, or may include one or more sub-regions (e.g., a CoFeB material, with sub-regions having different relative atomic ratios of Co, Fe, and B).A tunnel barrier material 130 may overlie the free region 110. In some embodiments, the tunnel barrier material 130 directly overlies and contacts the free region 110. The tunnel barrier material 130 may include a nonmagnetic, crystalline material, such as magnesium oxide (MgO), aluminum oxide (Al2O3), titanium dioxide (TiO2), tantalum oxide (Ta2O5), ruthenium oxide (RuO2), boron oxide (B2O3), or combinations thereof. The tunnel barrier material 130 may be configured to induce interfacial magnetic anisotropy in the free region 110 and the fixed region 140 and may also be configured to function as a tunnel region of the MTJ 150 effected by interaction of the free region 110, the tunnel barrier material 130, and the fixed region 140.The tunnel barrier material 130 may include a first portion 112 and a second portion 114. The first portion 112 may overlie the free region 110. In some embodiments, the first portion 112 directly overlies and contacts the free region 110. The first portion 112 may be formed over the free region 110 to form an interface 111 between the free region 110 and the tunnel barrier material 130. A crystal orientation of the MTJ 150 may not change at the interface 111 between the first portion 112 and the free region 110. By way of example and not limitation, each of the free region 110 and the first portion 112 may exhibit a bcc (001) crystal structure. As described in more detail below, each of the first portion 112 and the free region 110 may be amorphous (e.g., not crystalline) as formed, with the desired crystal structure occurring following an anneal. 
In some embodiments, the first portion 112 is an oxide material and may include MgO, Al2O3, TiO2, Ta2O5, RuO2, B2O3, or combinations thereof.The second portion 114 may overlie the first portion 112. In some embodiments, the second portion 114 directly overlies and contacts the first portion 112. An interface 113 between the first portion 112 and the second portion 114 may be smooth and exhibit the same crystal orientation as the first portion 112 and the free region 110 (e.g., a bcc (001) crystal structure). The second portion 114 may be an oxide material and may include MgO, Al2O3, TiO2, Ta2O5, RuO2, B2O3, or combinations thereof. The first portion 112 and the second portion 114 may include the same material. In some embodiments, the first portion 112 and the second portion 114 include MgO. In some such embodiments, the second portion 114 includes a ratio of oxygen to magnesium closer to stoichiometric (e.g., 1:1) than the first portion 112. Thus, the second portion 114 may have less oxygen vacancies and less interstitial oxygen and also a higher density than the first portion 112. The second portion 114 may exhibit less structural defects than the first portion 112 and, in some embodiments, may exhibit a higher TMR and a lower RA than the first portion 112.The tunnel barrier material 130 may have a total thickness (i.e., a sum of a thickness of the first portion 112 and a thickness of the second portion 114) of between about 10 Å and about 30 Å, such as between about 10 Å and about 15 Å, between about 15 Å and about 20 Å, between about 20 Å and about 25 Å, or between about 25 Å and about 30 Å. The tunnel barrier material 130 may have a thickness of between about 10 Å and about 20 Å. In some embodiments, the thickness of the tunnel barrier material 130 is about 18 Å.The first portion 112 and the second portion 114 may have the same thickness, the first portion 112 may have a greater thickness than the second portion 114, or the second portion 114 may have a greater thickness than the first portion 112. The RA and the TMR of the MTJ 150 may be tailored by altering the thickness of the first portion 112 relative to the thickness of the second portion 114 of the tunnel junction material 130. A ratio of the thickness of the first portion 112 to the thickness of the second portion 114 may be between about 0.9 and about 2.0, such as between about 0.9 and about 1.0, between about 1.0 and about 1.25, between about 1.25 and about 1.5, between about 1.2 and about 1.8, or between about 1.5 and about 2.0. In some embodiments, the ratio is between about 1.0 and about 1.5 and the total thickness of the tunnel barrier material 130 is about 18 Å.The tunnel barrier material 130 having the first portion 112 and the second portion 114 may exhibit a TMR of between about 180% and about 600%, such as between about 180% and about 200%, between about 180% and about 225%, between about 180% and about 300%, between about 200% and about 220%, between about 220% and about 250%, between about 250% and about 300%, between about 300% and about 400%, or between about 400% and about 600%. In some embodiments, the TMR is between about 180% and about 300%. The tunnel barrier material 130 may exhibit a RA of between about 3 ohm μm2and about 8 ohm μm2, such as between about 3 ohm μm2and about 4 ohm μm2, between about 4 ohm μm2and about 5 ohm μm2, between about 5 ohm μm2and about 6 ohm μm2, between about 6 ohm μm2and about 7 ohmor between about 7 ohm μm2and about 8 ohm μm2. 
In some embodiments, the RA is between about 6 ohm μm2and about 7 ohm μm2. In other embodiments, the tunnel barrier material 130 exhibits an RA of between about 4 ohm μm2and about 8 ohm μm2and a TMR of between about 180% and about 205%. By way of non-limiting example, the tunnel barrier material 130 may exhibit a RA of about 4 ohm μm2and a TMR of about 180%, or a RA of about 8 ohm μm2and a TMR of about 205% at a thickness of between about 10 Å and about 20 Å.The fixed region 140 may overlie the tunnel barrier material 130. In some embodiments, the fixed region 140 directly overlies and contacts the second portion 114 of the tunnel barrier material 130.The fixed region 140 may include one or more magnetic materials and, optionally, one or more non-magnetic materials. For example, the fixed region 140 may be configured as a synthetic antiferromagnet including a sub-region of ruthenium or tantalum adjoined by magnetic sub-regions. The magnetic sub-regions may include a material including cobalt, and at least one of palladium and platinum, and combinations thereof, a CoFeB material, and combinations thereof. Alternatively, the fixed region 140 may be configured with structures of alternating sub-regions of magnetic material and coupler material. Each of the magnetic sub-regions may include one or more materials and one or more regions therein. As another example, the fixed region 140 may be configured as a single, homogeneous magnetic material. Accordingly, the fixed region 140 may have uniform magnetization, or sub-regions of differing magnetization that, overall, effect the fixed region 140 having a fixed magnetic orientation during use and operation of the magnetic memory cell 100.The fixed region 140 may include a first magnetic portion 116 over the second portion 114 of the tunnel barrier material 130, a coupling material 118 over the first magnetic portion 116, and a second magnetic portion 120 over the coupling material 118. In some embodiments, the first magnetic portion 116 includes a first magnetic sub-region 116a that may include a CoFeB material overlying the second portion 114, a spacer 116b that may include a tantalum material overlying the first portion 116a, and a second magnetic sub-region 116c that may include a material including cobalt and at least one of palladium and platinum (e.g., CoPd, CoPt) over the spacer 116b. The coupling material 118 may include a ruthenium material overlying the second magnetic sub-region 116c of the first magnetic portion 116. The second magnetic portion 120 may include a material including cobalt, palladium, platinum, and combinations thereof, such as cobalt and at least one of palladium and platinum. In some embodiments, the second magnetic portion 120 includes the same material as the second magnetic sub-region 116c of the first magnetic portion 116.In other embodiments, the first magnetic portion 116 includes an artificial superlattice structure and the second magnetic portion 120 includes another artificial superlattice structure overlying the coupling material 118. Referring to FIG.2, the artificial superlattice structure of the first magnetic portion 116 may include alternating portions of a magnetic material 117 and a conductive material 115. The conductive material 115 may enable the magnetic material 117 to exhibit a perpendicular anisotropy (i.e., a vertical magnetic orientation). The magnetic material 117 may include cobalt, iron, and combinations thereof. 
The conductive material 115 may include at least one of platinum, palladium, nickel, and iridium. In some embodiments, the magnetic material 117 includes cobalt and the conductive material 115 includes platinum. Although FIG.2 depicts six regions of magnetic material 117 and six regions of conductive material 115 in the first magnetic portion 116, the artificial superlattice structure of the first magnetic portion 116 is not so limited and may include any number (e.g., one, two, three, four, five, etc.) of alternating regions of magnetic material 117 and conductive material 115.In some embodiments, a region of the conductive material 115 of the first magnetic portion 116 may directly overlie and contact the second portion 114 of the tunnel barrier material 130. For example, a region of the conductive material 115 may directly overlie and contact the second portion 114 of the tunnel barrier material 130. In other embodiments, a region of the magnetic material 117 may directly overlie and contact the second portion 114 of the tunnel barrier material 130.Referring back to FIG.1, the coupling material 118 may overlie the first magnetic portion 116. In some embodiments, the coupling material 118 directly overlies and contacts the first magnetic portion 116 (e.g., the second magnetic sub-region 116c of the first magnetic portion 116). The coupling material 118 may include tantalum, ruthenium, rhodium, and combinations thereof. In some embodiments, the coupling material 118 is ruthenium. The coupling material 118 may have a thickness between about 1 Å and about 10 Å. In some embodiments, the coupling material 118 has a thickness between about 4 Å and about 5 Å.The second magnetic portion 120 may directly overlie the coupling material 118. The second magnetic portion 120 may include the same materials and may be substantially the same as at least a portion of the first magnetic portion 116. In some embodiments, the second magnetic portion 120 includes a material including cobalt and at least one of palladium and platinum and may include the same material as the second magnetic sub-region 116c of the first magnetic portion 116.The first magnetic portion 116 and the second magnetic portion 120 of the fixed region 140 may include a fixed magnetic orientation, indicated by arrows 119. The fixed magnetic orientation may be north, south, east, west, etc. The fixed magnetic orientation of the first magnetic portion 116 and the second magnetic portion 120 may be the same or may be different.One or more upper intermediary regions 124 may, optionally, be disposed over the fixed region 140. The upper intermediary region 124, if included, may be configured to inhibit diffusion of species between the upper electrode 126 and underlying materials during operation of the memory cell. The upper intermediary region 124 may include a conductive material (e.g., one or more materials such as copper, tantalum, titanium, tungsten, ruthenium, tantalum nitride, titanium nitride) that may form a conductive capping region.The upper electrode 126 may overlie the upper intermediary region 124. The upper electrode 126 may include copper, tungsten, platinum, palladium, titanium, tantalum, nickel, titanium nitride, tantalum nitride, tungsten nitride, polysilicon, a metal silicide, a metal alloy, or combinations thereof. 
In some embodiments, the upper electrode 126 includes the same materials as the lower electrode 104.The magnetic memory cell 100 of FIG.1 is configured as a“top pinned” memory cell (i.e., a memory cell in which the fixed region 140 is disposed over the free region 110). However, in other embodiments, such as that of FIG.3, a free region 110ƍ may overlie a fixed region 140ƍ. Thus, with reference to FIG.3, a magnetic memory cell 100ƍ including a MTJ 150ƍ may be configured as a“bottom pinned” memory cell. The magnetic memory cell 100ƍ may include a magnetic cell core 101ƍ disposed between the lower electrode 104 and the top electrode 126.The magnetic memory cell 100ƍ may include a lower intermediary region 106 overlying the lower electrode 104. The seed material 108 may overlie the lower intermediary region 106, if present. In other embodiments, the seed material 108 may directly overlie and contact the lower electrode 104. The seed material 108 may be the same as described above with reference to FIG.1.The fixed region 140ƍ may directly overlie and contact the seed material 108. The fixed region 140ƍ may include a fixed magnetic orientation, indicated by arrows 119. The fixed region 140ƍ may include the same materials described above with reference to fixed region 140. In some embodiments, the fixed region 140ƍ includes a second magnetic portion 120ƍ, a coupling material 118ƍ, and a first magnetic portion 116ƍ. The first magnetic portion 116ƍ may include a first magnetic sub-region 116aƍ, a spacer 116bƍ, and a second magnetic sub-region 116cƍ. The first magnetic sub-region 116aƍ, the spacer 116bƍ, and the second magnetic sub-region 116cƍ may be the same as the first magnetic sub-region 116a, the spacer 116b, and the second magnetic sub-region 116c, respectively, described above with reference to FIG.1. Each of the first magnetic portion 116ƍ, the coupling material 118ƍ, and the second magnetic portion 120ƍ may be the same as the first magnetic portion 116, the coupling material 118, and the second magnetic portion 120, respectively, described above with reference to FIG.1. However, the fixed region 140ƍ may not directly overlie the tunnel barrier material 130 as in the magnetic memory cell 100 of FIG.1. Rather, the second magnetic portion 120ƍ of the fixed region 140ƍ may directly overlie and contact the underlying seed material 108. The coupling material 118ƍ may overlie the second magnetic portion 120ƍ and the first magnetic portion 116ƍ may overlie the coupling material 118ƍ.The tunnel barrier material 130 may overlie the fixed region 140ƍ. The first portion 112 of the tunnel barrier material 130 may directly overlie and contact the fixed region 140ƍ. The first portion 112 may be formed over the fixed region 140ƍ to form an interface 111ƍ between the fixed region 140ƍ and the tunnel barrier material 130. The fixed region 140ƍ may exhibit a crystal structure that is aligned with a crystal structure of the first portion 112. By way of example and not limitation, each of the first portion 112 and the fixed region 140ƍ may exhibit a bcc (001) crystal structure without a change in the crystal structure of the MTJ 150ƍ at the interface 111ƍ.The tunnel barrier material 130 may include the same materials as described above with reference to FIG.1. Thus, each of the first portion 112 and the second portion 114 of the tunnel barrier material 130 may be the same as described above with reference to FIG.1. 
An interface 113ƍ between the first portion 112 and the second portion 114 may be smooth and exhibit the same crystal orientation as the interface 111ƍ on which the first portion 112 is formed. The tunnel barrier material 130 may be disposed directly between the fixed region 140ƍ and the free region 110ƍ.The free region 110ƍ may directly overlie and contact the tunnel barrier material 130. In some embodiments, the free region 110ƍ directly overlies and contacts the second portion 114 of the tunnel barrier material 130. The free region 110ƍ may include the same materials as described above with reference to FIG.1. The free region 110ƍ may include a switchable magnetic orientation, indicated by arrows 109.The optional upper intermediary region 124 may overlie the free region 110ƍ. The upper electrode 126 may overlie the upper intermediary region 124, if present.The memory cells of embodiments of the disclosure may be configured as “out-of-plane” STT-MRAM cells.“Out-of-plane” STT-MRAM cells may include magnetic regions exhibiting a magnetic orientation that is predominately oriented in a vertical direction (e.g., a direction that is perpendicular to a width and length of the respective region or a direction that is perpendicular to a primary surface of the substrate on which the STT-MRAM cell is located). For example, as illustrated in FIG.1 and FIG.3, an STT-MRAM cell (e.g., magnetic memory cell 100, magnetic memory cell 100ƍ) may be configured to exhibit a vertical magnetic orientation in at least one of the magnetic regions (e.g., the free region 110, 110ƍ and the fixed region 140, 140ƍ). As indicated in FIG.1 and FIG.3, each of the free region 110, 110ƍ and the fixed region 140, 140ƍ may exhibit a vertical magnetic orientation as indicated by the arrows 109 and the arrows 119. The magnetic orientation of the fixed region 140, 140ƍ may remain directed in essentially the same direction throughout use and operation of the STT-MRAM cell, for example, in the direction indicated by arrows 119. The magnetic orientation of the free region 110, 110ƍ, on the other hand, may be switched during use and operation of the cell, between a parallel configuration and an anti-parallel configuration, as indicated by the arrows 109. As another example, as illustrated in FIG.4, an in-plane magnetic memory cell 100Ǝ including a magnetic cell core 101Ǝ may be configured to exhibit a horizontal magnetic orientation in at least one of the magnetic regions (e.g., a free region 110Ǝ and a fixed region 140Ǝ) of an MTJ 150Ǝ, as indicated by arrow 109ƍ in the free region 110Ǝ and arrow 119ƍ in the fixed region 140Ǝ.A semiconductor device may include at least one magnetic memory cell including the magnetic cell cores 101, 101ƍ, 101Ǝ of the disclosure disposed between a pair of electrodes.Accordingly, a semiconductor device is disclosed. The semiconductor device comprises a magnetic tunnel junction over a seed material on a substrate, the magnetic tunnel junction exhibiting a tunnel magnetoresistance of between about 180% and about 300% and comprising a magnetic material over the seed material, an oxide material over the magnetic material, another oxide material over the oxide material, the oxide material and the another oxide material having a thickness of between about 10 Å and about 20 Å, and another magnetic material over the another oxide material. Referring to FIG.5A through FIG.5C, a method of forming the magnetic memory cell 100 of FIG.1 is shown. 
The method may include forming a magnetic memory cell 200 over a substrate 202. A lower electrode material 204 may be formed over the substrate 202. The lower electrode material may include any of the materials described above with reference to the lower electrode 104.An intermediary region material 206 may, optionally, be formed over the lower electrode material 204. The lower intermediary region material 206 may be formed from any of the materials described above with reference to the lower intermediary region 106. In some embodiments, the lower intermediary region material 206 may be integral with the conductive material of the lower electrode material 204. For example, the lower intermediary region material 206 may be an upper-most sub-region of the lower electrode material 204.A seed material 208 may be formed over the lower intermediary region material 206, if present, or the lower electrode material 204. The seed material 208 may be formed as described above with reference to FIG.1. Each portion of the seed material 208 may be formed by sputter deposition, such as by magnetron sputtering (e.g., high-power impulse magnetron sputtering (HIPIMS), dc magnetron sputtering, etc.), ion-beam sputtering, or other PVD methods. The seed material 208 may be also formed by at least one of atomic layer deposition (ALD), chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), low pressure chemical vapor deposition (LPCVD), or other film deposition processes.A free region material 210 may be formed over the seed material 208. The free region material 210 may be formed of any of the materials described above with reference to the free region 110. For example, the free region material 210 may include a CoFeB material. In other embodiments, the free region material 210 may include an artificial superlattice structure material formed of alternating portions of the magnetic material 117 and the conductive material 115, as described above with reference to the first magnetic portion 116 of FIG.2. The free region material 210 may be amorphous when formed and may be formed at a temperature such that the free region material 210 remains in an amorphous state. The free region material 210 may exhibit a switchable magnetic orientation, indicated by arrows 209.As shown in FIGs.5A and 5B, a tunnel barrier material 230 may be formed over the free region material 210. The tunnel barrier material 230 may include a first portion 212 and a second portion 214. The first portion 212 of the tunnel barrier material 230 may be formed over the free region material 210 to form an interface 211. The first portion 212 of the tunnel barrier material 230 may be formed from the same materials as described above with reference to the first portion 112 of the tunnel barrier material 130.The first portion 212 may be formed by at least one of ALD, CVD, PECVD, LPCVD, PVD, or other film deposition processes. In some embodiments, the first portion 212 is formed by sputter deposition, such as by magnetron sputtering (e.g., high-power impulse magnetron sputtering (HIPIMS), DC sputtering, etc.), RF sputtering, electron beam physical vapor deposition, ion-beam reactive sputtering, or other PVD methods. In someembodiments, the first portion 212 is formed from MgO. The first portion 212 may be formed as MgO, rather than formed as a magnesium portion that is subsequently oxidized to MgO by exposing the magnesium portion to oxidizing conditions. 
The source of the MgO may be a single crystal MgO or a multi-crystal MgO deposition source or sputtering target.The first portion 212 may be formed over the free region material 210 at a first temperature such that the underlying free region material 210 is not crystallized. In other words, the free region material 210 may remain amorphous during formation of the first portion 212 of the tunnel barrier material 230. As formed, the first portion 212 may be amorphous or crystalline. In some embodiments, the free region material 210 includes a CoFeB material that remains amorphous during formation of the first portion 212. In some embodiments, the first portion 212 is crystalline when initially formed. The first portion 212 may be formed at a temperature between about -150°C and about 150°C, such as between about -150°C and about 0°C, between about 0°C and about 25°C, between about 20°C and about 25°C, between about 25°C and about 50°C, or between about 50°C and about 150°C. In some embodiments, the first portion 212 is formed at room temperature (e.g., between about 20°C and about 25°C). If the first portion 212 is formed at room temperature, the underlying free region material 210 may remain in its amorphous state.After forming the first portion 212 of the tunnel barrier material 230 over the free region material 210, the free region material 210 and the first portion 212 may be annealed, such as by thermal annealing. Exposing the free region material 210 and the first portion 212 to annealing conditions may crystallize the free region material 210 from the interface 211 through the free region material 210. After annealing the first portion 212 and the free region material 210, the free region material 210 may have a crystal structure that is aligned with (i.e., matched to) a crystal structure of the first portion 212. In some embodiments, a CoFeB free region material 210 is crystallized from the interface 211 and includes the same crystal structure as the first portion 212 including MgO. Annealing the first portion 212 may also cause any oxygen vacancies within the first portion 212 to fill with oxygen, increasing the stoichiometry of the first portion 212 of the tunnel barrier material 230. By way of non-limiting example, where the first portion 212 includes MgO, annealing the first portion 212 may attract oxygen to the first portion 212, filling any oxygen vacancies that may have been formed during the low temperature formation of the first portion 212.To anneal the free region material 210 and the first portion 212, the free region material 210 and the first portion 212 may be exposed to a temperature sufficient to crystalize the free region material 210 and for a sufficient amount of time. Exposing the first portion 212 to annealing conditions may increase the crystal quality of the first portion 212 upon which the second portion 214 may be subsequently formed, as described in more detail below. The annealing may also form a smooth surface of the first portion 212, upon which the second portion 214 is formed. The free region material 210 and the first portion 212 may be exposed to a temperature of between about 300°C and about 600°C for between about 60 seconds and about one hour (1 hr.). The free region material 210 and the first portion 212 may be exposed to a temperature of between about 300°C and about 350°C, between about 350°C and about 400°C, between about 400°C and about 500°C, or between about 500°C and about 600°C. 
The exposure time may be between about 60 seconds and about five minutes, between about 5 minutes and 15 minutes, between about 15 minutes and about 30 minutes, or between about 30 minutes and about 60 minutes.Referring to FIG.5B, after annealing the free region material 210 and the first portion 212, the second portion 214 of the tunnel barrier material 230 may be formed. The second portion 214 may be formed directly over and in contact with the first portion 212. The second portion may be formed of the same materials described above with reference to the second portion 114. In some embodiments, the second portion 214 is formed of the same material as the first portion 212. The first portion 212 of the tunnel barrier material 230 that has been annealed may act as a seed upon which the second portion 214 is formed, such that the crystal structure of the second portion 214 matches the crystal structure of the first portion 212. An exposed surface of the first portion 212 may be a seed upon which the second portion 214 is formed to the same crystal orientation as the first portion 212 and the free region material 210. The first portion 212 and the second portion 214 may exhibit the same crystal orientation at an interface 213.The second portion 214 of the tunnel barrier material 230 may be formed at a second temperature that is higher than the first temperature at which the first portion 212 is formed. The second portion 214 may be formed by one of the same methods described above for forming the first portion 212. For example, the second portion 214 may be formed by sputter deposition, such as by at least one of ALD, CVD, PECVD, LPCVD, PVD, or other film deposition processes. In some embodiments, the second portion 214 is formed by sputter deposition, such as by magnetron sputtering (e.g., high-power impulse magnetron sputtering (HIPIMS), DC sputtering, etc.), RF sputtering, electron beam physical vapor deposition, ion-beam reactive sputtering, or other PVD methods. However, the second portion 214 may be formed at a different, higher temperature than the first portion 212 is formed. For example, the second portion 214 may be formed at a temperature between about 300°C and about 600°C, as described above. The second portion 214 may be formed at the same temperature as the temperature at which the first portion 212 and the free region material 210 are annealed. In other embodiments, the second portion 214 may be formed at a different temperature than the temperature at which the first portion 212 and the free region material 210 are annealed. By way of non-limiting example, the second portion 214 may be formed at a temperature between about 300°C and about 600°C, such as between about 300°C and about 350°C, between about 350°C and about 400°C, between about 400°C and about 500°C, or between about 500°C and about 600°C. Forming the second portion 214 at an elevated temperature may form a more stoichiometric material having an increased crystal quality. For example, where the second portion 214 includes MgO, the second portion 214 may include a stoichiometric amount of oxygen with less oxygen vacancies and less interstitial oxygen than the first portion 212. In some embodiments, each of the first portion 212 and the second portion 214 have a ratio of magnesium to oxygen of approximately one to one.The second portion 214 may be formed to the same thickness, a greater thickness, or a lesser thickness than the first portion 212. 
In some embodiments, the ratio is about 1.5 and the total thickness of the tunnel barrier material 130 is about 18 Å. The ratio of the thickness of the first portion 212 to the second portion 214 may be tailored to increase the TMR and decrease the RA of the tunnel barrier material 230. The tunnel barrier material 230 may be formed to exhibit a TMR of between about 180% and about 600% and an RA of between about 3 ohm μm2and about 8 ohm μm2, as described above with reference to the tunnel barrier material 130. In some embodiments, the thickness of the second portion 214 is less than a thickness of the first portion 212.With reference to FIG.5C, a fixed region material 240 may be formed over the second portion 214 of the tunnel barrier material 230. The fixed region material 240 may include a first magnetic material 216 over the second portion 214 of the tunnel barrier material 230, a coupling material 218 over the first magnetic material 216, and a second magnetic material 220 over the coupling material 218. The first magnetic material 216 may include a first magnetic sub-region 216a, a spacer material 216b, and a second magneticsub-region 216c. Each of the first magnetic sub-region 216a, the spacer material 216b, and the second magnetic sub-region 216c may be formed of the same materials as the first magnetic sub-region 116a, the spacer 116b, and the second magnetic sub-region 116c, respectively, described above. Each of the first magnetic material 216, the coupling material 218, and the second magnetic material 220 may be formed of the same materials as the first magnetic portion 116, the coupling material 118, and the second magnetic portion 120, respectively, described above. The first magnetic material 216 and the second magnetic material 220 of the fixed region material 240 may include a fixed magnetic orientation, indicated by arrows 219.The coupling material 218 may be formed over the first magnetic material 216 (e.g., over the second magnetic sub-region 216c of the first magnetic material 216). The coupling material 218 may be formed between the first magnetic material 216 and the second magnetic material 220. The coupling material 218 may be formed by at least one of ALD, CVD, PVD, PECVD, LPCVD, or other film deposition processes.The second magnetic material 220 may be formed directly over the coupling material 218. The second magnetic material 220 may be formed in the same manner and from the same materials as the first magnetic material 216.An upper intermediary region material 224 may optionally be formed over the second magnetic material 220 and may include the same materials as the lower intermediary region material 206. An upper electrode material 226 may be formed over the upper intermediary region material 224, if present, or over the second magnetic material 220. The upper electrode material 226 may be formed of the same materials as described above with reference to the upper electrode 126.The magnetic memory cell 200 may be processed to form the magnetic memory cell 100 as shown in FIG.1. The magnetic memory cell 200 structure may be processed by conventional photolithography, material removal, etching, or other processes that are not described in detail herein.Although the magnetic memory cell 200 described with reference to FIG.5A through FIG.5C describes forming the magnetic memory cell 100 of FIG.1, the magnetic memory cell 100ƍ of FIG.3 may be formed by similar methods. 
However, the fixed region material 240 would be formed over the seed material 208, the first portion 212 of the tunnel barrier material 230 would be formed over the fixed region material 240, and the free region material 210 would be formed over the second portion 214 of the tunnel barrier material 230, resulting in the magnetic memory cell of FIG.3. In other embodiments, the magnetic memory cell 100Ǝ of FIG.4 may be formed by forming the free region material 210 and the fixed region material 240 to exhibit a horizontal magnetic orientation.Forming the tunnel barrier material 230 from the first portion 212 and the second portion 214 may increase the TMR and decrease the RA of the magnetic tunnel junction. The MTJ 150 may be substantially free of defects such as oxygen vacancies or interstitial oxygen within the crystal structure of the tunnel barrier material 230. The tunnel barrier material 230 may, therefore, exhibit improved tunneling characteristics at a high TMR and a low RA.Accordingly, a method of forming a semiconductor device is disclosed. The method comprises forming a magnetic material over an electrode on a substrate, forming a first tunnel barrier material over the magnetic material, annealing the magnetic material and the first tunnel barrier material, forming a second tunnel barrier material over the annealed first tunnel barrier material, forming another magnetic material over the second tunnel barrier material, and forming another electrode over the another magnetic material.Accordingly, a method of forming a magnetic tunnel junction is disclosed. The method comprises forming at a first temperature a barrier material over a magnetic material, annealing the barrier material and the magnetic material, forming at a second temperature another barrier material over the annealed barrier material, and forming another magnetic material over the another barrier material.Accordingly, a method of forming a semiconductor device is disclosed. The method comprises forming a seed material over a substrate, forming a magnetic material over the seed material, forming at a first temperature an oxide material over the magnetic material, forming at a second temperature higher than the first temperature, another oxide material over the oxide material, and forming another magnetic material over the another oxide material.With reference to FIG.6, illustrated is an STT-MRAM system 600 that includes peripheral devices 612 in operable communication with an STT-MRAM cell 614, a grouping of which may be fabricated to form an array of memory cells in a grid pattern including a number of rows and columns, or in various other arrangements, depending on the system requirements and fabrication technology. The STT-MRAM cell 614 may include a magnetic cell core 601, an access transistor 603, a conductive material that may function as a data/sense line 604 (e.g., a bit line), a conductive material that may function as an access line 605 (e.g., a word line) and a conductive material that may function as a source line 606. The peripheral devices 612 of the STT-MRAM system may include read/write circuitry 607, a bit line reference 608, and a sense amplifier 609. The magnetic cell core 601 may be any one of the magnetic cell cores 101, 101ƍ, 101Ǝ described above. 
Due to the structure of the cell core 601, the method of fabrication, or both, the STT-MRAM cell 614 may have a high TMR and a low resistance (e.g., low RA product).In use and operation, when an STT-MRAM cell 614 is selected to be programmed, a programming current is applied to the STT-MRAM cell 614, and the current is spin-polarized by the fixed region of the magnetic cell core 601 and exerts a torque on the free region of the cell core 601, which switches the magnetization of the free region to“write to” or“program” the STT-MRAM cell 614. In a read operation of the STT-MRAM cell 614, a current is used to detect the resistance state of the magnetic cell core 601.To initiate programming of the STT-MRAM cell 614, the read/write circuitry 607 may generate a write current (i.e., a programming current) to the data/sense line 604 and the source line 606. The polarity of the voltage between the data/sense line 604 and the source line 606 determines the switch in magnetic orientation of the free region in the magnetic cell core 601. By changing the magnetic orientation of the free region with the spin polarity, the free region is magnetized according to the spin polarity of the programming current and the programmed logic state is written to the STT-MRAM cell 614.To read the STT-MRAM cell 614, the read/write circuitry 607 generates a read voltage to the data/sense line 604 and the source line 606 through the cell core 601 and the access transistor 603. The programmed state of the STT-MRAM cell 614 relates to the electrical resistance across the cell core 601, which may be determined by the voltage difference between the data/sense line 604 and the source line 606. In some embodiments, the voltage difference may be compared to the bit line reference 608 and amplified by the sense amplified 609.FIG.6 illustrates one example of a STT-MRAM system 600 including at least one magnetic memory cell. It is contemplated, however, that the magnetic cell cores 101, 101ƍ, 101Ǝ may be incorporated and utilized within any STT-MRAM system configured to incorporate a magnetic cell core having magnetic regions. It is also contemplated that the magnetic cell cores 101, 101ƍ, 101Ǝ may be used in other magnetic memory cells besides STT-MRAM cells.With reference to FIG.7, illustrated is a simplified block diagram of a semiconductor device 700 implemented according to one or more embodiments described herein. The semiconductor device 700 includes a memory array 702 and a control logic component 704. The memory array 702 may include a plurality of STT-MRAM cells 614 (FIG.6) including any of the magnetic cell cores (e.g., the magnetic cell cores 101, 101ƍ, 101Ǝ of FIG.1, FIG.3, and FIG.4, respectively) discussed above, which magnetic cell cores (e.g., the magnetic cell cores 101, 101ƍ, 101Ǝ) may have been formed according to a method described above and may be operated according to a method described above. The control logic component 704 may be configured to operatively interact with the memory array 702 so as to read from or write to any or all memory cells (e.g., STT-MRAM cell 614 (FIG.6)) within the memory array 702.Accordingly, a semiconductor device is disclosed. 
The semiconductor device comprises an array of magnetic cell structures, each magnetic cell structure comprising a magnetic tunnel junction over an electrode on a substrate, each magnetic tunnel junction comprising a magnetic material over the substrate, a first tunnel barrier material over the magnetic material, a second tunnel barrier material over the first tunnel barrier material, and another magnetic material over the second tunnel barrier material, each magnetic tunnel junction configured to exhibit a tunnel magnetoresistance of between about 180% and about 600% at a resistance area product of less than about 8 ohm μm2. The semiconductor device further comprises another electrode over the another magnetic material.With reference to FIG.8, depicted is a processor-based system 800. Theprocessor-based system 800 may include various electronic devices manufactured in accordance with embodiments of the present disclosure. The processor-based system 800 may be any of a variety of types such as a computer, pager, cellular phone, personal organizer, control circuit, or other electronic device. The processor-based system 800 may include one or more processors 802, such as a microprocessor, to control the processing of system functions and requests in the processor-based system 800. The processor 802 and other subcomponents of the processor-based system 800 may include magnetic memory devices manufactured in accordance with embodiments of the present disclosure.The processor-based system 800 may include a power supply 804 in operable communication with the processor 802. For example, if the processor-based system 800 is a portable system, the power supply 804 may include one or more of a fuel cell, a power scavenging device, permanent batteries, replaceable batteries, and rechargeable batteries. The power supply 804 may also include an AC adapter; therefore, the processor-based system 800 may be plugged into a wall outlet, for example. The power supply 804 may also include a DC adapter such that the processor-based system 800 may be plugged into a vehicle cigarette lighter or a vehicle power port, for example.Various other devices may be coupled to the processor 802 depending on the functions that the processor-based system 800 performs. For example, a user interface 806 may be coupled to the processor 802. The user interface 806 may include input devices such as buttons, switches, a keyboard, a light pen, a mouse, a digitizer and stylus, a touch screen, a voice recognition system, a microphone, or a combination thereof. A display 808 may also be coupled to the processor 802. The display 808 may include an LCD display, an SED display, a CRT display, a DLP display, a plasma display, an OLED display, an LED display, a three-dimensional projection, an audio display, or a combination thereof. Furthermore, an RF sub-system/baseband processor 810 may also be coupled to the processor 802. The RF sub-system/baseband processor 810 may include an antenna that is coupled to an RF receiver and to an RF transmitter (not shown). A communication port 812, or more than one communication port 812, may also be coupled to the processor 802. 
The communication port 812 may be adapted to be coupled to one or more peripheral devices 814, such as a modem, a printer, a computer, a scanner, or a camera, or to a network, such as a local area network, remote area network, intranet, or the Internet, for example.The processor 802 may control the processor-based system 800 by implementing software programs stored in the memory. The software programs may include an operating system, database software, drafting software, word processing software, media editing software, or media playing software, for example. The memory is operably coupled to the processor 802 to store and facilitate execution of various programs. For example, the processor 802 may be coupled to system memory 816, which may include one or more of spin torque transfer magnetic random access memory (STT-MRAM), magnetic random access memory (MRAM), dynamic random access memory (DRAM), static random access memory (SRAM), racetrack memory, and other known memory types. The system memory 816 may include volatile memory, non-volatile memory, or a combination thereof. The system memory 816 is typically large so that it can store dynamically loaded applications and data. In some embodiments, the system memory 816 may include semiconductor devices, such as the semiconductor device 700 of FIG.7, memory cells including any of the magnetic cell cores 101, 101ƍ, 101Ǝ of FIG.1, FIG.3, and FIG.4, respectively, described above, or a combination thereof.The processor 802 may also be coupled to non-volatile memory 818, which is not to suggest that system memory 816 is necessarily volatile. The non-volatile memory 818 may include one or more of STT-MRAM, MRAM, read-only memory (ROM) such as anEPROM, resistive read-only memory (RROM), and flash memory to be used in conjunction with the system memory 816. The size of the non-volatile memory 818 is typically selected to be just large enough to store any necessary operating system, application programs, and fixed data. Additionally, the non-volatile memory 818 may include a high capacity memory such as disk drive memory, such as a hybrid-drive including resistive memory or other types of non-volatile solid-state memory, for example. The non-volatile memory 818 may include semiconductor devices, such as the semiconductor device 700 of FIG.7, memory cells including any of the magnetic cell cores 101, 101ƍ, 101Ǝ of FIG.1, FIG.3, and FIG.4, respectively, or a combination thereof. EXAMPLESExampleFIG.9 is a graphical representation comparing the TMR vs. the RA of MTJs formed according to embodiments of the disclosure to MTJs formed by conventional methods. A MgO tunnel barrier material was formed by RF sputtering at about 20°C over a CoFeB magnetic material. The MgO and the CoFeB were annealed at a temperature of about 500°C to crystallize the CoFeB magnetic material in the same crystal orientation as the MgO. A second MgO tunnel barrier material was formed by RF sputtering at about 500°C over the annealed MgO. Another CoFeB magnetic material was formed over the second MgO tunnel barrier material. A tantalum material was formed over the CoFeB magnetic material and a cobalt/palladium magnetic material was formed over the tantalum to complete the MTJ structure. A conventional MTJ was formed by forming an MgO tunnel barrier material over a CoFeB magnetic material at room temperature. The MgO and the CoFeB were annealed at a temperature of about 500°C. Another CoFeB magnetic material was formed over the MgO. 
The TMR and the RA of the MTJ structures were measured using conventional techniques. The upper left line of FIG.9 shows the TMR and the RA of MTJs formed according to embodiments of the disclosure and the lower right line shows the TMR and the RA of the MTJs formed by conventional methods. As shown in the graph, the MTJs formed by the methods disclosed herein exhibit a higher TMR at a lower RA than MTJs formed by conventional methods.FIG.10 is a graphical representation of the TMR vs. the RA of MTJs formed according to the present disclosure. A first MgO tunnel barrier material was formed to a first thickness (“X”) over a CoFeB magnetic material. The MgO was formed by RF sputtering at about 20°C. The MgO and the CoFeB were annealed at a temperature of about 500°C to crystallize the CoFeB in the same crystal orientation as the MgO. A second MgO tunnel barrier material was formed by RF sputtering at about 500°C over the annealed MgO. The second MgO tunnel barrier material was formed to a second thickness (“Y”). Another CoFeB magnetic material was formed over the second MgO tunnel barrier material. A tantalum material was formed over the CoFeB magnetic material and a cobalt/palladium magnetic material was formed over the tantalum to complete the MTJ structure. The TMR and RA of the MTJ structure were measured by conventional techniques. FIG.10 graphs the TMR and the RA of MTJs having different ratios of the thickness of the first MgO tunnel barrier material to the thickness of the second MgO tunnel barrier material (i.e., X/Y). Accordingly, the ratio of X/Y may be tailored to form a MTJ exhibiting a desired TMR at a desired RA.While certain illustrative embodiments have been described in connection with the figures, those of ordinary skill in the art will recognize and appreciate that embodiments encompassed by the disclosure are not limited to those embodiments explicitly shown and described herein. Rather, many additions, deletions, and modifications to the embodiments described herein may be made without departing from the scope of embodimentsencompassed by the disclosure, such as those hereinafter claimed, including legal equivalents. In addition, features from one disclosed embodiment may be combined with features of another disclosed embodiment while still being encompassed within the scope of the disclosure. |
The present disclosure includes apparatuses, methods, and systems for validating data stored in memory using cryptographic hashes. An embodiment includes a memory, and circuitry configured to divide the memory into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash, validate, during a powering of the memory, data stored in each respective one of a first number of the plurality of segments using the cryptographic hash associated with that respective segment, and validate, after the powering of the memory, data stored in a second number of the plurality of segments, data stored in each respective one of a second number of the plurality of segments using the cryptographic hash associated with that respective segment. |
What is Claimed is:1. An apparatus, comprising:a memory; andcircuitry configured to:divide the memory into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash;validate, during a powering of the memory, data stored in each respective one of a first number of the plurality of segments using the cryptographic hash associated with that respective segment; andvalidate, after the powering of the memory, data stored in a second number of the plurality of segments, data stored in each respective one of a second number of the plurality of segments using the cryptographic hash associated with that respective segment.2. The apparatus of claim 1, wherein the circuitry is configured to validate the data stored in each respective one of the first number of the segments by: generating a different run-time cryptographic hash for the data stored in each respective one of the first number of the segments; andcomparing the run-time cryptographic hash generated for the data stored in each respective segment to the cryptographic hash associated with that respective segment.3. The apparatus of claim 1, wherein the circuitry is configured to validate the data stored in each respective one of the second number of the segments by: generating a different run-time cryptographic hash for the data stored in each respective one of the second number of the segments; andcomparing the run-time cryptographic hash generated for the data stored in each respective segment to the cryptographic hash associated with that respective segment.4. The apparatus of claim 1, wherein the circuitry is configured to:
send, after the powering of the memory, the data stored in each respective one of the first number of the segments to a host upon validating the data stored in that respective one of the first number of the segments; andsend the data stored in each respective one of the second number of the segments to the host upon validating the data stored in that respective one of the second number of the segments.5. The apparatus of any one of claims 1-4, wherein the memory comprises a secure array of memory cells.6. The apparatus of claim 5, wherein the circuitry includes:a register configured to define an address of the secure array; and a register configured to define a size of the secure array.7. The apparatus of any one of claims 1-4, wherein the circuitry includes a register configured to store the cryptographic hash associated with each respective segment, and wherein the register is inaccessible to a user of the memory.8. A method of operating memory, comprising:dividing a memory into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash;generating, during a powering of the memory, a different run-time cryptographic hash for data stored in each respective one of a first number of the plurality of segments;validating, during the powering of the memory, the data stored in each respective one of the first number of the plurality of segments by comparing the run-time cryptographic hash generated for the data stored in that respective segment to the cryptographic hash associated with that respective segment; generating, after the powering of the memory, a different run-time cryptographic hash for data stored in each respective one of a second number of the plurality of segments; andvalidating, after the powering of the memory, the data stored in each respective one of the second number of the plurality of segments by comparing
the run-time cryptographic hash generated for the data stored in that respective segment to the cryptographic hash associated with that respective segment.9. The method of claim 8, wherein the method includes:validating the data stored in each respective one of the first number of the plurality of segments upon the comparison for the first number of the plurality of segments indicating the run-time cryptographic hash generated for the data stored in that respective segment matches the cryptographic hash associated with that respective segment; andvalidating the data stored in each respective one of the second number of the plurality of segments upon the comparison for the second number of the plurality of segments indicating the run-time cryptographic hash generated for the data stored in that respective segment matches the cryptographic hash associated with that respective segment.10. The method of claim 8, wherein the method includes:remediating the data stored in each respective one of the first number of the plurality of segments upon the comparison for the first number of the plurality of segments indicating the run-time cryptographic hash generated for the data stored in that respective segment does not match the cryptographic hash associated with that respective segment; andremediating the data stored in each respective one of the second number of the plurality of segments upon the comparison for the second number of the plurality of segments indicating the run-time cryptographic hash generated for the data stored in that respective segment does not match the cryptographic hash associated with that respective segment.11. The method of claim 10, wherein:remediating the data stored in each respective one of the first number of the plurality of segments comprises recovering the data from the memory; and remediating the data stored in each respective one of the second number of the plurality of segments comprises recovering the data from the memory.12. A method of operating memory, comprising:
dividing a memory into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash;validating, during a powering of the memory, data stored in each respective one of a first number of the plurality of segments using the cryptographic hash associated with that respective segment;sending, after the powering of the memory, the data stored in each respective one of the first number of the plurality of segments to a host upon validating the data stored in that respective one of the first number of the plurality of segments; andvalidating, while sending the data stored in each respective one of the first number of the plurality of segments to the host, data stored in each respective one of a second number of the plurality of segments using the cryptographic hash associated with that respective segment.13. The method of claim 12, wherein the method includes sending, after sending the data stored in each respective one of the first number of the plurality of segments to the host, the data stored in each respective one of the second number of the plurality of segments to the host upon validating the data stored in that respective one of the second number of the plurality of segments.14. The method of any one of claims 12-13, wherein the method includes generating the cryptographic hash associated with each respective segment using authenticated commands received from the host.15. A system, comprising:a memory device having a memory, wherein:the memory is divided into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash; andthe memory device is configured to:validate, during a powering of the memory, data stored in each respective one of a first number of the plurality of segments using the cryptographic hash associated with that respective segment; and
validate, after the powering of the memory, data stored in each respective one of a second number of the plurality of segments using the cryptographic hash associated with that respective segment; anda host, wherein the host is configured to:receive, while the memory device is validating the data stored in the second number of the plurality of segments, the data stored in each respective one of the first number of the plurality of segments from the memory device upon the memory device validating the data stored in that respective one of the first number of the plurality of segments; andreceive, after receiving the data stored in each respective one of the first number of the plurality of segments from the memory device, the data stored in each respective one of the second number of the plurality of segments from the memory device upon the memory device validating the data stored in that respective one of the second number of the plurality of segments.16. The system of claim 15, wherein the memory device includes:a register configured to define an address of each respective one of the plurality of segments; anda register configured to define a size of each respective one of the plurality of segments.17. The system of claim 15, wherein the memory device includes:a register configured to provide an indication of a status of the validation of the data stored in each respective one of the plurality of segments; anda register configured to provide an indication of a result of the validation of the data stored in each respective one of the plurality of segments.18. The system of claim 15, wherein the memory device includes:a register configured to provide an indication of whether a remediation of the data stored in each respective one of the plurality of segments is allowed; a register configured to define an address in the memory from which the data stored in each respective one of the plurality of segments can be recovered during the remediation; and
a register configured to provide an indication of a result of the remediation of the data stored in each respective one of the plurality of segments.19. The system of any one of claims 15-18, wherein the first number of the plurality of segments comprise a particular quantity of segments defined by the host, and wherein the memory device includes a register configured to store the particular quantity of segments.20. The system of any one of claims 15-18, wherein the first number of the plurality of segments comprise a quantity of segments that can be validated by the memory device in a particular amount of time, and wherein the memory device includes a register configured to store the particular amount of time. |
VALIDATING DATA STORED IN MEMORY USING CRYPTOGRAPHICHASHESTechnical Field[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to validating data stored in memory using cryptographic hashes.Background[0002] Memory devices are typically provided as internal,semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others.[0003] Memory devices can be combined together to form a solid state drive (SSD), an embedded MultiMediaCard (e.MMC), and/or a universal flash storage (UFS) device. An SSD, e.MMC, and/or UFS device can include non volatile memory (e.g., NAND flash memory and/or NOR flash memory), and/or can include volatile memory (e.g., DRAM and/or SDRAM), among various other types of non-volatile and volatile memory. Non-volatile memory may be used in a wide range of electronic applications such as personal computers, portable memory sticks, digital cameras, cellular telephones, portable music players such as MP3 players, movie players, among others.[0004] Flash memory devices can include memory cells storing data in a charge storage structure such as a floating gate, for instance. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption. Resistance variable memory devices can include resistive memory cells that can store data based on
the resistance state of a storage element (e.g., a resistive memory element having a variable resistance).[0005] Memory cells can be arranged into arrays, and memory cells in an array architecture can be programmed to a target (e.g., desired) state. For instance, electric charge can be placed on or removed from the charge storage structure (e.g., floating gate) of a flash memory cell to program the cell to a particular data state. The stored charge on the charge storage structure of the cell can indicate a threshold voltage (Vt) of the cell. A state of a flash memory cell can be determined by sensing the stored charge on the charge storage structure (e.g., the Vt) of the cell.[0006] Many threats can affect the data stored in the memory cells of a memory device. Such threats can include, for example, faults occurring in the memory device, and/or threats from hackers or other malicious users. Such threats can cause significant financial loss, and/or can present significant safety and/or security issues.Brief Description of the Drawings[0007] Figure 1 illustrates a diagram of a portion of a memory array having a number of physical blocks in accordance with an embodiment of the present disclosure.[0008] Figure 2 is a block diagram of a computing system including a host and an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure.[0009] Figure 3A illustrates an example of registers used to define a secure memory array in accordance with an embodiment of the present disclosure.[0010] Figure 3B illustrates a diagram of a portion of a memory array that includes a secure memory array defined in accordance with an embodiment of the present disclosure.[0011] Figure 4 illustrates an example of registers used to divide data stored in a memory array into a plurality of segments, and validate and remediate the data stored in each respective segment, in accordance with an embodiment of the present disclosure.
[0012] Figure 5 illustrates a method of validating a segment of data stored in memory using cryptographic hashes in accordance with an embodiment of the present disclosure.[0013] Figure 6 is a block diagram of an example system including a host and a memory device in accordance with an embodiment of the present disclosure.[0014] Figure 7 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.[0015] Figure 8 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.[0016] Figure 9 is a block diagram of an example process to verify a certificate in accordance with an embodiment of the present disclosure.[0017] Figure 10 is a block diagram of an example process to verify a signature in accordance with an embodiment of the present disclosure.[0018] Figure 11 is a block diagram of an example memory device in accordance with an embodiment of the present disclosure.Detailed Description[0019] The present disclosure includes apparatuses, methods, and systems for validating data stored in memory using cryptographic hashes. An embodiment includes a memory, and circuitry configured to divide the memory into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash, validate, during a powering of the memory, data stored in each respective one of a first number of the plurality of segments using the cryptographic hash associated with that respective segment, and validate, after the powering of the memory, data stored in a second number of the plurality of segments, data stored in each respective one of a second number of the plurality of segments using the cryptographic hash associated with that respective segment.[0020] Many threats can affect the data stored in a memory (e.g., in a memory device). For example, faults may occur in the array and/or circuitry of the memory, which can result in errors occurring in the data. As an additional
example, a hacker or other malicious user may attempt to perform activities to make unauthorized changes to the data for malicious purposes. For instance, a malicious user may attempt to alter the data stored in a memory in order to adversely affect (e.g., divert the flow of) a commercial transaction being performed using the memory (e.g., to falsely indicate that payment has been made for the service being provided by skipping the code that verifies the payment), a software license check being performed on the memory (e.g., to falsely indicate the software of the memory is properly licensed by skipping the code that verifies the license), or automotive control being performed using the memory (e.g., to skip a check of the genuineness of a part, an environmental check, or a check of a malfunctioning alarm), among other types of hacking activities. Such hacking activities (e.g., attacks) can cause significant financial loss, and/or can present significant safety and/or security issues.[0021] As such, in order to ensure a secure memory system, it is important to validate (e.g., authenticate and/or attest) that the data stored in the memory is genuine (e.g., is the same as originally programmed), and has not been altered by hacking activity or other unauthorized and/or unintended changes. Such data validation may be performed, for instance, during the powering of the memory (e.g., during the powering on and/or powering up of the memory, which may be referred to herein as“booting”). However, the performance of the data validation may increase the amount of time needed to power the memory (e.g., may increase the latency of the boot time), which can adversely affect the user’s experience of the memory system.[0022] Embodiments of the present disclosure, however, can effectively validate data stored in memory, and thereby ensure a secure memory system, during a powering of the memory, while reducing the amount of time needed to power the memory (e.g., decreasing the latency of the memory boot time). For instance, embodiments of the present disclosure can divide the memory into segments, and validate the data stored in only a portion (e.g. less than all) of the segments during the powering (e.g., the booting) of the memory, using different cryptographic hashes associated with each respective one of those segments.The data stored in the remaining segments of the memory can then be validated after the powering of the memory has been completed, using different cryptographic hashes associated with each respective one of those segments.
[0023] As used herein,“a”,“an”, or“a number of’ can refer to one or more of something, and“a plurality of’ can refer to two or more such things.For example, a memory device can refer to one or more memory devices, and a plurality of memory devices can refer to two or more memory devices.Additionally, the designators“R”,“B”,“S”,“N”, and“K”, as used herein, particularly with respect to reference numerals in the drawings, indicates that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure. The number may be the same or different between designations.[0024] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 101 may reference element“01” in Figure 1, and a similar element may be referenced as 201 in Figure 2.[0025] Figure 1 illustrates a diagram of a portion of a memory array 101 having a number of physical blocks in accordance with an embodiment of the present disclosure. Memory array 101 can be, for example, a flash memory array such as a NAND flash memory array. As an additional example, memory array 101 can be a resistance variable memory array such as a PCRAM, RRAM, MMRAM, or spin torque transfer (STT) array, among others. However, embodiments of the present disclosure are not limited to a particular type of memory array. Further, memory array 101 can be a secure memory array, as will be further described herein. Further, although not shown in Figure 1, memory array 101 can be located on a particular semiconductor die along with various peripheral circuitry associated with the operation thereof.[0026] As shown in Figure 1, memory array 101 has a number of physical blocks 107-0 (BLOCK 0), 107-1 (BLOCK 1), . . ., 107-B (BLOCK B) of memory cells. The memory cells can be single level cells and/or multilevel cells such as, for instance, two level cells, triple level cells (TLCs) or quadruple level cells (QLCs). As an example, the number of physical blocks in memory array 101 may be 128 blocks, 512 blocks, or 1,024 blocks, but embodiments are not limited to a particular power of two or to any particular number of physical blocks in memory array 101.
[0027] A number of physical blocks of memory cells (e.g., blocks 107-0,107-1, 107-B) can be included in a plane of memory cells, and a number of planes of memory cells can be included on a die. For instance, in the example shown in Figure 1, each physical block 107-0, 107-1, . . ., 107-B can be part of a single die. That is, the portion of memory array 101 illustrated in Figure 1 can be a die of memory cells.[0028] As shown in Figure 1, each physical block 107-0, 107-1, . . ., 107-B includes a number of physical rows (e.g., 103-0, 103-1, . . ., 103-R) of memory cells coupled to access lines (e.g., word lines). The number of rows (e.g., word lines) in each physical block can be 32, but embodiments are not limited to a particular number of rows 103-0, 103-1, . . ., 103-R per physical block. Further, although not shown in Figure 1, the memory cells can be coupled to columns of sense lines (e.g., data lines and/or digit lines).[0029] As one of ordinary skill in the art will appreciate, each row 103-0,103-1, . . ., 103-R can include a number of pages of memory cells (e.g., physical pages). A physical page refers to a unit of programming and/or sensing (e.g., a number of memory cells that are programmed and/or sensed together as a functional group). In the embodiment shown in Figure 1, each row 103-0, 103- 1, . . ., 103-R comprises one physical page of memory cells. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, each row can comprise multiple physical pages of memory cells (e.g., one or more even pages of memory cells coupled to even-numbered data lines, and one or more odd pages of memory cells coupled to odd numbered data lines). Additionally, for embodiments including multilevel cells, a physical page of memory cells can store multiple pages (e.g., logical pages) of data (e.g., an upper page of data and a lower page of data, with each cell in a physical page storing one or more bits towards an upper page of data and one or more bits towards a lower page of data).[0030] As shown in Figure 1, a page of memory cells can comprise a number of physical sectors 105-0, 105-1, . . ., 105-S (e.g., subsets of memory cells). Each physical sector 105-0, 105-1, . . ., 105-S of cells can store a number of logical sectors of data. Additionally, each logical sector of data can correspond to a portion of a particular page of data. As an example, a first logical sector of data stored in a particular physical sector can correspond to a
logical sector corresponding to a first page of data, and a second logical sector of data stored in the particular physical sector can correspond to a second page of data. Each physical sector 105-0, 105-1, . . ., 105-S, can store system and/or user data, and/or can include overhead data, such as error correction code (ECC) data, logical block address (LBA) data, and metadata.[0031] Logical block addressing is a scheme that can be used by a host for identifying a logical sector of data. For example, each logical sector can correspond to a unique logical block address (LBA). Additionally, an LBA may also correspond (e.g., dynamically map) to a physical address, such as a physical block address (PBA), that may indicate the physical location of that logical sector of data in the memory. A logical sector of data can be a number of bytes of data (e.g., 256 bytes, 512 bytes, 1,024 bytes, or 4,096 bytes). However, embodiments are not limited to these examples.[0032] It is noted that other configurations for the physical blocks 107-0,107-1, . . ., 107-B, rows 103-0, 103-1, . . ., 103-R, sectors 105-0, 105-1, . . ., 105- S, and pages are possible. For example, rows 103-0, 103-1, . . ., 103-R of physical blocks 107-0, 107-1, . . ., 107-B can each store data corresponding to a single logical sector which can include, for example, more or less than 512 bytes of data.[0033] Figure 2 is a block diagram of a computing system 200 including a host 202 and an apparatus in the form of a memory device 206 in accordance with an embodiment of the present disclosure. As used herein, an“apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. Further, in an embodiment, computing system 200 can include a number of memory devices analogous to memory device 206.[0034] In the embodiment illustrated in Figure 2, memory device 206 can include a memory 212 having a memory array 201. Memory array 201 can be analogous to memory array 101 previously described in connection with Figure 1. Although one memory array 201 is illustrated in Figure 2, memory 212 can include any number of memory arrays analogous to memory array 201.[0035] In an embodiment, memory array 201 (e.g., a subset of array 201, or the whole array 201) can be a secure array (e.g., an area of memory 212 to be
kept under control). For example, the data stored in memory array 201 can include sensitive (e.g., non-user) data, such as host firmware and/or code to be executed for sensitive applications. In such an embodiment, one or more non volatile registers can be used to define the secure array. For example, in the embodiment illustrated in Figure 2, circuitry 210 includes a pair of registers 214- 1 and 214-2 that can be used to define the secure array. For instance, register 214-1 can define the address (e.g., the starting LBA of the data) of the secure array, and register 214-2 can define the size (e.g., the ending LBA of the data) of the secure array. An example of such registers, and their use in defining a secure array, will be further described herein (e.g., in connection with Figures 3A-3B).[0036] As illustrated in Figure 2, host 202 can be coupled to the memory device 206 via interface 204. Host 202 and memory device 206 cancommunicate (e.g., send commands and/or data) on interface 204. Host 202 and/or memory device 206 can be, or be part of, a laptop computer, personal computer, digital camera, digital recording and playback device, mobile telephone, PDA, memory card reader, interface hub, or Internet of Things (IoT) enabled device, such as, for instance, an automotive (e.g., vehicular and/or transportation infrastructure) IoT enabled device or a medical (e.g., implantable and/or health monitoring) IoT enabled device, among other host systems, and can include a memory access device (e.g., a processor). One of ordinary skill in the art will appreciate that“a processor” can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc.[0037] Interface 204 can be in the form of a standardized physical interface. For example, when memory device 206 is used for information storage in computing system 200, interface 204 can be a serial advanced technology attachment (SATA) physical interface, a peripheral component interconnect express (PCIe) physical interface, a universal serial bus (USB) physical interface, or a small computer system interface (SCSI), among other physical connectors and/or interfaces. In general, however, interface 204 can provide an interface for passing control, address, information (e.g., data), and other signals between memory device 206 and a host (e.g., host 202) having compatible receptors for interface 204.[0038] Memory device 206 includes controller 208 to communicate with host 202 and with memory 212 (e.g., memory array 201). For instance,
controller 208 can send commands to perform operations on memory array 201, including operations to sense (e.g., read), program (e.g., write), move, and/or erase data, among other operations.[0039] Controller 208 can be included on the same physical device (e.g., the same die) as memory 212. Alternatively, controller 208 can be included on a separate physical device that is communicatively coupled to the physical device that includes memory 212. In an embodiment, components of controller 208 can be spread across multiple physical devices (e.g., some components on the same die as the memory, and some components on a different die, module, or board) as a distributed controller.[0040] Host 202 can include a host controller (not shown Figure 2) to communicate with memory device 206. The host controller can send commands to memory device 206 via interface 204. The host controller can communicate with memory device 206 and/or the controller 208 on the memory device 206 to read, write, and/or erase data, among other operations. Further, in an embodiment, host 202 can be an IoT enabled device, as previously described herein, having IoT communication capabilities.[0041] Controller 208 on memory device 206 and/or the host controller on host 202 can include control circuitry and/or logic (e.g., hardware and firmware). In an embodiment, controller 208 on memory device 206 and/or the host controller on host 202 can be an application specific integrated circuit (ASIC) coupled to a printed circuit board including a physical interface. Also, memory device 206 and/or host 202 can include a buffer of volatile and/or non volatile memory and a number of registers.[0042] For example, as shown in Figure 2, memory device can include circuitry 210. In the embodiment illustrated in Figure 2, circuitry 210 is included in controller 208. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, circuitry 210 may be included in (e.g., on the same die as) memory 212 (e.g., instead of in controller 208).Circuitry 210 can comprise, for instance, hardware, firmware, and/or software, and can be used to validate (e.g., authenticate and/or attest) data stored in memory 212 (e.g., in memory array 201).[0043] For example, circuitry 210 can divide the data stored in memory array 201 into a plurality of segments, and associate a different cryptographic
hash with each respective segment. For instance, circuitry 210 can generate (e.g., calculate) a different cryptographic hash for each respective segment, using authenticated (e.g., secured) and antireplay protected commands received from host 202 (e.g., so that only memory device 206 knows these cryptographic hashes, and only memory device 206 is capable of generating and updating them). The cryptographic hash generated for each respective segment may be referred to herein as a golden hash for that segment, and can comprise, for instance, a SHA-256 cryptographic hash. These golden hashes may be stored in a non-volatile register 216-3 included in circuitry 210 that is inaccessible to a user of memory device 206 and/or host 202 (e.g., in a“hidden” region of memory device 206), and may be used during the process of validating the data stored in memory array 201, as will be further described herein.[0044] Further, as shown in Figure 2, circuitry 210 can include one or more non-volatile registers (e.g., registers 216-1 and 216-2) that can be used to define the plurality of segments. For instance, register 216-1 can define the address (e.g., the starting LBA of the data) of each respective one of the plurality of segments, and register 216-2 can define the size (e.g., the ending LBA of the data) of each respective one of the plurality of segments. The plurality of segments can each be the same size (e.g., store the same amount of data), or can be different sizes (e.g., store different amounts of data). An example of registers 216-1, 216-2, and 216-3 will be further described herein (e.g., in connection with Figure 4).[0045] During a powering (e.g., a powering on and/or powering up) of memory device 206, circuitry 210 can validate (e.g., determine whether to validate) the data stored in each respective one of a first number of the plurality of segments using the golden hash associated with that respective segment. As used herein, validating the data can include, and/or refer to, authenticating and/or attesting that the data is genuine (e.g., is the same as originally programmed), and has not been altered by hacking activity or other unauthorized and/or unintended changes.[0046] For example, circuitry 210 can generate (e.g., calculate) a different run-time cryptographic hash for the data stored in each respective one of the first number of segments, and compare the run-time cryptographic hash generated for the data stored in each respective segment to the golden hash for
that respective segment stored in register 216-3. Upon the comparison indicating the run-time cryptographic hash generated for the data stored in a respective segment matches the golden hash for that respective segment, it can be determined that the data stored in that respective segment has not been altered, and therefore the data stored in that respective segment can be validated (e.g., can be determined to be valid). As such, the data stored in each respective segment can be validated independently of the data stored in the other segments.[0047] The first number of the plurality of segments can comprise only a portion (e.g., less than all) of the plurality of segments into which the data stored in memory array 201 is divided. As an example, the first number of the plurality of segments can comprise a particular quantity of segments defined by host 202 (e.g., by a user of host 202). This quantity can be stored in a non-volatile register 218-1 included in circuitry 210. As an additional example, the first number of the plurality of segments can comprise the quantity of segments that can be validated by circuitry 210 in a particular amount of time. This amount of time can correspond to the amount of time for which the powering of memory device 206 lasts, which can be automatically determined by memory device 206 (e.g., by circuitry 210) and stored in a non-volatile register 218-2 included in circuitry 210.[0048] If the comparison, however, indicates the run-time cryptographic hash generated for the data stored in a respective segment does not match the golden hash for that respective segment, this may indicate that the data stored in that respective segment has been changed (e.g., due to a hacker or a fault in the memory), and therefore the data stored in that respective segment may not be valid (e.g., may be determined to not be valid). In such an instance, circuitry 210 can remediate (e.g., attempt to remediate) the data stored in that segment. Remediating the data stored in the segment can include, for instance, determining whether remediation of the data is allowed, and, if remediation is allowed, recovering (e.g., restoring) the data from memory 212 (e.g., from a remediation block included in the memory, such as remediation block 1117 further described in connection with Figure 11).[0049] As shown in Figure 2, circuitry 210 can include additional registers 216-4, 216-5, 216-6, 216-7, and 216-8, which can be used by circuitry 210 during the validation and remediation processes. Register 216-4 can be a
volatile register that can provide an indication of the status of the validation of the data stored in each respective one of the plurality of segments (e.g., an indication of whether the validation of the data has been done), and register 216- 5 can be a volatile register that can provide an indication of the result of the validation of the data stored in each respective segment (e.g., an indication of whether the data has been determined to be valid), which can be used by circuitry 210 to determine whether remediation of the data stored in each respective segment should be attempted.[0050] Register 216-6 can be a non-volatile register that can provide an indication of whether a remediation of the data stored in each respective one of the plurality of segments is allowed, which can be used by circuitry 210 to determine whether remediation of the data stored in a segment is allowed upon a determination that the data is not valid and remediation should be attempted. Register 216-7 can be a non-volatile register that can be used to define the address in memory 212 (e.g., in the remediation block) from which the data stored in each respective one of the plurality of segments can be recovered, which can be used by circuitry 210 to recover the data during a remediation of that data. Register 216-8 can be a volatile register that can provide an indication of the result of the remediation of the data stored in each respective one of the plurality of segments (e.g., whether the data has been remediated) if remediation of that data is allowed. An example of registers 216-4 through 216-8, and their use in the validation and remediation processes, will be further described herein (e.g., in connection with Figure 4).[0051] After the powering (e.g., booting) of memory device 206 is completed, circuitry 210 can validate (e.g., determine whether to validate) the data stored in each respective one of a second number of the plurality of segments using the golden hash associated with that respective segment. The second number of the plurality of segments can comprise the remaining segments (e.g., the segments that are not included in the first number of the plurality of segments) into which the data stored in memory array 201 is divided. However, embodiments of the present disclosure are not limited to a first and second number of segments (e.g., the plurality of segments can comprise more than the first and second number of segments).
[0052] The process of validating the data stored in each respective one of the second number of the plurality of segments can be analogous to the process of validating the data stored in each respective one of the first number of the plurality of segments previously described herein. For example, circuitry 210 can generate a different run-time cryptographic hash for the data stored in each respective one of the second number of segments, and compare the run-time cryptographic hash generated for the data stored in each respective segment to the golden hash for that respective segment stored in register 216-3, in a manner analogous to that previously described herein for the first number of segments. Further, if the data stored in one of the second number of the plurality of segments is determined to not be valid, circuitry 210 can remediate the data stored in that segment, in a manner analogous to that previously described herein for the first number of the plurality of segments. Further, circuitry 210 can use registers 216-4 through 216-8 during the validation and remediation processes for the data stored in the second number of the plurality of segments, in a manner analogous to that previously described herein for the first number of segments.[0053] Further, after the powering of memory device 206 is completed(e.g., while the data stored in the second number of the plurality of segments is being validated), circuitry 210 can send to host 202, via interface 204, the data stored in each respective one of the first number of segments (e.g., host 202 can receive the data from memory device 206) upon the data stored in that respective one of the first number of segments being validated or remediated. For instance, the data stored in each respective one of the first number of segments may not be sent to host 202 if the data stored in that respective segment has been determined to not be valid and has not been remediated; rather, the data stored in each respective segment may only be sent to host 202 if it has been determined to be valid or has been remediated. Circuitry 210 can determine whether the data stored in each respective one of the first number of segments has been determined to valid or has been remediated using registers 216-4 through 216-8, as previously described herein.[0054] After sending the data stored in each respective one of the first number of the plurality of segments, circuitry 210 can send to host 202, via interface 204, the data stored in each respective one of the second number of
segments (e.g., host 202 can receive the data from memory device 206) upon the data stored in that respective one of the second number of segments being validated or remediated. For instance, the data stored in each respective one of the second number of segments may not be sent to host 202 if the data stored in that respective segment has been determined to not be valid and has not been remediated; rather, the data stored in each respective segment may only be sent to host 202 if it has been determined to be valid or has been remediated.Circuitry 210 can determine whether the data stored in each respective one of the second number of segments has been determined to valid or has been remediated using registers 216-4 through 216-8, as previously described herein.[0055] The embodiment illustrated in Figure 2 can include additional circuitry, logic, and/or components not illustrated so as not to obscure embodiments of the present disclosure. For example, memory device 206 can include address circuitry to latch address signals provided over I/O connectors through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder, to access memory array 201. Further, memory device 206 can include a main memory, such as, for instance, a DRAM or SDRAM, that is separate from and/or in addition to memory array 201. An example further illustrating additional circuitry, logic, and/or components of memory device 206 will be further described herein (e.g., in connection with Figure 11).[0056] Figure 3A illustrates an example of registers 314-1 and 314-2 used to define a secure memory array in accordance with an embodiment of the present disclosure, and Figure 3B illustrates a diagram of a portion of a memory array 301 that includes a secure memory array defined using registers 314-1 and 314-2 in accordance with an embodiment of the present disclosure. Registers 314-1 and 314-2 can be, for instance, registers 214-1 and 214-2, respectively, previously described in connection with Figure 2, and secure memory array 301 can be, for instance, memory array 201 previously described in connection with Figure 2. For instance, as shown in Figure 3B, secure memory array 301 can include a number of physical blocks 307-0, 307-1, . . ., 307-B of memory cells, each including a number of physical rows 303-0, 303-1, . . ., 303-R having a number of sectors of memory cells, in a manner analogous to memory array 101 previously described in connection with Figure 1.
[0057] As shown in Figure 3A, register 314-1 can define addresses of the secure array (e.g., the addresses of different portions of the secure array), and register 314-2 can define sizes of the secure array (e.g., the sizes of the different portions of the secure array). The addresses of the secure array defined by register 314-1 can correspond to, for instance, starting points (e.g., starting LB As) of the secure array (e.g., the starting points of the different portions of the secure array), and the sizes of the secure array defined by register 314-2 can correspond to, for instance, ending points (e.g., ending LB As) of the secure array (e.g., the ending points of the different portions of the secure array).[0058] For example, as shown in Figure 3A, registers 314-1 and 314-2 can define N pairs of values, with each respective pair comprising an address value (e.g., addr) defined by register 314-1 and a size value (e.g., size) defined by register 314-2. For instance, in the example illustrated in Figure 3A, Pairo comprises address value addro and size value sizeo (e.g., Pairo = [addro, sizeo]), Pain comprises address value addri and size value sizei (e.g., Pain = [addri, sizei]), and so on, with Paim comprising address value addm and size value sizeN (e.g., Paim = [add , sizeN]). The address value of a pair can correspond to a starting point (e.g., starting LBA) of a portion of the secure array, and the sum of the address value and the size value of that pair can correspond to the ending point (e.g., ending LBA) of that portion of the secure array. As such, the entire secure array (e.g., the portions that comprise the entire secure array) can be given by: [addro, addro + sizeo] U [addri, addri + sizei] U ... U [addm, addm + sizeN].[0059] The first pair whose size value defined by register 314-2 is zero can stop the definition of the secure array. For instance, in the example illustrated in Figure 3 A, if the size value of Pain is zero, then the secure array would be given by: [addro, addro + sizeo] U [addri, addri + sizei][0060] An example of a secure array defined by registers 314-1 and 314-2 (e.g., with all size values defined by register 314-2 as non-zero) is illustrated in Figure 3B. For instance, as shown in Figure 3B, the address (e.g., LBA) associated with sector 305-0 of memory array 301 is addro, the address associated with sector 305-1 of memory array 301 is addro + sizeo, the address associated with sector 305-2 of memory array 301 is addri, the address associated with sector 305-3 of memory array 301 is addri + sizei, the address
associated with sector 305-4 of memory array 301 is addrx. and the address associated with sector 305-5 of memory array 301 is addm + sizeN. As such, the secure array comprises sectors (e.g., the data stored in sectors) 305-0 through 305-1, sectors 305-2 through 305-3, and 305-4 through 305-5. However, the sectors of memory array 301 that are before sector 305-0, and sectors 305-1 through 305-2 of memory array 301, are not part of the secure array (e.g., the secure array comprises a subset of array 301).[0061] Figure 4 illustrates an example of registers 416-1 through 416-8 used to divide data stored in a memory array into a plurality of segments, and validate and remediate the data stored in each respective segment, in accordance with an embodiment of the present disclosure. Registers 416-1 through 416-8 can be, for instance, registers 216-1 through 216-8 previously described in connection with Figure 2, and the memory array can be, for instance, memory array 201 previously described in connection with Figure 2.[0062] As shown in the example illustrated in Figure 4, and previously described herein, the data stored in the memory array can be divided into a plurality of (e.g., N) segments, five of which (e.g., segments 420-1, 420-2, 420- 3, 420-4, and 420-5) are illustrated in Figure 4. Further, as previously described herein (e.g., in connection with Figure 2), the plurality of segments can comprise a first number of (e.g., K) segments whose data can be validated and/or remediated during a powering of the memory, and a second number of (e.g., N- K) segments whose data can be validated and/or remediated after the powering of the memory. In the example illustrated in Figure 4, segments 420-1, 420-2, and 420-3 are included in the first number of the plurality of segments, and segments 420-4 and 420-5 are included in the second number of the plurality of segments.[0063] As shown in Figure 4, register 416-1 can define the address (e.g., address value) of each respective one of the plurality of segments, and register 416-2 can define the size (e.g., size value) of each respective one of the plurality of segments. The address of each respective segment defined by register 416-1 can correspond to, for instance, the starting point (e.g., starting LB A) of that segment, and the size of each respective segment defined by register 416-2 can correspond to, for instance, the ending point (e.g., ending LB A) of that segment. For instance, in the example illustrated in Figure 4, the address of segment 420-1
is defined by register 416-1 as Ox aabbcc, and the size of segment 420-1 is defined by register 416-2 as 0x10000. Similarly, the addresses of segments 420- 2, 420-3, 420-4, and 420-5 are defined by register 416-1 as Ox aal l22, Ox 123444, Ox ddeeff, and Ox aa55bb, respectively, and the sizes of segments 420-2, 420-3, 420-4, and 420-5 are defined by register 416-2 as 0x10000, 0x20000, 0x10000, and 0x20000, respectively, as illustrated in Figure 4.[0064] As previously described herein (e.g., in connection with Figure2), each respective one of the plurality of segments of data can have a different cryptographic hash (e.g., golden hash) associated therewith for use in validating the data stored in that segment. For instance, in the example illustrated in Figure 4, segment 420-1 has golden hash #1 associated therewith, segment 420-2 has golden hash #2 associated therewith, segment 420-3 has golden hash #K associated therewith, segment 420-4 has golden hash #K+1 associated therewith, and segment 420-5 has golden hash N associated therewith. As shown in Figure 4, the golden hash associated with each respective segment can be stored in register 416-3.[0065] As shown in Figure 4, register 416-4 can provide an indication of(e.g., a value indicating) the status of the validation of the data stored in each respective one of the plurality of segments. In the example illustrated in Figure 4, the data stored in the first number of the plurality of segments has been validated, but the data stored in the second number of the plurality of segments has not yet been validated (e.g., the powering of the memory is complete, but the validation of the data stored in the second number of segments has not yet been initiated). As such, register 416-4 can provide an indication that the validation of the data stored in segment 420-1 is done, an indication that the validation of the data stored in segment 420-2 is done, an indication that the validation of the data stored in segment 420-3 is done, an indication that the validation of the data stored in segment 420-4 is not done, and an indication that the validation of the data stored in segment 420-5 is not done, as illustrated in Figure 4.[0066] As shown in Figure 4, if the validation of the data stored in a segment is done (e.g., as indicated by the value for that segment provided by register 416-4), register 416-5 can provide an indication of (e.g., a value indicating) the result of the validation of the data stored in that segment. In the example illustrated in Figure 4, register 416-5 is providing an indication that the
data stored in segment 420-1 has been determined to be valid, an indication that the data stored in segment 420-2 has been determined to not be valid, and an indication that the data stored in segment 420-3 has been determined to not be valid, as illustrated in Figure 4. Further, because the data stored in segments 420-4 and 420-5 has not yet been validated (e.g., as indicated by the value for those segments provided by register 416-4), register 416-5 is not providing (e.g., does not include) a value for segment 420-4 or 420-5, as illustrated in Figure 4.[0067] As previously described herein (e.g., in connection with Figure2), if the result of the validation of the data stored in a segment is that the data has been determined not to be valid (e.g., as indicated by the value for that segment provided by register 416-5), the data stored in that segment can be remediated. As shown in Figure 4, register 416-6 can provide an indication of (e.g., a value indicating) whether a remediation of the data stored in each respective one of the plurality of segments is allowed. For instance, in the example illustrated in Figure 4, register 416-6 is providing an indication that a remediation of the data stored in segment 420-1 is allowed, an indication that a remediation of the data stored in segment 420-2 is allowed, an indication that a remediation of the data stored in segment 420-3 is not allowed, an indication that a remediation of the data stored in segment 420-4 is not allowed, and an indication that a remediation of the data stored in segment 420-5 is allowed.[0068] As shown in Figure 4, if remediation of the data stored in a segment is allowed (e.g., as indicated by the value for that segment provided by register 416-6), register 416-7 can define the address (e.g., address value) from which the data stored in that segment can be recovered during the remediation. The address defined by register 416-7 can correspond to, for instance, the location in the remediation block of the memory from which the data can be recovered. For instance, in the example illustrated in Figure 4, the address from which the data stored in segment 420-1 can be recovered is defined by register 416-7 as addrl, the address from which the data stored in segment 420-2 can be recovered is defined by register 416-7 as addr2, and the address from which the data stored in segment 420-5 can be recovered is defined by register 416-7 as addr3. Further, because remediation of the data stored in segments 420-3 and 420-4 is not allowed (e.g., as indicated by the value for those segments provided
by register 416-6), register 416-7 is not defining (e.g., does not include) an address value for segment 420-3 or 420-4, as illustrated in Figure 4.[0069] As shown in Figure 4, if remediation of the data stored in a segment is allowed (e.g., as indicated by the value for that segment provided by register 416-6), register 416-8 can provide an indication of (e.g., a value indicating) the result of the remediation. In the example illustrated in Figure 4, register 416-8 is providing an indication that the data stored in segment 420-1 has not been remediated (e.g., because the data stored in segment 420-1 was determined to be valid, and therefore no remediation of that data would be needed), an indication that the data stored in segment 420-2 has been remediated (e.g., because the data stored in segment 420-2 was determined to not be valid, but is allowed to be remediated), and an indication that the data stored in segment 420-5 has not been remediated (e.g., because the data stored in segment 420-5 has not yet been validated). Further, because remediation of the data stored in segments 420-3 and 420-4 is not allowed (e.g., as indicated by the value for those segments provided by register 416-6), register 416-7 is not providing (e.g., does not include) a value for segment 420-3 or 420-4, as illustrated in Figure 4.[0070] Figure 5 illustrates a method 525 of validating (e.g., determining whether to validate) a segment of data stored in memory using cryptographic hashes in accordance with an embodiment of the present disclosure. The memory can be, for instance, memory array 201 previously described in connection with Figure 2, and can be divided into a plurality of segments, as previously described herein. Method 525 can be performed by, for instance, memory device 206 (e.g., circuitry 210) previously described in connection with Figure 2.[0071] At block 527, method 525 includes retrieving the data stored in one of the plurality of memory segments from the memory. The data stored in the segment can be retrieved using the address and size of that segment defined in registers 216-1 and 216-2, as previously described herein (e.g., in connection with Figure 2).[0072] At block 529, method 525 includes generating a run-time cryptographic hash for the data stored in the memory segment, and at block 531, method 525 includes retrieving the golden hash associated with the memory
segment. The golden hash can be retrieved from register 216-3, as previously described herein (e.g., in connection with Figure 2).[0073] At block 533, method 525 includes comparing the run-time cryptographic hash to the golden hash, and at block 535, method 525 includes determining whether run-time cryptographic hash matches the golden hash. If it is determined the run-time cryptographic hash matches the golden hash, the data stored in the memory segment is validated (e.g., determined to be valid) at block 537. If it is determined the run-time cryptographic hash does not match the golden hash, method 525 proceeds to block 539.[0074] At block 539, method 525 includes determining whether remediation of the data stored in the memory segment is allowed. The determination of whether remediation of the data stored in the memory segment is allowed can be made using register 216-6, as previously described herein (e.g., in connection with Figure 2).[0075] If it is determined that remediation of the data stored in the memory segment is allowed, the data is remediated at block 541. The remediation of the data can include recovering the data from the memory using register 216-7, as previously described herein (e.g., in connection with Figure 2). If it is determined that remediation of the data stored in the memory segment is not allowed, the data stored in the memory segment is not validated (e.g., determined to not be valid) at block 543.[0076] Figure 6 is a block diagram of an example system including a host 602 and a memory device 606 in accordance with an embodiment of the present disclosure. Host 602 and memory device 606 can be, for example, host 202 and memory device 206, respectively, previously described in connection with Figure 2.[0077] A computing device can boot in stages using layers, with each layer authenticating and loading a subsequent layer and providing increasingly sophisticated runtime services at each layer. A layer can be served by a prior layer and serve a subsequent layer, thereby creating an interconnected web of the layers that builds upon lower layers and serves higher order layers. As is illustrated in Figure 6, Layer 0 (“Lo”) 651 and Layer 1 (“Li”) 653 are within the host. Layer 0 651 can provide a Firmware Derivative Secret (FDS) key 652 to Layer 1 653. The FDS key 652 can describe the identity of code of Layer 1 653
and other security relevant data. In an example, a particular protocol (such as robust internet of things (RIOT) core protocol) can use the FDS 652 to validate code of Layer 1 653 that it loads. In an example, the particular protocol can include a device identification composition engine (DICE) and/or the RIOT core protocol. As an example, an FDS can include Layer 1 firmware image itself, a manifest that cryptographically identifies authorized Layer 1 firmware, a firmware version number of signed firmware in the context of a secure boot implementation, and/or security-critical configuration settings for the device. A device secret 658 can be used to create the FDS 652 and be stored in memory of the host 602.[0078] The host can transmit data, as illustrated by arrow 654, to the memory device 606. The transmitted data can include an external identification that is public, a certificate (e.g., an external identification certificate), and/or an external public key. Layer 2 (“L2”) 655 of the memory device 606 can receive the transmitted data, and execute the data in operations of the operating system (“OS”) 657 and on a first application 659-1 and a second application 659-2.[0079] In an example operation, the host 602 can read the device secret658, hash an identity of Layer 1 653, and perform a calculation including:KLI = KDF [Fs(s), Hash (“immutable information”)] where KLI is an external public key, KDF (e.g., KDF defined in the National Institute of Standards and Technology (NIST) Special Publication 800-108) is a key derivation function (e.g., HMAC-SHA256), and Fs(s) is the device secret 658. FDS 652 can be determined by performing:FDS = HMAC-SHA256 [ Fs(s), SHA256(“immutable information”)] Likewise, the memory device 606 can transmit data, as illustrated by arrow 656, to the host 602.[0080] Figure 7 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure. Figure 7 is an example of a determination of the parameters including the external public identification, the external certificate, and the external public key that are then sent, indicated by arrow 754, to Layer 2 (e.g., Layer 2 655) of a memory device (e.g., 606 in Figure 6). Layer 0 (“Lo”) 751 in Figure 7 corresponds to Layer 0 651 in Figure 6 and likewise FDS 752 corresponds to
FDS 652, Layer 1 753 corresponds to Layer 1 653, and arrows 754 and 756 correspond to arrows 654 and 656, respectively.[0081] The FDS 752 from Layer 0 751 is sent to Layer 1 753 and used by an asymmetric ID generator 761 to generate a public identification (“IDik public”) 765 and a private identification 767. In the abbreviated“IDik public,” the “lk” indicates Layer k (in this example Layer 1), and the“public” indicates that the identification is openly shared. The public identification 765 is illustrated as shared by the arrow extending to the right and outside of Layer 1 753 of the host. The generated private identification 767 is used as a key input into an encryptor 773. The encryptor 773 can be any processor, computing device, etc. used to encrypt data.[0082] Layer 1 753 of a host can include an asymmetric key generator763. In at least one example, a random number generator (RND) 736 can optionally input a random number into the asymmetric key generator 763. The asymmetric key generator 763 can generate a public key (“Kuc public”) 769 (referred to as an external public key) and a private key (“KLK private”) 771 (referred to as an external private key) associated with a host such as host 602 in Figure 6. The external public key 769 can be an input (as“data”) into the encryptor 773. The encryptor 773 can generate a result K’ 775 using the inputs of the external private identification 767 and the external public key 769. The external private key 771 and the result K’ 775 can be input into an additional encryptor 777, resulting in output K” 779. The output K” 779 is the external certificate (“IDLI certificate”) 781 transmitted to the Layer 2 (655 of Figure 6). The external certificate 781 can provide an ability to verify and/or authenticate an origin of data sent from a device. As an example, data sent from the host can be associated with an identity of the host by verifying the certificate, as will be described further in association with Figure 9. Further, the external public key (“KLI public key”) 783 can be transmitted to Layer 2. Therefore, the public identification 765, the certificate 781, and the external public key 783 of a host can be transmitted to Layer 2 of a memory device.[0083] Figure 8 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure. Figure 8 illustrates a Layer 2 855 of a memory device (e.g., memory device 606 in Figure 6) generating a device identification (“IDL2 public”) 866, a
device certificate (“IDL2 Certificate”) 882, and a device public key (“KL2 public key”) 884.[0084] The external public key (“KLI public key”) 883 transmitted fromLayer 1 of the host to Layer 2 855 of a memory device, as described in Figure 7, is used by an asymmetric ID generator 862 of the memory device to generate a public identification (“IDik public”) 866 and a private identification 868 of the memory device. In the abbreviated“IDik public,” the“lk” indicates Layer k (in this example Layer 2), and the“public” indicates that the identification is openly shared. The public identification 866 is illustrated as shared by the arrow extending to the right and outside Layer 2 855. The generated private identification 868 is used as a key input into an encryptor 874.[0085] As shown in Figure 8, the external certificate 881 and public identification 865, along with the external public key 883, are used by a certificate verifier 899. The certificate verifier 899 can verify the external certificate 881 received from a host, and determine, in response to the external certificate 881 being verified or not being verified, whether to accept or discard data received from the host. Further details of verifying the external certificate 881 are further described herein (e.g., in connection with Figure 9).[0086] Layer 2 855 of the memory device can include an asymmetric key generator 864. In at least one example, a random number generator (RND) 838 can optionally input a random number into the asymmetric key generator 864. The asymmetric key generator 864 can generate a public key (“Kuc public”) 870 (referred to as a device public key) and a private key (“KLK private”) 872 (referred to as a device private key) associated with a memory device such as memory device 606 in Figure 6. The device public key 870 can be an input (as“data”) into the encryptor 874. The encryptor 874 can generate a result K’ 876 using the inputs of the device private identification 868 and the device public key 870. The device private key 872 and the result K’ 876 can be input into an additional encryptor 878, resulting in output K” 880. The output K” 880 is the device certificate (“IDL2 certificate”) 882 transmitted back to the Layer 1 (653 of Figure 6). The device certificate 882 can provide an ability to verify and/or authenticate an origin of data sent from a device. As an example, data sent from the memory device can be associated with an identity of the memory device by verifying the certificate, as will be described further in association with Figure 9. Further, the
device public key (“KL2 public key”) 884 can be transmitted to Layer 1. Therefore, the public identification 866, the certificate 882, and the device public key 884 of the memory device can be transmitted to Layer 1 of a host.[0087] In an example, in response to a host receiving a public key from a memory device, the host can encrypt data to be sent to the memory device using the device public key. Vice versa, the memory device can encrypt data to be sent to the host using the external public key. In response to the memory device receiving data encrypted using the device public key, the memory device can decrypt the data using its own device private key. Likewise, in response to the host receiving data encrypted using the external public key, the host can decrypt the data using its own external private key. As the device private key is not shared with another device outside the memory device and the external private key is not shared with another device outside the host, the data sent to the memory device and the host remains secure.[0088] Figure 9 is a block diagram of an example process to verify a certificate in accordance with an embodiment of the present disclosure. In the illustrated example of Figure 9, a public key 983, a certificate 981, and a public identification 965 is provided from a host (e.g., from Layer 1 653 of host 602 in Figure 6). The data of the certificate 981 and the external public key 983 can be used as inputs into a decryptor 985. The decryptor 985 can be any processor, computing device, etc used to decrypt data. The result of the decryption of the certificate 981 and the external public key 983 can be used as an input into a secondary decryptor 987 along with the public identification, result in an output. The external public key 983 and the output from the decryptor 987 can indicate, as illustrated at 989, whether the certificate is verified, resulting in ayes or no 991 as an output. In response to the certificate being verified, data received from the device being verified can be accepted, decrypted, and processed. In response to the certificate not being verified, data received from the device being verified can be discarded, removed, and/or ignored. In this way, nefarious devices sending nefarious data can be detected and avoided. As an example, a hacker sending data to be processed can be identified and the hacking data not processed.[0089] Figure 10 is a block diagram of an example process to verify a signature in accordance with an embodiment of the present disclosure. In the
instance where a device is sending data that may be verified in order to avoid subsequent repudiation, a signature can be generated and sent with data. As an example, a first device may make a request of a second device and once the second device performs the request, the first device may indicate that the first device never made such a request. An anti-repudiation approach, such as using a signature, can avoid repudiation by the first device and insure that the second device can perform the requested task without subsequent difficulty.[0090] A memory device 1006 (such as memory device 206 in Figure 2) can send data 1090 to a host (such as host 202 in Figure 2). The memory device 1006 can generate, at 1094, a signature 1096 using a device private key 1071. The signature 1096 can be transmitted to the host 1002. The host 1002 can verify, at 1098, the signature using data 1092 and the external public key 1069 previously received. In this way, the signature is generated using a private key and verified using a public key. In this way, the private key used to generate a unique signature can remain private to the device sending the signature while allowing the receiving device to be able to decrypt the signature using the public key of the sending device for verification. This is in contrast toencryption/decryption of the data, which is encrypted by the sending device using the public key of the receiving device and decrypted by the receiving device using the private key of the receiver. In at least one example, the device can verify the digital signature by using an internal cryptography process (e.g., Elliptical Curve Digital signature (ECDSA) or a similar process.[0091] Figure 11 is a block diagram of an example memory device 1106 in accordance with an embodiment of the present disclosure. Memory device 1106 can be, for example, memory device 206 previously described in connection with Figure 2.[0092] As shown in Figure 11, memory device 1106 can include a number of memory arrays 1101-1 through 1101-7. Memory arrays 1101-1 through 1101-7 can be analogous to memory array 101 previously described in connection with Figure 1. Further, in the example illustrated in Figure 10, memory array 1101-3 is a secure array, subset 1111 of memory array 1101-6 comprises a secure array, and subsets 1113 and 1115 of memory array 1101-7 comprise a secure array. Subsets 1111, 1113, and 1115 can each include, for instance, 4 kilobytes of data. However, embodiments of the present disclosure
are not limited to a particular number or arrangement of memory arrays or secure arrays.[0093] As shown in Figure 11, memory device 1106 can include a remediation (e.g., recovery) block 1117. Remediation block 1117 can be used as a source of data in case of errors (e.g., mismatches) that may occur during operation of memory device 1106 and/or if data stored in arrays 1101-1 through 1101-7 has been determined to not be valid, as previously described herein. Remediation block 1117 may be outside of the area of memory device 1106 that is addressable by a host.[0094] As shown in Figure 11, memory device 1106 can include a serial peripheral interface (SPI) 1104 and a controller 1108. Memory device 1106 can use SPI 1104 and controller 1108 to communicate with a host and memory arrays 1101-1 through 1101-7, as previously described herein (e.g., in connection with Figure 2).[0095] As shown in Figure 11, memory device 1106 can include a secure register 1119 for managing the security of memory device 1106. For example, secure register 1119 can configure, and communicate externally, to an application controller. Further, secure register 1119 may be modifiable by an authentication command.[0096] As shown in Figure 11, memory device 1106 can include keys1121. For instance, memory device 1106 can include eight different slots to store keys such as root keys, DICE-RIOT keys, and/or other external session keys.[0097] As shown in Figure 11, memory device 1106 can include an electronically erasable programmable read-only memory (EEPROM) 1123. EEPROM 1123 can provide a secure non-volatile area available for a host, in which individual bytes of data can be erased and programmed.[0098] As shown in Figure 11, memory device 1006 can include counters (e.g., monotonic counters) 1124. Counters 1124 can be used as an anti replay mechanism (e.g., freshness generator) for commands (e.g., to sign a command set or sequence) received from and/or sent to a host. For instance, memory device 1106 can include six different monotonic counters, two of which may be used by memory device 1106 for the authenticated commands, and four of which may be used by the host.
[0099] As shown in Figure 11, memory device 1106 can include anSHA-256 cryptographic hash function 1126, and/or an HMAC-SHA256 cryptographic hash function 1128. SHA-256 and/or HMAC-SHA256 cryptographic hash functions 1126 and 1128 can be used by memory device 1106 to generate cryptographic hashes, such as, for instance, run-time cryptographic hashes and/or golden hashes used to validate the data stored in memory arrays 1101-1 through 1101-7, as previously described herein. Further, memory device 1106 can support L0 and LI of DICE-RIOT 1130.[00100] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[00101] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
PROBLEM TO BE SOLVED: To enforce application level restrictions on local and remote content rendered on a device.SOLUTION: A method for enforcing application level restrictions on local and remote content comprises: receiving a permissions list 120 associated with the content; receiving a content descriptor 120 that identifies the content; and receiving a modification detection indicator 120 created by an authority 108. The modification detection indicator binds the permissions list and the content descriptor. The method further comprises retrieving the content identified by the content descriptor 106 and rendering the content on a content viewer 116, and the content is restricted based on the permissions list 120. |
A method for use at a device to enforce restrictions on content rendered by the device, the method comprising: receiving an authorization list associated with the content; receiving a content descriptor identifying the content Receiving a change detection indicator generated by an authority, wherein the change detection indicator associates the authorization list with the content descriptor; retrieving the content identified by the content descriptor; And rendering the content on the device, where the content is restricted based on the authorization list.2. The method of claim 1, wherein the retrieving step comprises retrieving the content from a data network at a location identified by the content descriptor.2. The method of claim 1, wherein the content descriptor includes the content, and the retrieving step comprises retrieving the content from the content descriptor.2. The method of claim 1, wherein receiving the authorization list comprises receiving the authorization list from the authority.2. The method of claim 1, wherein receiving the content descriptor comprises receiving the content descriptor from the authority.The method of claim 1, wherein receiving the authorization list comprises receiving the authorization list from a content provider.The method of claim 1, wherein the change detection indicator is a digital signature.The method of claim 1, wherein the device is a wireless device.A device for rendering content, the device comprising: an authorization list, a content descriptor, and receiving logic operable to obtain a change detection indicator generated by an authority; verifying the change detection indicator; Rendering logic that operates to obtain the content identified by the content descriptor and render the content on the device, wherein the content is restricted based on the authorization list.10. The device of claim 9, wherein the device is a wireless device.10. The device of claim 9, wherein the change detection indicator is a digital signature.10. The device of claim 9, wherein the content descriptor includes the content, and the rendering logic operates to obtain the content from the content descriptor.A device that operates to enforce restrictions on downloadable content rendered on the device, the device comprising: means for receiving an authorization list associated with the content; a content descriptor identifying the content Means for receiving a change detection indicator generated by an authority, wherein the change detection indicator associates the authorization list with the content descriptor; the identified by the content descriptor Means for retrieving content; and means for rendering the content on the device, wherein the content is restricted based on the authorization list.14. The device of claim 13, wherein the means for retrieving comprises means for retrieving the content from a data network at a location identified by the content descriptor.14. The device of claim 13, wherein the content descriptor includes the content, and the means for retrieving comprises means for retrieving the content from the content descriptor.14. The device of claim 13, wherein the means for receiving the authorization list comprises means for receiving the authorization list from the authority.14. The device of claim 13, wherein the means for receiving the content descriptor comprises means for receiving the content descriptor from the authority.14. 
The device of claim 13, wherein the means for receiving the authorization list comprises means for receiving the authorization list from a content provider.14. The device of claim 13, wherein the change detection indicator is a digital signature.14. The device of claim 13, wherein the device is a wireless device.A computer readable medium comprising instructions that enforce restrictions on content rendered by the device when executed by a processor in the wireless device, the computer readable medium comprises: authorizations related to the content An instruction to receive a list; an instruction to receive a content descriptor identifying the content; an instruction to receive a change detection indicator generated by an authority, wherein the change detection indicator includes the authorization list and the content An instruction to retrieve the content identified by the content descriptor; and an instruction to render the content on the device, wherein the content is based on the authorization list Restricted That.23. The computer readable medium of claim 21, wherein the instructions for retrieving comprise instructions for retrieving the content from a data network at a location identified by the content descriptor.23. The computer readable medium of claim 21, wherein the content descriptor includes the content, and the instructions for retrieving comprise instructions for retrieving the content from the content descriptor.22. The computer readable medium of claim 21, wherein the instructions for receiving the authorization list comprise instructions for receiving the authorization list from the authority.23. The computer readable medium of claim 21, wherein the instructions for receiving the content descriptor comprise instructions for receiving the content descriptor from the authority.The computer-readable medium of claim 21, wherein the instructions for receiving the authorization list comprise instructions for receiving the authorization list from a content provider.The computer readable medium of claim 21, wherein the change detection indicator is a digital signature.A method for generating a content package that is used to enforce restrictions on content rendered on a device, the method comprises: certifying an authorization list associated with the content; Receiving a content descriptor to describe; and generating a change detection indicator that associates the authorization list with the content descriptor.30. The method of claim 28, wherein authorizing the authorization list comprises generating the authorization list.30. The method of claim 28, wherein receiving the content descriptor comprises receiving the content descriptor that includes the content.29. The method of claim 28, wherein generating the change detection indicator is generating a digital signature.An apparatus for generating a content package that is used to enforce restrictions on content rendered on a device, the apparatus comprises: an authorization that operates to certify an authorization list associated with the content A receiving logic circuit that operates to receive a content descriptor that describes the content; and a generating logic circuit that operates to generate a change detection indicator that associates the authorization list with the content descriptor.33. The apparatus of claim 32, wherein the authorization logic comprises logic that generates the authorization list.33. The apparatus of claim 32, wherein the content descriptor includes the content.33. 
The apparatus of claim 32, wherein the generation logic comprises a logic circuit that generates a digital signature as a detection change indicator.An apparatus for generating a content package used to enforce restrictions on content rendered on a device, the apparatus comprises: means for authorizing an authorization list associated with the content; Means for receiving a content descriptor describing the content; and means for generating a change detection indicator that associates the authorization list with the content descriptor.40. The apparatus of claim 36, wherein the means for authorizing the authorization list comprises means for generating the authorization list.38. The apparatus of claim 36, wherein the content descriptor includes the content.37. The apparatus of claim 36, wherein the means for generating the change detection indicator comprises means for generating a digital signature.A computer readable medium comprising instructions for generating a content package used to enforce restrictions on content rendered on a device when executed by a processor, the computer readable medium comprising: Do: an instruction to receive an authorization list related to the content; an instruction to receive a content descriptor identifying the content; and an instruction to generate a change detection indicator that links the authorization list and the content descriptor .41. The computer readable medium of claim 40, wherein the instructions for receiving the authorization list comprise instructions for generating the authorization list.41. The computer readable medium of claim 40, wherein the content descriptor includes the content.41. The computer readable medium of claim 40, wherein the instructions for generating the change detection indicator comprise instructions for generating a digital signature.41. The computer readable medium of claim 40, further comprising instructions for authorizing the authorization list. |
Method and apparatus for enforcing application level restrictions on local and remote contentThe present invention relates generally to data network operations, and more particularly to a method and apparatus for enforcing application level restrictions for local and remote content rendered on a device.Technological development has resulted in the development and deployment of large data networks. These networks include both public data networks such as the Internet and limited networks such as wireless telecommunication networks. Users of these networks have the ability to access a variety of information and services that are available as network resources.One example of an increasing demand for network resources is in a wireless network environment. In a wireless environment, various wireless devices such as wireless telephones, personal digital assistants (PDAs), and paging devices communicate via a wireless network. A wireless network can also include a network server, which operates to provide various network resources to the wireless device. Moreover, the wireless network can be connected to a public network such as the Internet, so that resources on the public network can be made available to wireless devices on the wireless network.Generally, a wireless device can download and store application programs or multimedia content using a wireless network. The application or content can be downloaded free of charge or purchased and downloaded by the user of the wireless device, who has the right to use the application or content for an unlimited, fixed or usage-based lifetime Get effectively.However, the downloaded content has the potential to damage the information or delete the information, or otherwise damage the continuously running device. For example, content may include scripting, animation, or other instructions that can delete files, generate pop-ups, generate loud sounds, or display inappropriate content. As such, the device user is completely untrustworthy that the downloaded application or content does not access files or other personal information on the user's device or perform other undesirable functions.One technique that has been used to limit downloaded content is to allow device users to set general controls for device operation. For example, a device user can block all scripting from what works on the device. Unfortunately, this technique forces the device user to make decisions about how and when to use these types of controls. In many cases, the device user is not given enough information or does not have enough knowledge to make these decisions. In addition, setting general device controls can prevent device users from accessing the content they want to receive as a result, or expose certain devices to potential dangers. Sex can not be obtained as a result.Therefore, what is needed is a system to enforce application level restrictions on applications or content available to devices over the network. The system allows device users to access a wide range of network resources without worrying about downloading unrestricted content that can damage the device or destroy valuable device information It should be possible. The system should operate without requiring the device user to make a determination as to the type of restriction being requested, or without knowing which content requires a particular restriction. 
As a result, device users can be confident that the content downloaded by the device users will not damage or destroy their devices or personal information stored on those devices.In one or more embodiments, a restriction system is provided to enforce application level restrictions on rendered local and remote content on the device. In one embodiment, the restriction system comprises a content descriptor, a permission list, and a modification detection indicator (ie, digital signature) that links the content descriptor and the authorization list. In one embodiment, the content descriptor comprises the actual content data that is about to be rendered on the device, and in another embodiment, the content descriptor will be downloaded and rendered on the device. Identify the location of the application or multimedia content you are trying to use. The authorization list is used by the restriction system to restrict the rendering, display and execution of downloaded applications or content. For example, authorization lists are used to restrict access rights and rights for applications or content, so that systems, features, settings, and information on the wireless device are unregistered by the application or content. Protected against unauthorized access. An authority, such as a device service provider or other entity, approves the authorization list and generates a change detection indicator that associates the authorization list with the content descriptor.In one embodiment, a method is provided for use in a device to enforce restrictions on the content that renders on the device. The method comprises: receiving an authorization list related to the content; receiving a content descriptor identifying the content; and receiving a change detection indicator generated by an authority, wherein the change detection indicator. Associates the authorization list with the content descriptor. The method further comprises retrieving the content identified by the content descriptor; and rendering the content on the device, wherein the content is restricted based on the authorization list. TheIn another embodiment, a device for rendering content is provided. The device comprises receiving logic that operates to obtain a change detection indicator generated by an authorization list, a content descriptor, and an authority. The apparatus further comprises rendering logic that operates to verify the change detection indicator, obtain the content identified by the content descriptor, and render the content on the device, The content is restricted based on the authorization list.In another embodiment, a device is provided for enforcing restrictions on rendered content. The device comprises means for receiving an authorization list related to the content; means for receiving a content descriptor identifying the content; and means for receiving a change detection indicator generated by the authority, The change detection indicator associates the authorization list with the content descriptor. The device further comprises means for retrieving the content identified by the content descriptor; and means for rendering the content on the device, wherein the content is the authorization list Limited based onIn another embodiment, a computer readable medium is provided that comprises instructions that, when executed by a processor in a wireless device, enforces restrictions on content rendered by the device. 
The computer-readable medium includes instructions for receiving an authorization list associated with the content; instructions for receiving a content descriptor identifying the content; and instructions for receiving a change detection indicator generated by the authority. Wherein the change detection indicator associates the authorization list with the content descriptor. The computer-readable medium further comprises instructions for retrieving the content identified by the content descriptor; and instructions for rendering the content on the device, wherein the content is , Based on the authorization list.In another embodiment, a method is provided for generating a content package that is used to enforce restrictions on content rendered on a device. The method comprises receiving an authorization list associated with the content; receiving a content descriptor describing the content; and generating a change detection indicator that associates the authorization list with the content descriptor.In another embodiment, an apparatus is provided for generating a content package that is used to enforce restrictions on content rendered on the device. The apparatus comprises receiving logic that operates to receive an authorization list related to the content; and a content descriptor that describes the content. The apparatus also comprises a generation logic circuit that operates to generate a change detection indicator that links the authorization list and the content descriptor.In another embodiment, an apparatus is provided for generating a content package that is used to enforce restrictions on content rendered on the device. The apparatus includes: means for receiving an authorization list associated with the content; means for receiving a content descriptor describing the content; and generating a change detection indicator that associates the authorization list with the content descriptor. These means are provided.In another embodiment, a computer readable medium is provided that, when executed by a processor, generates instructions for generating a content package that is used to enforce restrictions on the content rendered on the device. It has. The computer-readable medium includes instructions for receiving an authorization list associated with the content; instructions for receiving a content descriptor identifying the content; and a change detection indicator that associates the authorization list with the content descriptor. Instructions for generating.Other aspects, advantages, and features of the present invention will become apparent after review of the following brief description of the drawings, detailed description of the invention, and the claims.FIG. 1 illustrates a data network comprising one embodiment of a restriction system to enforce application level restrictions for local content and remote content rendered on a wireless device.FIG. 2 shows a functional diagram of one embodiment of a restriction system for use in an authority that operates to generate a content package that is downloaded to a device.FIG. 3 illustrates one embodiment of a content package for use in one or more embodiments of a restriction system.FIG. 4 shows a functional diagram of one embodiment of a restriction system for use in a device that operates to provide application level restrictions to applications and content rendered on the device.FIG. 5 shows a data network comprising one embodiment of a restriction system for use with wireless devices.FIG. 
6 illustrates one embodiment of a method for enforcing application level restrictions on applications and content rendered on a wireless device.FIG. 7 shows one embodiment of an authority suitable for implementing one or more embodiments of a restriction system.FIG. 8 illustrates one embodiment of a device suitable for performing one or more embodiments of a restriction system.The above aspects and attendant advantages of the embodiments described herein will become more readily apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. Let's go.The detailed description below describes one or more embodiments of a restriction system that includes application level restrictions for rendered local content and remote content on the device. Including methods and apparatus for protection. In one embodiment, the restriction system comprises a content viewer on the device, allowing the device to access various network resources in an efficient and economical manner. The content viewer also enforces restrictions on downloaded content to prevent unauthorized operation of the device system or access to specific device information. The device can be any type of wired or wireless device, including a computer, wireless phone, pager, PDA, email device, tablet computer, or another type of wired or wireless device. Including, but not limited to.In one or more embodiments, the content viewer runs on a device that is used to simplify the operation of the device, for example, by providing a generalized call to device specific resources Interacts with the runtime environment. One such runtime environment is the Binary Runtime Environment for Wireless (BREW) software software developed by Qualcomm, Inc., San Diego, California. Platform. In the following description, it is assumed that the restriction system uses a content viewer provided on a wireless device running a runtime environment such as the BREW software platform. However, one or more embodiments of the restriction system may include another type of content viewer and / or to enforce application level restrictions for local and remote content rendered on wired and wireless devices. Or suitable for use with a runtime environment. Moreover, the term “content” is used herein to refer to any type of application, multimedia content, image file, executable, web page, script, document, presentation, message, or device. Used to describe any other type of information that can be rendered.In one embodiment, the restriction system operates to enforce application level restrictions on content rendered on the wireless device by performing one or more of the following steps.1. The wireless device downloads a content package related to the content that is about to be viewed on the device. A content package contains a permission list that describes its associated rights, restrictions, and privileges to be applied to the content. The content package similarly includes a content descriptor, which identifies the content and change detection indicator (ie, a digital signature), which associates the authorization list with the content descriptor.2. When the user tries to view the content, the content viewer application is launched. The content viewer application uses the digital signature to verify the authorization list and the authenticity of the content descriptor.3. The content viewer application uses the content descriptor to retrieve the content and renders the content on the wireless device.4). 
Rendered content is governed by rules imposed on the content viewer application given in the authorization list.In one embodiment, the content descriptor includes actual content data. For example, a content descriptor can be a document, an image file, a web page, or any other type of viewable content.In one embodiment, the content descriptor is a content locator. For example, the content viewer operates as a network browser, and the content descriptor is a content locator such as a universal resource locator (URL). The content viewer navigates to the network address given by the content descriptor and displays the content page retrieved from that location. In one embodiment, the content viewer operates to restrict the operation of the retrieved content page according to the restrictions in the authorization list.Authorization listIn one or more embodiments, the restriction system comprises an authorization list. An authorization list is a list of access rights, privileges, restrictions, or restrictions that apply to applications or content that are executed on or rendered on a device. For example, when content and the associated authorization list are installed on a device, the restriction system operates to allow the rendered content to access only resources authorized in the authorization list.In one embodiment, an application or content developer, system administrator, or other authority, such as a carrier or device manufacturer, can create or create an authorization list for that content. Can be given input. In another embodiment, the device server can be used to create an authorization list based on input from an authority, entity, or application or party involved in creating content.In one embodiment, the content developer submits the content to the authority. The authority examines or evaluates the content and determines what privileges are assigned to the content. The privilege then becomes part of the authorization list. In that way, the authority operates to approve the content and certifies the relevant rights granted in the authorization list.It will be appreciated by those skilled in the art that the device can further restrict or authorize access to device resources beyond the scope of the authorization list. For example, a user may not have rights to resources on a device whose application has been granted authorization for that device by an authorization list. As such, the device can grant additional rights or restrictions and can therefore consent to authorize access to the resource even if authorization has been authorized in the authorization list. Or you can refuse.By associating device resources with an application or content using an authorization list, multiple authorization lists may be generated for use in the same application or content. As a result, different resources may be granted access to the same application or content on different devices.BindingsIn one or more embodiments, the restriction system comprises a change detection indicator, which is used to provide a link between the authorization list and the content descriptor. For example, any technique can be used to generate a change detection indicator that associates an authorization list with a content descriptor. For example, in one embodiment, the change detection indicator is a digital signature generated using an authorization list and a content descriptor. 
However, any type of signature, encoding, or another change detection technique can be used to provide a connection between the authorization list and the content descriptor with which it is associated. Once the digital signature, authorization list, and content locator are transmitted to the wireless device, the device can use the signature to authenticate the authorization list and content descriptor. For the purposes of this description, the entity transmitting the information described above to the device is properly credentialed using any type of known credentialing or authentication technology. As a result, the receiving device can verify that it is receiving information from a trusted source.FIG. 1 shows a data network 100 that comprises one embodiment of a restriction system to enforce application level restrictions on local and remote content rendered on a wireless device. Network 100 includes a wireless device 102 that communicates with a data network 104 via a wireless communication channel 106. Data network 104 includes wired and wireless data networks that are private, public, or both. Network 100 similarly includes an authority 108 that operates to provide services to wireless device 102. For example, the wireless device 102 can be a wireless telephone and the authority 108 can be part of a national telephony network that provides telephony services to the device 102.Similarly, what is communicating with the network 104 is a content server 110. Content server 110 operates to provide content, such as multimedia content, to devices in communication with network 104.In one embodiment, the authority 108 comprises logic circuitry that generates a content package 120, which comprises an authorization list, a content descriptor, and a digital signature. The authorization list describes the rendering and resource access restrictions, which are applied to the application or content identified by the content descriptor. The content descriptor can comprise actual content data, such as an image file or a document. The content descriptor also comprises a content locator that identifies the location of the content. For example, the content descriptor can identify an application or multimedia content located at the content server 110.During operation of the system, the content package 120 is downloaded from the authority 108 to the device 102. The device 102 launches the content viewer 116, which operates to retrieve the content identified by the content descriptor and renders the content to the device 102 while being given in the authorization list Apply restrictions. For example, the content descriptor can be the actual content, which is rendered to the device by the content viewer 116. In another embodiment, the content descriptor is a content locator, which is used by the content viewer 116 to obtain content for rendering on the device 102.Because the authorization list is used to limit the rendered content, the restriction system operates to protect resources on the wireless device 102 from unauthorized access by downloaded content, and that Removes this concern from the device user. This can be done on the wireless device 102 without worrying that downloaded applications or content may interfere with the operation of the device or destroy important information stored on the device. Allows device users to download applications and content.The authorization list and content descriptor can be generated by the authority 108 and can be tied together using a digital signature. 
With respect to the secure transmission of the content package 120, as well as any other data transmission, the authority 108 can incorporate various security techniques, eg, encoding, encryption, credential, authentication signature, or content Other change detection / authentication techniques for transmitting package 120 to device 102. As such, the device can ensure that it has received the content package 120 from a trusted source.In one embodiment, authority 108 and server 110 are separate network servers located at different physical locations. In another embodiment, servers 108 and 110 are located at the same physical location, and in yet another embodiment, servers 108 and 110 are the same server. As such, in one or more embodiments, the restriction system uses substantially any network configuration with various servers operating to provide the functionality of the restriction system described herein. Can be given.FIG. 2 shows a functional diagram of one embodiment of a restriction system for use in an authority 108 that operates to generate a content package that is downloaded to a device. In one embodiment, authority 108 operates to approve an authorization list and generate a content package for downloading to a wireless device, eg, device 102. The authority includes a content receiver 202 that receives the content 212 from the content server 110. The authority also includes an authorization list receiver 204 that receives the proposed authorization list 214 from the content server 110 as well. The approval / generation logic 206 retrieves the content 212 and the received authorization list 214, evaluates the authorization list, and either approves or disapproves it. If no authorization list has been received, the logic circuit 206 operates to generate an authorization list based on the content itself and other parameters. For example, based on the content type or content source, the logic circuit 206 generates an associated authorization list. Once an approved authorization list is obtained, the authorization list and content go to the change detection generator 208. Generator 208 generates a change detection indicator that links the authorization list to the content. For example, the change detection indicator can be a digital signature. Finally, the package generator 210 generates a content package 216 that incorporates the content 214, the authorization list 212, and the change detection indicator.In one embodiment, the content 214 is a content descriptor, which identifies the content and its location. In another embodiment, the content 214 includes actual application or content data. Once the content package is generated, it is made available to the wireless device 102 that downloads and renders it.FIG. 3 illustrates one embodiment of a content package 300 for use in one or more embodiments of a restriction system. For example, the content package 300 shown in FIG. 3 may be the content package 120 shown in FIG. The content package comprises an authorization list 302, an actual content or content descriptor 306, a change detection indicator 308, and additional information 310.The authorization list 302 includes certification settings 304 that indicate which restrictions, certifications, or privileges are authorized to the described application or content. For example, the certification setpoint 304 comprises a series of bits that, when set to a value of “1”, authorize a particular certification for content based on the position of the bit. 
For example, a first bit position can grant or deny access to a selected device file, a second bit can grant or deny access to device hardware such as a modem, and The third bit can grant or deny access to specific device settings, and so on. As such, it is possible to grant or deny access to any type of device feature, function, setting or other information based on the bit setting in the authorization list 302.In one embodiment, the content section 306 comprises a content descriptor that describes the application or content. For example, the content descriptor may comprise actual application or content data downloaded to the device. For example, the content descriptor can include multimedia content such as an MPEG file or a MIDI file, or can include an application such as a simulation gaming program. In another embodiment, the content descriptor may include a content locator (ie, URL) that identifies the application or content and / or its location on the data network accessed by the device. . For example, a content descriptor could be a link (http://www.foo.com/videos/movie.mpg) that would cause “movie.mpg” to be downloaded to the device when accessed by the device. ). In another embodiment, the content descriptor describes a set of content pages or addresses, domain names, or any other type of information set. As such, the content descriptor may be actual application or content data, or a content locator, or a content group. The content locator identifies the location of the application or content, and the content group can be accessed and downloaded by the device.In one embodiment, the change detection indicator 308 may comprise other safety information that links the digital signature and / or authorization list with the content descriptor, so that their qualification can be verified. Virtually any type of change detection technique can be used to generate the change detection indicator 308.The additional information section 310 comprises additional information regarding the application or content associated with the content package. For example, the information section 310 may include file size, version, or other information related to the content package 120 or associated application or content. The additional information section 310 can also include license information associated with the application or content as well. For example, the license information may include the type of license granted, the date of grant, the duration of the license, the cost of the license, or other license information.In one embodiment, the content package is generated by the package generation logic 212 at the authority 108. However, it is possible to generate all or part of the content package at another location as well. For example, application or content developers can generate an authorization list for their application or content. In this case, the authorization list can be transmitted to the wireless device in multiple ways. For example, an application or content developer can transmit an authorization list to the authority 108 where the authorization list is evaluated, certified, and stored until the wireless device requests to download the associated content. . In another example, the authorization list certified by the authority is stored with the application or content at their respective server. When a wireless device attempts to download an application or content, the associated authorization list is downloaded to the wireless device as well. 
Regardless of the original location of the content descriptor and authorization list, the change detection indicator 308 generated by the authority to bind them and allow the device to authenticate them as unchanged originals. used. In addition, the authority operates to generate, evaluate, and / or certify the authorization list so that, regardless of where it is stored, the authorization list can be associated with the associated application or content. Just accept the authorized permission.FIG. 4 shows a functional diagram of one embodiment of a restriction system for use in a device 102 that operates to provide application level restrictions on applications and content rendered on the device. In one embodiment, content viewer 116 receives content package 120 via content receiver 402. Content package 120 is communicated to content viewer 116, which separates and removes the package and verifies the digital signature. If the content is not in the package, the content viewer 116 retrieves the content using the content request logic 404. For example, the content descriptor may be an address where the content is stored. The content request logic circuit 404 operates to transmit a request 408 to extract the content 410 from this address. Once content is available, the content viewer 116 operates to render the content on the device and limit the rendering operation based on the authorization list 402 in the content package 120. In this embodiment, the runtime / OS 406 is not directly involved and only supports the content viewer 116.In another embodiment, the content package is received by the receiver 402 and handed to the runtime / OS 406. The runtime / OS decomposes the package 120 and verifies the digital signature 408 therein. It also extracts the authorization list 402 as well. It then invokes the content viewer 116 that passes it to the content descriptor 406. It similarly restricts the operation of the content viewer 116 based on the authorization list 402.In the third embodiment, the restrictions in the authorization list are imposed in part by the content viewer 116 and partly by the runtime / OS 406.FIG. 5 shows a data network 500 comprising one embodiment of a restriction system for use with wireless devices. The network 500 includes a general purpose data network 502 that includes connections to an authority 504 and a content server 506. Data network 502 can be dedicated or public or both and can be wired or wirelessly connected or both. Authority 504 may be a carrier server, device server, or other authority. Network 502 communicates with wireless device 508 via wireless communication channel 510 as well. For purposes of this description, it is assumed that the wireless device 508 includes a runtime environment such as that provided by the BREW software platform.FIG. 6 illustrates one embodiment of a method 600 for enforcing application level restrictions on applications and content rendered on a wireless device. For example, the method 600 is suitable for use with the network 500 shown in FIG. Accordingly, for further clarity, the following detailed description of method 600 includes further references to network 500.Referring now to FIG. 6, when the content server submits a request to the restriction system to qualify content, the method 600 begins at block 602 so that the wireless device can do it without concern. Can be rendered. For example, the content server 506 presents the request and registers the content in the authority 504 as indicated by the path 5a. 
The request can include a content descriptor containing the actual content data, or a content locator, and can also include an authorization list for the content. In one embodiment, if no authorization list is given, authority 504 generates an authorization list for that content.At block 604, the authority operates to generate / evaluate the authorized authorization list. For example, in one embodiment, authority 504 evaluates the content and / or other information related to the content and generates a qualified authorization list associated with the content. In another embodiment, the content provider 506 provides an authorization list, and the authority operates to evaluate the provided authorization list and whether the authorization list should be certified. Operate to determine. As such, any privilege granted to the content via the authorization list is initially authorized by the authority 504.At block 606, the authority generates a change detection indicator that associates the content descriptor with the authorization list. For example, in one embodiment, authority 504 uses the content descriptor and authorization list to generate a digital signature. However, any other change detection technique could be used. In one embodiment, the content descriptor, authorization list and digital signature form a content package, which can be transmitted over the network 502 to a wireless device or any other entity. The content descriptor can be the actual content or a content locator.At block 608, an indication is provided to the wireless device that content is available for download. For example, device 508 can browse a catalog of available content provided by authority 504. In one embodiment, authority 504 transmits an icon for display on wireless device 508, as indicated by path 5b, which can be selected for the user to access the content. In one embodiment, the runtime environment running on the wireless device 508 receives and displays an icon for the device user.At block 610, the wireless device user submits a request to the authority to download the application or multimedia content. For example, the device user selects an icon displayed on the device 508 and the runtime environment running on the device 508 uses the network 502 to the authority 504, as shown by path 5c. Transmit the request and download the application or multimedia content associated with the displayed icon.At block 612, in response to the request for content, the content package is transmitted to the device. For example, authority 504 responds to device 508's request by transmitting the content package to device 508 (as indicated by path 5d), which content package includes a content descriptor, an authorization list, and a digital signature. including. The content package may also contain additional information about the content or additional security information used such as a credential or key used to verify that the device has received the content package from the authority 504. Can do. For example, credentials can allow a device to verify that it has received a content package from a trusted source.At block 614, the runtime environment running on the wireless device launches a content viewer that operates to process a content package that allows the device user to view the requested content. 
For example, a BREW runtime environment running on the wireless device 508 launches the content viewer 116.At block 616, the content viewer uses the digital signature to verify the authenticity of the authorization list and the content descriptor. For example, the content viewer 116 uses an authorization list and content descriptor to generate a second digital signature, which is compared with the digital signature received from the authority 504 in the content package. The Assuming that the authorization list and content descriptor are authenticated, the method proceeds to block 616.At block 618, the content viewer processes the content package and determines that it includes a content descriptor that identifies the content data. For example, the content descriptor is an address (URL) to the content located at the content server 506.At block 620, the content viewer transmits a request to the content server to receive the content. For example, the content viewer 514 transmits the request to the content server 506 via the wireless network 502 as indicated by the path 5e. The request is a request to receive the content pointed to by the content descriptor.In block 622, in response to the request, the content server transmits the content to the wireless device. For example, the content server 506 receives the request and transmits in response the content identified by the content descriptor to the wireless device 508, as indicated by path 5f.At block 624, the content is then rendered on the device. When content is rendered, the content viewer uses the restrictions provided in the authorization list to apply to the content, so that the content is selected feature, feature, device setting, and / or Access to specific information stored on the device is restricted. Virtually any type of resource limit or operational limit can be provided based on authorizations in the authorization list. As such, the restriction system allows the device 508 to download content from a remote server and render the content so that the device resource or device information is not accessed without proper authorization. Know that you are limiting content. Content limitations occur without bothering device users who need to decide when and how to limit content.The method 600 describes the use of a content package that includes an authorization list, a content descriptor, and a digital signature, but in one or more embodiments, no content package is used. For example, authorization lists, content descriptors, and change detection indicators may be transmitted to the wireless device from the same source or different sources. As such, the content provider can transmit content descriptors, the data server can transmit authorization lists, and the authority can transmit change detection indicators. In another embodiment, the change detection indicator is incorporated into the authorization list and / or content descriptor. Virtually any combination of information is possible, and information can be transmitted from one or any number of transmission sources to the device.In one embodiment, the wireless device operates to authenticate that the change detection indicator has been generated by the proper authority. For example, any type of encoding, encryption, credential, etc. can be used to authenticate the change detection indicator. Once the change detection indicator is authenticated, it is used to authenticate the authorization list and content descriptor. 
As such, no matter how the information is transmitted to the device, the authentication process allows the device to verify that it has the authentication information, and the authentication information renders content securely on the device. Can be used for.Method 600 is intended to be exemplary and is not intended to limit the operation of the various embodiments described herein. For example, it will be apparent to those skilled in the art to make minor changes, additions, or deletions to any of the methods described. Moreover, the described method steps may be integrated, rearranged, or reordered without departing from the scope of the described embodiments.FIG. 7 illustrates one embodiment of an authority 700 suitable for performing one or more embodiments of a restriction system, as described herein. Authority 700 and all its functional blocks can be provided as software, hardware, or both. In one embodiment, the functional blocks are provided as instructions stored in memory 708 and executed by processing logic 702. In another embodiment, some of the functional blocks, such as package generator 712, are application specific hardware (ie, a gate array) or any other hardware, logic circuit, or described function It can be provided as a circuit that can provide sex.Network interface 706 operates to provide communication 714 between the authority and the data network. Network interface 706 allows authority 700 to communicate with content servers, devices, and other network entities.User interface 710 operates to provide information exchange between authority 700 and the user via user input 716. User interface 710 is used to allow a user to communicate operating parameters to processing logic 702.In one embodiment, the package generator logic 712 operates to receive content and an authorization list, to evaluate the authorization list, and to approve or disapprove it. In another embodiment, the package logic 712 operates to generate an authorization list based on received content and other parameters. Once the authorized authorization list is obtained, the logic circuit 712 operates to associate the authorization list with the content using a change detection indicator such as a digital signature. The content, authorization list, and digital signature are then integrated into the content package, which is transmitted to the device via the network interface 706.It should be noted that device 700 describes only one embodiment of an authority suitable for providing a restriction system as described herein. Similarly, it is possible to use another functional element, rearrange the element, or use another type of device to provide the restriction system. As such, the embodiments described herein are not limited to the means shown in FIG.FIG. 8 illustrates one embodiment of a device 800 suitable for performing one or more embodiments of a restriction system as described herein. Device 800 includes processing logic 802, internal bus 804, network interface 806, rendering logic 812, memory 808, and user interface 810. In one embodiment, all functional blocks of device 800 are provided as instructions stored in memory 808 and executed by processing logic 802. In another embodiment, some of the functional blocks, such as content viewer 116, provide application functionality as application specific hardware (ie, a gate array) connected to bus 804. It can be provided as any other hardware circuit that can. 
The network interface 806 is a means of communicating, storing, or copying means including network connections 816 that can be connected to a local or remote network, device, or system. Either can be used.In one embodiment, processing logic 802 executes program instructions stored in memory 808 that activates runtime environment 814. The runtime environment 814 processes the content package received via the network interface 806 and launches the content viewer 116 in response. Content viewer 116 operates to render content contained in the content package using rendering logic 812. The content viewer renders the content using restrictions based on the authorization list given in the content package. In one embodiment, the content package includes a content descriptor that identifies the location of the content that is about to be rendered. The content viewer 116 uses the content descriptor to obtain content from a location specified via the network interface 806. Once acquired, the content is rendered via the rendering logic 812.It should be noted that the device 800 describes only one embodiment of a device suitable for implementing a restriction system as described herein. It is equally possible to use another functional element, rearrange multiple elements, or use a different type of device to provide the restriction system. As such, the embodiments described herein are not limited to the means shown in FIG.Restriction OverrideIn one embodiment, the device user can release the access rights or restrictions granted in the authorization list. For example, by providing specific user input, the user can revoke the access rights granted in the authorization list, since the application or content is accessed with specific device resources or stored information. Can be protected. As such, the device user retains the ability to control access to device resources even if access to those resources is not granted in the authorization list.A restriction system has been described that includes a method and apparatus for enforcing application level restrictions on local and remote applications and content rendered on a wireless device. The system is suitable for use with all types of wireless devices and, in particular, provides a wide range of networks while providing features, functions, settings, information and limitations to protect other device systems. -Suitable for use with mobile phones to provide access to resources.Thus, while one or more embodiments of methods and apparatus for enforcing application level restrictions have been illustrated and described herein, various modifications can be made from its spirit or essential characteristics. It will be appreciated that it can be performed on multiple embodiments without departing. Accordingly, the disclosure and description herein are intended to illustrate rather than limit the scope of the invention as set forth in the claims. |
A portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station. Further, the portable computing device may include a wired dock connection formed in the lower housing portion, the upper housing portion, or a combination thereof. The wired dock connection may be configured to provide connectivity between the PCD and the PCD docking station. |
A portable computing device (PCD) docking station, the PCD docking station comprising: an upper housing portion; a lower housing portion hingedly connected to the upper housing portion; a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station; and a wireless dock connection configured to provide connectivity between the PCD and the PCD docking station. The PCD docking station of claim 2, wherein the wireless dock connection includes a wireless connection module having a Bluetooth chip, a broadband wireless interface, and a Wi-Fi chip. The PCD docking station of claim 3, wherein the Bluetooth chip comprises an. 1 chip operating at a frequency of. 4 GHz. The PCD docking station of claim 4, wherein the Wi-Fi chip comprises an. x chip operating at frequency of. 4/. 7 GHz. The PCD docking station of claim 5, wherein the broadband wireless interface comprises a sixty GigaHertz (60 GHz) chip operating at a frequency of 60 GHz. The PCD docking station of claim 5, wherein the wireless dock connection is configured to provide connectivity between a system-on-chip within the PCD and a battery, a first universal serial bus-high speed (USB-HS) port, a second USB-HS port, a display, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 5, wherein the wireless dock connection is configured to provide connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a Gigabit Ethernet Media Access Controller 39 WO 2010/110961 PCT/US2010/024439 (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 5, wherein the wireless dock connection is configured to provide connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 5, wherein the wireless dock connection is configured to provide connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. A portable computing device (PCD) docking station, the PCD docking station comprising: an upper housing portion; a lower housing portion hingedly connected to the upper housing portion; a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station; and a wireless connection means for providing connectivity between the PCD and the PCD docking station. 
The PCD docking station of claim 10, wherein the wireless connection means includes a wireless connection module having a Bluetooth chip, a broadband wireless interface, and a Wi-Fi chip. WO 2010/110961 PCT/US2010/02443. The PCD docking station of claim 11, wherein the Bluetooth chip comprises an. 1 chip operating at a frequency of. 4 GHz. The PCD docking station of claim 12, wherein the Wi-Fi chip comprises an. x chip operating at frequency of. 4/. 7 GHz. The PCD docking station of claim 13, wherein the broadband wireless interface comprises a sixty GigaHertz (60 GHz) chip operating at a frequency of 60 GHz. The PCD docking station of claim 14, wherein the wireless connection means is configured for providing connectivity between a system-on-chip within the PCD and a battery, a first universal serial bus-high speed (USB-HS) port, a second USB-HS port, a display, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 14, wherein the wireless connection means is configured for providing connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 14, wherein the wireless connection means is configured for providing connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 14, wherein the wireless connection means is configured for providing connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a 41 WO 2010/110961 PCT/US2010/024439 second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. A portable computing device (PCD) docking station, the PCD docking station comprising: an upper housing portion; a lower housing portion hingedly connected to the upper housing portion; a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station; and a wireless dock connection configured to provide connectivity between a system-on-chip within the PCD and a battery, a first universal serial bus-high speed (USB-HS) port, a second USB-HS port, a display, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 19, wherein the wireless dock connection includes a wireless connection module having a Bluetooth chip, a broadband wireless interface, and a Wi-Fi chip. The PCD docking station of claim 20, wherein the Bluetooth chip comprises an. 1 chip operating at a frequency of. 4 GHz. The PCD docking station of claim 21, wherein the Wi-Fi chip comprises an. x chip operating at frequency of. 4/. 7 GHz. 
The PCD docking station of claim 22, wherein the broadband wireless interface comprises a sixty GigaHertz (60 GHz) chip operating at a frequency of 60 GHz. A portable computing device (PCD) docking station, the PCD docking station comprising: an upper housing portion; a lower housing portion hingedly connected to the upper housing portion; 42 WO 2010/110961 PCT/US2010/024439 a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station; and a wireless dock connection configured to provide connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 24, wherein the wireless dock connection includes a wireless connection module having a Bluetooth chip, a broadband wireless interface, and a Wi-Fi chip. The PCD docking station of claim 25, wherein the Bluetooth chip comprises an. 1 chip operating at a frequency of. 4 GHz. The PCD docking station of claim 26, wherein the Wi-Fi chip comprises an. x chip operating at frequency of. 4/. 7 GHz. The PCD docking station of claim 27, wherein the broadband wireless interface comprises a sixty GigaHertz (60 GHz) chip operating at a frequency of 60 GHz. A portable computing device (PCD) docking station, the PCD docking station comprising: an upper housing portion; a lower housing portion hingedly connected to the upper housing portion; a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station; and a wireless dock connection configured to provide connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller 43 WO 2010/110961 PCT/US2010/024439 (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. The PCD docking station of claim 29, wherein the wireless dock connection includes a wireless connection module having a Bluetooth chip, a broadband wireless interface, and a Wi-Fi chip. The PCD docking station of claim 30, wherein the Bluetooth chip comprises an. 1 chip operating at a frequency of. 4 GHz. The PCD docking station of claim 31, wherein the Wi-Fi chip comprises an. x chip operating at frequency of. 4/. 7 GHz. The PCD docking station of claim 32, wherein the broadband wireless interface comprises a sixty GigaHertz (60 GHz) chip operating at a frequency of 60 GHz. 
A portable computing device (PCD) docking station, the PCD docking station comprising: an upper housing portion; a lower housing portion hingedly connected to the upper housing portion; a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station; and a wireless dock connection configured to provide connectivity between a system-on-chip within the PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. 44 WO 2010/110961 PCT/US2010/02443. The PCD docking station of claim 34, wherein the wireless dock connection includes a wireless connection module having a Bluetooth chip, a broadband wireless interface, and a Wi-Fi chip. The PCD docking station of claim 35, wherein the Bluetooth chip comprises an. 1 chip operating at a frequency of. 4 GHz. The PCD docking station of claim 36, wherein the Wi-Fi chip comprises an. x chip operating at frequency of. 4/. 7 GHz. The PCD docking station of claim 37, wherein the broadband wireless interface comprises a sixty GigaHertz (60 GHz) chip operating at a frequency of 60 GHz. 45. |
WO 2010/110961 PCT/US2010/024439 SYSTEM AND METHOD OF PROVIDING WIRELESS CONNECTIVITY BETWEEN A PORTABLE COMPUTING DEVICE AND A PORTABLE COMPUTING DEVICE DOCKING STATION RELATED APPLICATIONS [0001] The present application claims priority to U.S. Provisional Patent Application Serial Number 61/164,139, entitled SYSTEM AND METHOD OF PROVIDING WIRELESS CONNECTIVITY BETWEEN A PORTABLE COMPUTING DEVICE AND A PORTABLE COMPUTING DEVICE DOCKING STATION, filed on March 27, 2009. FIELD [0002] The present invention generally relates to portable computing devices, and more particularly, to portable computing device docking stations. DESCRIPTION OF THE RELATED ART [0003] Portable computing devices (PCDs) are ubiquitous. These devices may include cellular telephones, portable digital assistants (PDAs), portable game consoles, palmtop computers, and other portable electronic devices. As technology increases, PCDs are becoming increasingly powerful and rival laptop computers and desktop computers in computing power and storage capabilities. [0004] One drawback to using a PCD, however, is the small form factor typically associated therewith. As the PCD gets smaller and is made more easily portable, using the PCD may become increasingly difficult. Further, the small form factor of a PCD may limit the amount of ports, or connections, that may be incorporated in the shell, or housing, of the PCD. As such, even as PCDs become more powerful and have increased capabilities, access to the power and capabilities may be limited by the sizes of the PCDs. [0005] Accordingly, what is needed is an improved for system and method for taking advantage of the computing capabilities provided by a PCD. 1 WO 2010/110961 PCT/US2010/024439 SUMMARY OF THE DISCLOSURE [0006] A portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof, wherein the PCD engagement mechanism is configured to removably engage a PCD when the PCD is docked with the PCD docking station. Further, the portable computing device may include a wired dock connection formed in the lower housing portion, the upper housing portion, or a combination thereof. The wired dock connection may be configured to provide connectivity between the PCD and the PCD docking station. [0007] In a particular aspect, the wireless dock connection may include a wireless connection module having a Bluetooth chip, a sixty GigaHertz (60 GHz) chip, and a Wi-Fi chip. The Bluetooth chip may include an 802.15.1 chip operating at a frequency of 2.4 GHz. The Wi-Fi chip may include an 802.1 1.x chip operating at frequency of 2.4/5.7 GHz. Further, the 60 GHz chip may operate at a frequency of 60 GHz. [0008] In a particular aspect, the wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, a first universal serial bus-high speed (USB-HS) port, a second USB-HS port, a display, a ground connection, or a combination thereof within the PCD docking station. 
[0009] In another aspect, the wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0010] In yet another aspect, the wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0011] In still another aspect, the wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media 2 WO 2010/110961 PCT/US2010/024439 Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0012] In another aspect, a portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof. The PCD engagement mechanism may be configured to removably engage a PCD when the PCD is docked with the PCD docking station. The portable computing device may also include a wired connection means for providing connectivity between the PCD and the PCD docking station. The wired connection means may be disposed in the lower housing portion, the upper housing portion, or a combination thereof. [0013] In a particular aspect, the wireless connection means may include a wireless connection module having a Bluetooth chip, a sixty GigaHertz (60 GHz) chip, and a Wi-Fi chip. The Bluetooth chip may include an 802.15.1 chip operating at a frequency of 2.4 GHz. The Wi-Fi chip may include an 802.1 1.x chip operating at frequency of 2.4/5.7 GHz. Further, the 60 GHz chip may operate at a frequency of 60 GHz. [0014] In a particular aspect, the wired connection means may be configured for providing connectivity between a system-on-chip within a PCD and a battery, a first universal serial bus-high speed (USB-HS) port, a second USB-HS port, a display, a ground connection, or a combination thereof within the PCD docking station. [0015] In another aspect, a wired connection means may be configured for providing connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. 
[0016] In yet another aspect, the wired connection means may be configured for providing connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0017] In still another aspect, the wired connection means may be configured for providing connectivity between a system-on-chip within a PCD and a battery, an audio 3 WO 2010/110961 PCT/US2010/024439 input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0018] In another aspect, a portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof. The PCD engagement mechanism may be configured to removably engage a PCD when the PCD is docked with the PCD docking station. The portable computing device may also include a wired dock connection formed in the lower housing portion, the upper housing portion, or a combination thereof. The wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, a first universal serial bus-high speed (USB-HS) port, a second USB-HS port, a display, a ground connection, or a combination thereof within the PCD docking station. [0019] In a particular aspect, the wireless dock connection may include a wireless connection module having a Bluetooth chip, a sixty GigaHertz (60 GHz) chip, and a Wi-Fi chip. The Bluetooth chip may include an 802.15.1 chip operating at a frequency of 2.4 GHz. The Wi-Fi chip may include an 802.1 1.x chip operating at frequency of 2.4/5.7 GHz. Further, the 60 GHz chip may operate at a frequency of 60 GHz. [0020] In another aspect, a portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof. The PCD engagement mechanism may be configured to removably engage a PCD when the PCD is docked with the PCD docking station. The portable computing device may also include a wired dock connection formed in the lower housing portion, the upper housing portion, or a combination thereof. The wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0021] In a particular aspect, the wireless dock connection may include a wireless connection module having a Bluetooth chip, a sixty GigaHertz (60 GHz) chip, and a 4 WO 2010/110961 PCT/US2010/024439 Wi-Fi chip. The Bluetooth chip may include an 802.15.1 chip operating at a frequency of 2.4 GHz. 
The Wi-Fi chip may include an 802.1 1.x chip operating at frequency of 2.4/5.7 GHz. Further, the 60 GHz chip may operate at a frequency of 60 GHz. [0022] In another aspect, a portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof. The PCD engagement mechanism may be configured to removably engage a PCD when the PCD is docked with the PCD docking station. The portable computing device may also include a wired dock connection formed in the lower housing portion, the upper housing portion, or a combination thereof. The wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. [0023] In a particular aspect, the wireless dock connection may include a wireless connection module having a Bluetooth chip, a sixty GigaHertz (60 GHz) chip, and a Wi-Fi chip. The Bluetooth chip may include an 802.15.1 chip operating at a frequency of 2.4 GHz. The Wi-Fi chip may include an 802.1 1.x chip operating at frequency of 2.4/5.7 GHz. Further, the 60 GHz chip may operate at a frequency of 60 GHz. [0024] In another aspect, a portable computing device (PCD) docking station is disclosed and may include an upper housing portion, a lower housing portion hingedly connected to the upper housing portion, and a PCD engagement mechanism formed in the lower housing portion, the upper housing portion, or a combination thereof. The PCD engagement mechanism may be configured to removably engage a PCD when the PCD is docked with the PCD docking station. The portable computing device may also include a wired dock connection formed in the lower housing portion, the upper housing portion, or a combination thereof. The wired dock connection may be configured to provide connectivity between a system-on-chip within a PCD and a battery, an audio input/output, a mobile display digital interface (MDDI), a Gigabit Ethernet Media Access Controller (GbE MAC), a first USB-HS port, a second USB-HS port, a the third USB-HS port, a display, an RGB(A) connector, a ground connection, or a combination thereof within the PCD docking station. WO 2010/110961 PCT/US2010/024439 [0025] In a particular aspect, the wireless dock connection may include a wireless connection module having a Bluetooth chip, a sixty GigaHertz (60 GHz) chip, and a Wi-Fi chip. The Bluetooth chip may include an 802.15.1 chip operating at a frequency of 2.4 GHz. The Wi-Fi chip may include an 802.1 1.x chip operating at frequency of 2.4/5.7 GHz. Further, the 60 GHz chip may operate at a frequency of 60 GHz. BRIEF DESCRIPTION OF THE DRAWINGS [0026] In the figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. [0027] FIG. 1 is a front plan view of a portable computing device (PCD) in a closed position; [0028] FIG. 2 is a front plan view of a PCD in an open position; [0029] FIG. 3 is a bottom plan view of a PCD; [0030] FIG. 4 is a side plan view of a PCD; [0031] FIG. 5 is a block diagram of a first aspect of a PCD; [0032] FIG. 
6 is a front plan view of a first aspect of a PCD docking station in a closed configuration; [0033] FIG. 7 is a rear plan view of a first aspect of a PCD docking station in a closed configuration; [0034] FIG. 8 is a first side plan view of a first aspect of a PCD docking station in a closed configuration; [0035] FIG. 9 is a second side plan view of a first aspect of a PCD docking station in a closed configuration; [0036] FIG. 10 a front plan view of a first aspect of a PCD docking station in an open configuration; [0037] FIG. 11 is a front plan view of a first aspect of a PCD docking station in an open configuration with a PCD docked therewith; [0038] FIG. 12 is a side plan view of a second aspect of a PCD docking station in a closed configuration; [0039] FIG. 13 is a front plan view of a second aspect of a PCD docking station in an open configuration; [0040] FIG. 14 is a front plan view of a second aspect of a PCD docking station in an open configuration with a PCD partially docked therewith; 6 WO 2010/110961 PCT/US2010/024439 [0041] FIG. 15 is a front plan view of a second aspect of a PCD docking station in an open configuration with a PCD docked therewith; [0042] FIG. 16 is a side plan view of a third aspect of a PCD docking station in a closed configuration; [0043] FIG. 17 is a front plan view of a third aspect of a PCD docking station in an open configuration with a PCD partially docked therewith; [0044] FIG. 18 is a side plan view of a fourth aspect of a PCD docking station in a closed configuration; [0045] FIG. 19 is a front plan view of a fourth aspect of a PCD docking station in an open configuration with a PCD docking tray in an open position; [0046] FIG. 20 is a front plan view of a fourth aspect of a PCD docking station in an open configuration with a PCD docking tray in an open position; [0047] FIG. 21 is a front plan view of a fourth aspect of a PCD docking station in an open configuration with a PCD docking tray in an open position and with a PCD docked therewith; [0048] FIG. 22 is a side plan view of a fourth aspect of a PCD docking station in an open configuration with a PCD docking tray in an open position and with a PCD docked therewith; [0049] FIG. 23 is a side plan view of a fifth aspect of a PCD docking station in a closed configuration; [0050] FIG. 24 is a front plan view of a fifth aspect of a PCD docking station in an open configuration with a PCD docking tray in an open position; [0051] FIG. 25 is a front plan view of a fifth aspect of a PCD docking station in an open configuration with a PCD docking tray in an open position and with a PCD docked therewith; [0052] FIG. 26 is a front plan view of a sixth aspect of a PCD docking station in an open configuration; [0053] FIG. 27 is a front plan view of a sixth aspect of a PCD docking station in an open configuration with a PCD docked therewith; [0054] FIG. 28 is a block diagram of a first aspect of a wired PCD/PCD docking station system; [0055] FIG. 29 is a block diagram of a second aspect of a wired PCD/PCD docking station system; 7 WO 2010/110961 PCT/US2010/024439 [0056] FIG. 30 is a block diagram of a third aspect of a wired PCD/PCD docking station system; [0057] FIG. 31 is a block diagram of a fourth aspect of a wired PCD/PCD docking station system; [0058] FIG. 32 is a block diagram of a second aspect of a PCD; [0059] FIG. 33 is a block diagram of a wireless PCD/PCD docking station system; and [0060] FIG. 34 is a block diagram of wireless connector protocol stack. 
DETAILED DESCRIPTION [0061] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. [0062] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0063] The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0064] As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media 8 WO 2010/110961 PCT/US2010/024439 having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). [0065] Referring initially to FIG. 1 through FIG. 4, an exemplary portable computing device (PCD) is shown and is generally designated 100. As shown, the PCD 100 may include a housing 102. The housing 102 may include an upper housing portion 104 and a lower housing portion 106. FIG. 1 shows that the upper housing portion 104 may include a display 108. In a particular aspect, the display 108 may be a touchscreen display. The upper housing portion 104 may also include a trackball input device 110. Further, as shown in FIG. 1, the upper housing portion 104 may include a power on button 112 and a power off button 114. As shown in FIG. 1, the upper housing portion 104 of the PCD 100 may include a plurality of indicator lights 116 and a speaker 118. Each indicator light 116 may be a light emitting diode (LED). [0066] In a particular aspect, as depicted in FIG. 2, the upper housing portion 104 is movable relative to the lower housing portion 106. Specifically, the upper housing portion 104 may be slidable relative to the lower housing portion 106. As shown in FIG. 2, the lower housing portion 106 may include a multi-button keyboard 120. In a particular aspect, the multi-button keyboard 120 may be a QWERTY keyboard. 
The multi-button keyboard 120 may be revealed when the upper housing portion 104 is moved relative to the lower housing portion 106. FIG. 2 further illustrates that the PCD 100 may include a reset button 122 on the lower housing portion 106. [0067] As shown in FIG. 3, the PCD 100 may include a multi-pin connector array 130 established, or otherwise disposed, in a short end of the PCD 100, e.g., a bottom of the PCD 100. Alternatively, as illustrated in FIG. 4, the PCD 100 may include a multi-pin connector array 132 established, or otherwise disposed, in a long end of the PCD 100, e.g., a left side of the PCD 100 or a right side of the PCD 100. In a particular aspect, the multi-pin connector array 130, 132 may provide connectivity between the PCD 100 and an aspect of a PCD docking station, described in detail below. [0068] Referring to FIG. 5, an exemplary, non-limiting aspect of a portable computing device (PCD) is shown and is generally designated 520. As shown, the PCD 520 includes an on-chip system 522 that includes a digital signal processor 524 and an analog signal processor 526 that are coupled together. The on-chip system 522 may 9 WO 2010/110961 PCT/US2010/024439 include more than two processors. For example, the on-chip system 522 may include four core processors and an ARM 11 processor, i.e., as described below in conjunction with FIG. 32. It may be appreciated that the on-chip system 522 may include other types of processors, e.g., a CPU, a multi-core CPU, a multi-core DSP, a GPU, a multi core GPU, or any combination thereof. [0069] As illustrated in FIG. 5, a display controller 528 and a touchscreen controller 530 are coupled to the digital signal processor 524. In turn, a touchscreen display 532 external to the on-chip system 522 is coupled to the display controller 528 and the touchscreen controller 530. [0070] FIG. 5 further indicates that a video encoder 534, e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the digital signal processor 524. Further, a video amplifier 536 is coupled to the video encoder 534 and the touchscreen display 532. Also, a video port 538 is coupled to the video amplifier 536. As depicted in FIG. 5, a universal serial bus (USB) controller 540 is coupled to the digital signal processor 524. Also, a USB port 542 is coupled to the USB controller 540. A memory 544 and a subscriber identity module (SIM) card 546 may also be coupled to the digital signal processor 524. Further, as shown in FIG. 5, a digital camera 548 may be coupled to the digital signal processor 524. In an exemplary aspect, the digital camera 548 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera. [0071] As further illustrated in FIG. 5, a stereo audio CODEC 550 may be coupled to the analog signal processor 526. Moreover, an audio amplifier 552 may coupled to the stereo audio CODEC 550. In an exemplary aspect, a first stereo speaker 554 and a second stereo speaker 556 are coupled to the audio amplifier 552. FIG. 5 shows that a microphone amplifier 558 may be also coupled to the stereo audio CODEC 550. Additionally, a microphone 560 may be coupled to the microphone amplifier 558. In a particular aspect, a frequency modulation (FM) radio tuner 562 may be coupled to the stereo audio CODEC 550. Also, an FM antenna 564 is coupled to the FM radio tuner 562. 
Further, stereo headphones 566 may be coupled to the stereo audio CODEC 550. [0072] FIG. 5 further indicates that a radio frequency (RF) transceiver 568 may be coupled to the analog signal processor 526. An RF switch 570 may be coupled to the RF transceiver 568 and an RF antenna 572. As shown in FIG. 5, a keypad 574 may be coupled to the analog signal processor 526. Also, a mono headset with a microphone WO 2010/110961 PCT/US2010/024439 576 may be coupled to the analog signal processor 526. Further, a vibrator device 578 may be coupled to the analog signal processor 526. FIG. 5 also shows that a power supply 580 may be coupled to the on-chip system 522. In a particular aspect, the power supply 580 is a direct current (DC) power supply that provides power to the various components of the PCD 520 that require power. Further, in a particular aspect, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC) to DC transformer that is connected to an AC power source. [0073] As shown in FIG. 5, the PCD 520 may also include a global positioning system (GPS) module 582. The GPS module 582 may be used to determine the location of the PCD 520. Further, the GPS module 582 may be used to determine whether the PCD 520 is in motion by determining successive location information. Also, based on the successive location information the rate at which the PCD 520 is moving may be determined. [0074] FIG. 5 indicates that the PCD 520 may include a management module 584, e.g., within the memory 544. The management module 584 may be used to manage the power of the PCD, the power of a PCD docking station, or a combination thereof. [0075] Further, in another aspect, the management module 584 may be used to manage the memory 544 within the PCD 520, a memory within a PCD docking station, or a combination thereof. Specifically, the management module 584 may be used to manage one or more applications stored within the PCD 520, one or more content items stored within the PCD 520, one or more applications stored within a PCD docking station, one or more content items stored within a PCD docking station, one or more application download requests received from a PCD 520, one or more content item download requests received from a PCD 520, one or more application download requests received from a PCD docking station, one or more content item download requests received from a PCD docking station, or a combination thereof. [0076] In yet another aspect, the management module 584 may also be used to manage security between the PCD 520 and a PCD docking station, e.g., a mated PCD docking station, an unmated PCD docking station, or a combination thereof. Further, the management module 584 may also be used to manage the display 532 within the PCD 520, a display within a PCD docking station, or a combination thereof. Additionally, the management module 584 may be used to manage calls received at the PCD 520, e.g., while the PCD 520 is docked or undocked with a PCD docking station. The management module 584 may be used to manage calls transmitted from the PCD 11 WO 2010/110961 PCT/US2010/024439 520, e.g., while the PCD 520 is docked or undocked with a PCD docking station. The management module 584 may also be used to manage other data transmission to and from the PCD 520 while the PCD 520 is docked or undocked, e.g., via a Wi-Fi network, a WPAN, a cellular network, or any other wireless data network. 
[0077] In still another aspect, the management module 584 may be used to manage processors within the PCD 520, e.g., when the PCD 520 is docked with a PCD docking station, when the PCD 520 is undocked with a PCD docking station, or a combination thereof The management module 584 may also be used to manage the execution of applications within the PCD 520 when the PCD is docked or undocked with a PCD docking station. For example, the management module 584 may manage the execution of primary application versions, secondary application versions, standard application versions, enhanced application versions, or a combination thereof. [0078] FIG. 5 indicates that the PCD 520 may further include a sensor 586 connected to the DSP 524. The sensor 586 may be a motion sensor, a tilt sensor, a proximity sensor, a shock sensor, or a combination thereof. The sensor 586 may be used for situational awareness applications. For example, the sensor 586 may be used to detect the motion of a user lifting the PCD 520 to his or her ear and at the apex of the motion automatically connecting an incoming call. Further, the sensor 586 may detect a prolonged lack of motion of the PCD 520 whereas the PCD 520 may be automatically powered down, or placed in a sleep mode. The sensor 586 may remain powered so that when motion is once again detected, the PCD 520 may be switched from the sleep mode, or an off mode, into an active mode. [0079] The sensor 586 may be used with tilt sensing applications. For example, the sensor 586 may be used for user interface applications in which movement is relevant. The sensor 586 may be used to sense picture, or screen, orientation. Further, the sensor 586 may be used to navigate, scroll, browse, zoom, pan, or a combination thereof based on tilt sensing. The sensor 586 may also be used in conjunction with gaming applications. In another application, the sensor 586 may be used for shock detection in order to protect a hard disk drive within the PCD 520 or a hard disk drive within a PCD docking station in which the PCD 520 is docked, or otherwise, engaged. Further, the sensor 586 may be used for tap detection. [0080] FIG. 5 further indicates that the PCD 520 may also include a network card 588 that may be used to access a data network, e.g., a local area network, a personal area network, or any other network. The network card 588 may be a Bluetooth network 12 WO 2010/110961 PCT/US2010/024439 card, a WiFi network card, a personal area network (PAN) card, a personal area network ultra-low-power technology (PeANUT) network card, or any other network card well known in the art. Further, the network card 588 may be incorporated into a chip, i.e., the network card 588 may be a full solution in a chip, and may not be a separate network card 588. [0081] As depicted in FIG. 5, the touchscreen display 532, the video port 538, the USB port 542, the camera 548, the first stereo speaker 554, the second stereo speaker 556, the microphone 560, the FM antenna 564, the stereo headphones 566, the RF switch 570, the RF antenna 572, the keypad 574, the mono headset 576, the vibrator 578, and the power supply 580 are external to the on-chip system 522. [0082] In a particular aspect, one or more of the method steps described herein may be stored in the memory 544 as computer program instructions. These instructions may be executed by a processor 524, 526 in order to perform the methods described herein. 
Further, the processors, 524, 526, the display controller 528, the touchscreen controller 530, the memory 544, the management module 584, the network card 588, or a combination thereof may serve as a means for performing one or more of the method steps described herein. [0083] Referring now to FIG. 6 through FIG. 11, a first aspect of a PCD docking station is shown and is generally designated 600. As shown, the PCD docking station 600 may include a housing 602 having a generally flat, boxed shaped lower housing portion 604 and a generally flat, boxed shaped upper housing portion 606. In a particular aspect, the upper housing portion 606 may be connected to the lower housing portion 604 by a first hinge 608 and a second hinge 610. The upper housing portion 606 of the housing 602 may rotate around the hinges 608, 610 with respect to the lower housing portion 604 of the housing 602. Accordingly, the upper housing portion 606 may be rotated, or otherwise moved, relative to the lower housing portion 604 of the housing 602 between a closed position, or closed configuration, shown in FIG. 6 through FIG. 9, and an open position, or open configuration, shown in FIG. 10 and FIG. 11. It may be appreciated that the open position may include a plurality of open positions in which the upper housing portion 606 of the housing 602 is rotated away from the lower housing portion 604 of the housing 602 and disposed at a plurality of angles with respect to the lower housing portion 604 of the housing 602. [0084] Although, the PCD docking station 600 is shown with hinges 608, 610 coupling the upper housing portion 606 to the lower housing portion 604. It may be 13 WO 2010/110961 PCT/US2010/024439 appreciated that the upper housing portion 606 may be coupled, or otherwise connected, to the lower housing portion 604 via a slide assembly (not shown). The upper housing portion 606 may slide relative to the lower housing portion 604 in order to reveal one or more components within the lower housing portion 604, the upper housing portion 606, or a combination thereof. Further, the upper housing portion 606 and the lower housing portion 604 may snap together or be coupled, or otherwise connected, via various other coupling mechanisms well known in the art. [0085] As shown in FIG. 6 through FIG. 9, the PCD docking station 600 may include a first front foot 612 and a second front foot 614. Further, the PCD docking station 600 may also include a first rear foot 616 and a second rear foot 618. Each foot 612, 614, 616, 618 may be made from a polymer, rubber, or other similar type of material to support the PCD docking station 600 when placed on a desk or table and to prevent the PCD docking station 600 from slipping with respect to the desk or table. [0086] As illustrated in FIG. 6, FIG. 10, and FIG. 11, the PCD docking station 600 may include a latch assembly 620. The latch assembly 620 may include a first hook 622 and a second hook 624 extending from the upper housing portion 606 of the housing 602. The first hook 622 and the second hook 624 may be connected to each other and a slider 626. The latch assembly 620 may also include a first hook pocket 628 and a second hook pocket 630 formed within the lower housing portion 604 of the housing 602. The first hook pocket 628 and the second hook pocket 630 may be sized and shaped to receive and engage the first hook 622 and the second hook 624. 
The slider 626 may be moved, or otherwise slid, relative to the upper housing portion 606 of the housing 602 in order to release the hooks 624, 626 from the hook pockets 628, 630 and unlock the PCD docking station 600 in order to allow the upper housing portion 606 of the housing 602 to be rotated with respect to the lower housing portion 604 of the housing 602. [0087] FIG. 9 illustrates that the lower housing portion 604 of the housing 602 may include a plurality of external device connections 640. For example, the lower housing portion 604 of the housing 602 may include an IEEE 1284 connection 642, a first universal serial bus (USB) connection 644, a second USB connection 646, a registered jack (RJ) 11 connection 648, an RJ-45 connection 650, a microphone jack 652, and a headphone/speaker jack 654. Further, the lower housing portion 604 of the housing 602 may include an S-video connection 656, a video graphics array (VGA) connection 658, and an alternating current (AC) power adapter connection 660. The lower housing 14 WO 2010/110961 PCT/US2010/024439 portion 604 of the housing 602 may include other connections, described elsewhere herein. [0088] Referring now to FIG. 10 and FIG. 11, the upper housing portion 606 of the PCD docking station 600 may include a display 670 incorporated therein. For example, the display 670 may be a liquid crystal display (LCD), a light emitting diode (LED) display, a backlit-LED display, an organic light emitting diode (OLED) display, or any other type of display. The lower housing portion 604 of the PCD docking station 600 may include a keyboard 672 incorporated therein. The keyboard 672 may be a fully QWERTY keyboard. The lower housing portion 604 of the PCD docking station 600 may include a touch pad mouse 674 incorporated therein. Further, the lower housing portion 604 of the PCD docking station 600 may include a first mouse button 676 and a second mouse button 678 incorporated therein. The mouse buttons 676, 678 may be proximal to the touch pad mouse 674. Additionally, as shown in FIG. 10 and FIG. 11, the lower housing portion 604 of the housing 602 may include a first speaker 680 and a second speaker 682 incorporated therein. The lower housing portion 604 of the housing 602 may also include a fingerprint reader 684 incorporated therein. [0089] As illustrated in FIG. 10, the lower housing portion 604 of the housing 602 may include an open-faced, closed-ended PCD docking pocket 690 formed in the surface thereof. In this aspect, the open-faced, closed-ended PCD docking pocket 690 may be sized and shaped to receive a correspondingly sized and shaped PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4. The open-faced, closed-ended PCD docking pocket 690 may be a depression or hole formed in the lower housing portion 604 of the housing 602. As shown, the open-faced, closed-ended PCD docking pocket 690 may be an open space, or a volume, formed within a left side wall 692, a right side wall 694, a rear side wall 696, a front side wall 698, and a bottom surface 700. [0090] FIG. 10 indicates that the open-faced, closed-ended PCD docking pocket 690 may include a multi-pin connector array 702. The multi-pin connector array 702 may be formed in, extend from (or a combination thereof), one of the side walls 692, 694, 696, 698. In the aspect as shown in FIG. 10, the multi-pin connector 702 may extend from the left side wall 692 of the open-faced, closed-ended PCD docking pocket 690. 
The multi-pin connector array 702 may be sized and shaped to removably engage a correspondingly sized and shaped multi-pin connector array, e.g., the multi-pin connector array 130 illustrated in FIG. 3, the multi-pin connector array 132 illustrated in WO 2010/110961 PCT/US2010/024439 FIG. 4, a combination thereof, or some other type of multi-pin connector array known in the art. [0091] As shown in FIG. 10 and FIG. 11, the open-faced, closed-ended PCD docket pocket 690 may also include a latch assembly 704 that extends over an edge of one of the side walls 692, 694, 696, 698. In the aspect as shown in FIG. 10 and FIG. 11, the latch assembly 704 may extend over the edge of the right side wall 694 of the open faced, closed-ended PCD docking pocket 690 opposite the left side wall 692 of the open-faced, closed-ended PCD docking pocket 690. The latch assembly 704 may be spring loaded and slidably disposed in the surface of the lower housing portion 604 of the housing 602. In the aspect as shown, the latch assembly 704 may be moved in a direction, e.g., to the right, in order to allow a PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4, to be inserted into the open-faced, closed-ended PCD docking pocket 690. Thereafter, when released, the latch assembly 704 may move in the opposite direction, e.g., to the left. The latch assembly 704 may then engage an upper surface of the PCD 100 in order to maintain the PCD 100 within the PCD docking pocket 690. FIG. 11 illustrates the PCD 100 engaged with the PCD docking station 600. [0092] As shown in FIG. 11, the PCD 100 may be installed within the open-faced, closed-ended docking pocket 690 as described herein. Depending on the orientation of the multi-pin connector array 702, the PCD 100 may be installed face up or face down within the open-faced, closed-ended docking pocket 690. When the PCD 100 is installed within the docking pocket 690, the multi-pin connector array 130 of the PCD 100 may be engaged with the multi-pin connector array 702 formed in the open-faced, closed-ended docking pocket 690. Further, when the PCD 100 is installed face up within the docking pocket 690, the display 670 within the PCD docking station 600 may operate as a primary display and the PCD 100 may operate as a secondary display. [0093] For example, an executing application may be displayed on the primary display and one or more commands may be displayed on the secondary display. In another aspect, in a video mode, video may be displayed on the primary display and a video list and one or more video controls may be displayed on the secondary display. In yet another aspect, in an audio player mode, album art may be displayed on the primary display and one or more audio controls may be displayed in the secondary display. [0094] In a phone mode, a contacts list, a call history, a caller photo, a call number, or a combination thereof may be displayed on the primary display and a numeric keypad may be displayed on the secondary display. When a call occurs, an application 16 WO 2010/110961 PCT/US2010/024439 manager, e.g., within the PCD 100 may switch from the current application displayed on the secondary display to a phone application displayed on the secondary display. The call may be answered through the PCD 100 by undocking the PCD 100. Alternatively, the call may be answered through the PCD docking station 600, e.g., through the speakers 680, 682 and a microphone connected to the PCD docking station. 
Moreover, the call may be answered through a headset, e.g., a Bluetooth headset coupled to the PCD 100. [0095] In yet another aspect, in an email application, a current email may be displayed on the primary display and a list of other emails may be displayed on the secondary display. In a game application, the executing game may be displayed on the primary display and the game controls may be displayed on the secondary display. [0096] It may be appreciated that when the PCD 100 is docked with the PCD docking station 600 the combination may be considered a mobile computing device (MCD), e.g., a laptop computing device. Further, the combination of the PCD 100 and the PCD docking station 600 is portable and the housing 602 of the PCD docking station 600 may be closed while the PCD 100 is docked with the PCD docking station 600. Also, the PCD docking station 600 may include a switch, e.g., a push button switch, within the open-faced, closed-ended docking pocket 690. When the PCD 100 is installed within the open-faced, closed-ended docking pocket 690, the PCD 100 can close the switch and cause the PCD docking station 600 to be powered on, e.g., energized. When the PCD 100 is ejected, or otherwise removed, from the open-faced, closed-ended docking pocket 690, the PCD docking station 600 may be powered off. In another aspect, simply engaging the PCD 100 with the multi-pin connector array 702 may cause the PCD docking station 600 to be powered on. Disengaging the PCD 100 from the multi-pin connector array 702 may cause the PCD docking station 600 to be powered off. [0097] Referring now to FIG. 12 through FIG. 15, a second aspect of a PCD docking station is shown and is generally designated 1200. In general, the PCD docking station 1200 shown in FIG. 12 through FIG. 15 is configured in a manner similar to the PCD docking station 600 described in conjunction with FIG. 6 through FIG. 11. However, the PCD docking station 1200 shown in FIG. 12 through FIG. 15 does not include a open-faced, closed-ended PCD docking pocket 690 (FIG. 10). [0098] As illustrated in FIG. 13 and FIG. 14, the PCD docking station 1200 may include a housing 1202 having a lower housing portion 1204 and an upper housing 17 WO 2010/110961 PCT/US2010/024439 portion 1206. In this aspect, the lower housing portion 1204 may include an open faced, open-ended PCD docking pocket 1210 formed therein. The open-faced, open ended PCD docking pocket 1210 may be sized and shaped to receive a correspondingly sized and shaped PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4. The open faced, open-ended PCD docking pocket 1210 may be a depression or hole formed in the lower housing portion 1204 of the housing 1202. As shown, the open-faced, open ended PCD docking pocket 1210 may be an open space, or a volume, formed within a left side wall 1212, a rear side wall 1214, a front side wall 1216, and a bottom surface 1218. Further, the open-faced, open-ended PCD docking pocket 1210 is open on one side, e.g., the right side, in order to allow a PCD to be slid, or otherwise moved, into the open-faced, open-ended PCD docking pocket 1210. [0099] FIG. 12 through FIG. 14 indicate that the open-faced, open-ended PCD docking pocket 1210 may include a multi-pin connector array 1222. The multi-pin connector array 1222 may be formed in, extend from (or a combination thereof), one of the side walls 1212, 1214, 1216. In the aspect as shown in FIG. 12 through FIG. 
14, the multi-pin connector 1222 may extend from the left side wall 1212 of the open-faced, open-ended PCD docking pocket 1210. The multi-pin connector array 1222 may be sized and shaped to removably engage a correspondingly sized and shaped multi-pin connector array, e.g., the multi-pin connector array 130 illustrated in FIG. 3, the multi pin connector array 132 illustrated in FIG. 4, a combination thereof, or some other type of multi-pin connector array known in the art. [00100] As shown in FIG. 14 and FIG. 15, a PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4, may be slid into the open-faced, open-ended PCD docking pocket 1210 from the open, right side of the open-faced, open-ended PCD docking pocket 1210. The PCD may be moved to the left until a multi-pin connector array on the PCD engages the multi-pin connector array 1222 that extends into the open-faced, open-ended PCD docking pocket 1210. When fully engaged with the open-faced, open-ended PCD docking pocket 1210, as depicted in FIG. 15, a touchscreen display within the PCD may be accessible to the user. [00101] Depending on the orientation of the multi-pin connector array 1222, the PCD 100 may be installed face up or face down within the open-faced, open-ended docking pocket 1210. When the PCD 100 is installed face up within the docking pocket 1210, the display within the PCD docking station 1200 may operate as a primary display and the PCD 100 may operate as a secondary display. 18 WO 2010/110961 PCT/US2010/024439 [00102] It may be appreciated that when the PCD 100 is docked with the PCD docking station 1200 the combination may be considered a mobile computing device (MCD), e.g., a laptop computing device. Further, the combination of the PCD 100 and the PCD docking station 1200 is portable and the housing 1202 of the PCD docking station 1200 may be closed while the PCD 100 is docked with the PCD docking station 1200. Also, the PCD docking station 1200 may include a switch, e.g., a push button switch, within the open-faced, open-ended docking pocket 1210. When the PCD 100 is installed within the open-faced, open-ended docking pocket 1210, the PCD 100 can close the switch and cause the PCD docking station 1200 to be powered on, e.g., energized. When the PCD 100 is ejected, or otherwise removed, from the open-faced, open-ended docking pocket 1210, the PCD docking station 1200 may be powered off. In another aspect, simply engaging the PCD 100 with the multi-pin connector array 1222 may cause the PCD docking station 1200 to be powered on. Disengaging the PCD 100 from the multi-pin connector array 1222 may cause the PCD docking station 1200 to be powered off. [00103] FIG. 16 and FIG. 17, illustrate a third aspect of a PCD docking station, generally designated 1600. In general, the PCD docking station 1600 shown in FIG. 16 and FIG. 17 is configured in a manner similar to the PCD docking station 600 described in conjunction with FIG. 6 through FIG. 11. However, the PCD docking station 1600 shown in FIG. 16 and FIG. 17 does not include a open-faced, closed-ended PCD docking pocket 690 (FIG. 10). [00104] As illustrated in FIG. 16 and FIG. 17, the PCD docking station 1600 may include a housing 1602 having a lower housing portion 1604 and an upper housing portion 1606. In this aspect, the lower housing portion 1604 may include a closed faced, open-ended PCD docking pocket 1610 formed therein. 
The closed-faced, open ended PCD docking pocket 1610 may be sized and shaped to receive a correspondingly sized and shaped PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4. The closed faced, open-ended PCD docking pocket 1610 may be a depression or hole formed in the lower housing portion 1604 of the housing 1602. As shown, the closed-faced, open ended PCD docking pocket 1610 may be an open space, or a volume, formed within a left side wall 1612, a rear side wall 1614, a front side wall 1616, a bottom surface 1618, and a top surface 1620. Further, the closed-faced, open-ended PCD docking pocket 1610 may be open on one side, e.g., the right side, in order to allow a PCD to be slid, or otherwise moved, into the closed-faced, open-ended PCD docking pocket 1610. 19 WO 2010/110961 PCT/US2010/024439 [00105] FIG. 16 and FIG. 17 indicate that the closed-faced, open-ended PCD docking pocket 1610 may include a multi-pin connector array 1622. The multi-pin connector array 1622 may be formed in, extend from (or a combination thereof), one of the side walls 1612, 1614, 1616. In the aspect as shown in FIG. 16 and FIG. 17, the multi-pin connector 1622 may extend from the left side wall 1612 of the closed-faced, open-ended PCD docking pocket 1610. The multi-pin connector array 1622 may be sized and shaped to removably engage a correspondingly sized and shaped multi-pin connector array, e.g., the multi-pin connector array 130 illustrated in FIG. 3, the multi-pin connector array 132 illustrated in FIG. 4, a combination thereof, or some other type of multi-pin connector array known in the art. [00106] As shown in FIG. 17, a PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4, may be slid into the closed-faced, open-ended PCD docking pocket 1610 from the open, right side of the closed-faced, open-ended PCD docking pocket 1610. The PCD 100 may be moved to the left until a multi-pin connector array on the PCD 100 engages the multi-pin connector array 1622 that extends into the closed-faced, open-ended PCD docking pocket 1610. When fully engaged with the closed-faced, open-ended PCD docking pocket 1610, the PCD 100 may not be accessible to the user. [00107] As shown in FIG. 16, the PCD docking station 1600 may further include an eject button 1624. When the eject button 1624 is pressed, the PCD 100 may be ejected from the PCD docking pocket 1610 and the PCD docking station 1600 for retrieval by a user. Depending on the orientation of the multi-pin connector array 1622, the PCD 100 may be installed face up or face down within the closed-faced, open-ended docking pocket 1610. When the PCD 100 is installed within the docking pocket 1610, the multi pin connector array 130 of the PCD 100 may be engaged with the multi-pin connector array 1622 formed in the closed-faced, open-ended docking pocket 1610. [00108] It may be appreciated that when the PCD 100 is docked with the PCD docking station 1600 the combination may be considered a mobile computing device (MCD), e.g., a laptop computing device. Further, the combination of the PCD 100 and the PCD docking station 1600 is portable and the housing 1602 of the PCD docking station 1600 may be closed while the PCD 100 is docked with the PCD docking station 1600. Also, the PCD docking station 1600 may include a switch, e.g., a push button switch, within the closed-faced, open-ended docking pocket 1610. 
When the PCD 100 is installed within the closed-faced, open-ended docking pocket 1610, the PCD 100 can close the switch and cause the PCD docking station 1600 to be powered on, e.g., energized. WO 2010/110961 PCT/US2010/024439 When the PCD 100 is ejected, or otherwise removed, from the closed-faced, open-ended docking pocket 1610, the PCD docking station 1600 may be powered off. In another aspect, simply engaging the PCD 100 with the multi-pin connector array 1622 may cause the PCD docking station 1600 to be powered on. Disengaging the PCD 100 from the multi-pin connector array 1622 may cause the PCD docking station 1600 to be powered off. [00109] Referring to FIG. 18 through FIG. 22, a fourth aspect of a PCD docking station is shown and is generally designated 1800. In general, the PCD docking station 1800 shown in FIG. 18 through FIG. 22 is configured in a manner similar to the PCD docking station 600 described in conjunction with FIG. 6 through FIG. 11. However, the PCD docking station 1800 shown in FIG. 18 through FIG. 22 does not include a open-faced, closed-ended PCD docking pocket 690 (FIG. 10). [00110] As illustrated in FIG. 18 through FIG. 22, the PCD docking station 1800 may include a housing 1802 having a lower housing portion 1804 and an upper housing portion 1806. In this aspect, the lower housing portion 1804 may include a PCD docking tray 1810 extending therefrom. In particular, the PCD docking tray 1810 may be slidably engaged with the lower housing portion 1804 of the PCD docking station 1800. The PCD docking tray 1810 may extend from a side of the lower housing portion 1804, e.g., a left side, a right side, or a front side. In a particular aspect, as shown, the PCD docking tray 1810 may extend outwardly from the right side of the lower housing portion 1804 of the PCD docking station 1800. Further, the PCD docking tray 1810 may be movable between an open position, or extended position, in which the PCD docking tray 1810 is extended from the PCD docking station 1800 and a closed position, or retracted position, in which the PCD is retracted into the PCD docking station 1800. [00111] The PCD docking tray 1810 may include a generally flat, generally rectangular support plate 1812 having a proximal end 1814 and a distal end 1816. A face plate 1818 may be attached to, or formed with, the distal end 1816 of the support plate 1812. As shown, in a particular aspect, the face plate 1818 may be perpendicular to the support plate 1812. FIG. 19 and FIG. 20 further show that the PCD docking tray 1810 may be formed with a central opening 1820. In a particular aspect, the central opening 1820 may be generally rectangular and may be oriented so that a long axis of the central opening 1820 is substantially parallel to the proximal end 1814 and the distal end 1816 of the support plate 1812. 21 WO 2010/110961 PCT/US2010/024439 [00112] As shown, the PCD docking tray 1810 may also include a support arm 1822 that is sized and shaped to fit into the central opening 1820 formed in the support plate 1812. The support arm 1822 may be generally rectangular and may include a proximal end 1824 and a distal end 1826. The proximal end 1824 of the support arm 1822 may be connected to the support plate 1812 via a rod or pin (not shown) that passes through the proximal end 1824 of the support arm 1822 and into the support plate 1812 on each side of the central opening 1820 flanking the support arm 1822. 
[00113] Further, as depicted, the support plate 1812 may include a multi-pin connector array 1828 adjacent to the central opening 1820 and the support arm 1822. In a particular aspect, the multi-pin connector array 1828 may be located adjacent to the proximal end 1824 of the support arm 1822. The multi-pin connector array 1828 may be sized and shaped to removably engage a correspondingly sized and shaped multi-pin connector array on a PCD, e.g., the multi-pin connector array 130 illustrated in FIG. 3, the multi-pin connector array 132 illustrated in FIG. 4, a combination thereof, or some other type of multi-pin connector array known in the art. [00114] In a particular aspect, the PCD docking tray 1810 is movable between an open position, shown in FIG. 19, in which the PCD docking tray 1810 extends fully from within the housing 1802, and a closed position in which the PCD docking tray 1810 is retracted into the housing 1802. In the closed position, the face plate 1818 of the PCD docking tray 1810 may be flush with the side of the housing 1802. [00115] Moreover, in a particular aspect, the support arm 1822 may pivot within the central opening 1820 of the support plate 1812 between a first position and a second position. In the first position, shown in FIG. 19, in which the support arm 1822 fits into the central opening 1820 of the support plate 1812 and the support arm 1822 is flush with the support plate 1812, i.e., an upper surface of the support arm 1822 is even with an upper surface of the support plate 1812, a lower surface of the support arm 1822 is even with a lower surface of the support plate 1812, or a combination thereof. [00116] In the second position, the support arm 1822 may form an angle with respect to the support plate 1812. In a particular aspect, the support arm 1822, the support plate 1812, or a combination thereof may include a detent (not shown), spring (not shown), or other similar mechanism to hold the support arm 1822 in the second position. By applying pressure on the distal end 1826 of the support arm 1822 the force of detent, or spring, may be overcome and the support arm 1822 may be returned to the first position. 22 WO 2010/110961 PCT/US2010/024439 [00117] As shown in FIG. 21 and FIG. 22, in the second position, a PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4 may rest on the support arm 1822 and a multi pin connector array on the PCD 100 may engage the multi-pin connector array 1828 on the PCD docking tray 1810. The support arm 1822 may support the PCD 100 at an angle to facilitate viewing of the PCD 100 during operation of the PCD 100 and the PCD docking station 1800. [00118] In a particular aspect, as shown in FIG. 18, the PCD docking station 1800 may further include an eject button 1830. The eject button 1830 may be incorporated into the PCD docking tray 1810. Alternatively, the eject button 1830 may be incorporated into the PCD docking station 1800 adjacent to the PCD docking tray 1810. When the eject button 1830 is pressed, the PCD docking tray 1810 may be moved from the closed position to the open position. In the open position, the PCD 100 may be docked with and supported by the PCD docking tray 1810. [00119] When the PCD 100 is engaged within the PCD docking tray 1810, the display within the PCD docking station 1800 may operate as a primary display and the PCD 100 may operate as a secondary display. 
[00120] It may be appreciated that when the PCD 100 is docked with the PCD docking station 1800 the combination may be considered a mobile computing device (MCD), e.g., a laptop computing device. Further, the combination of the PCD 100 and the PCD docking station 1800 is portable. [00121] Referring to FIG. 23 through FIG. 25, a fifth aspect of a PCD docking station is shown and is generally designated 2300. In general, the PCD docking station 2300 shown in FIG. 23 through FIG. 25 is configured in a manner similar to the PCD docking station 600 described in conjunction with FIG. 6 through FIG. 11. However, the PCD docking station 2300 shown in FIG. 23 through FIG. 25 does not include a open-faced, closed-ended PCD docking pocket 690 (FIG. 10). [00122] As illustrated in FIG. 23 through FIG. 25, the PCD docking station 2300 may include a housing 2302 having a lower housing portion 2304 and an upper housing portion 2306. In this aspect, the upper housing portion 2306 may include a PCD docking tray 2310 extending therefrom. In particular, the PCD docking tray 2310 may be slidably engaged with the upper housing portion 2306 of the PCD docking station 2300. The PCD docking tray 2310 may extend from a side of the upper housing portion 2306, e.g., a left side, a right side, or a front side (i.e., a top side when the upper housing portion 2306 is open). In a particular aspect, as shown, the PCD docking tray 2310 may 23 WO 2010/110961 PCT/US2010/024439 extend outwardly from the right side of the upper housing portion 2306 of the PCD docking station 2300. [00123] The PCD docking tray 2310 may include a generally flat, generally rectangular support plate 2312 having a proximal end 2314 and a distal end 2316. A face plate 2318 may be attached to, or formed with, the distal end 2316 of the support plate 2312. In a particular aspect, the face plate 2318 may be perpendicular to the support plate 2312. FIG. 24 and FIG. 25 further show that the PCD docking tray 2310 may include a support lip 2320 formed along a bottom edge of the support plate 2312. In a particular aspect, the support lip 2320 may be generally "L" shaped and provide a pocket between the support lip 2320 and the support plate 2312 in which an end of a PCD may fit and rest during use. [00124] Further, as depicted in FIG. 23, the upper housing portion 2306 of the PCD docking station 2302 may include a multi-pin connector array 2328 adjacent to the PCD docking tray 2310. In a particular aspect, the multi-pin connector array 2328 may be located adjacent to the proximal end 2314 of the support plate 2312. The multi-pin connector array 2328 may be sized and shaped to removably engage a correspondingly sized and shaped multi-pin connector array on a PCD, e.g., the multi-pin connector array 130 illustrated in FIG. 3, the multi-pin connector array 132 illustrated in FIG. 4, a combination thereof, or some other type of multi-pin connector array known in the art. [00125] In a particular aspect, the PCD docking tray 2310 is movable between a open position, or extended position, shown in FIG. 24, in which the PCD docking tray 2310 extends fully from within the housing 2302, e.g., the upper housing portion 2306, and a closed position, or retracted position, in which the PCD docking tray 2310 is retracted into the housing 2302, e.g., the upper housing portion 2306. In the retracted position, the face plate 2318 of the PCD docking tray 2310 may be flush with the side of the upper housing portion 2306. [00126] In the extended position, as shown in FIG. 
25, the PCD 100 may rest on the PCD docking tray 2310 and a multi-pin connector array on the PCD 100 may engage the multi-pin connector array 2328 on the upper housing portion 2306. The PCD docking tray 2310 may support the PCD 100 at the same angle as the upper housing portion 2306 is relative to the lower housing portion 2304 to facilitate viewing of the PCD 100 during operation of the PCD 100 and the PCD docking station 2300. [00127] In a particular aspect, as shown in FIG. 23, the PCD docking station 2300 may further include an eject button 2330. The eject button 2330 may be incorporated into 24 WO 2010/110961 PCT/US2010/024439 the PCD docking station 2300 adjacent to the PCD docking tray 2310. Alternatively, the eject button 2330 may be incorporated into the PCD docking tray 2310. When the eject button 2330 is pressed, the PCD docking tray 2310 may be moved from the closed position to the open position. In the open position, the PCD 100 may be docked with and supported by the PCD docking tray 2310. [00128] When the PCD 100 is engaged within the PCD docking tray 2310, the display within the PCD docking station 2300 may operate as a primary display and the PCD 100 may operate as a secondary display. [00129] It may be appreciated that when the PCD 100 is docked with the PCD docking station 2300 the combination may be considered a mobile computing device (MCD), e.g., a laptop computing device. Further, the combination of the PCD 100 and the PCD docking station 2300 is portable. [00130] Referring now to FIG. 26 and FIG. 27, a sixth aspect of a PCD docking station is shown and is generally designated 2600. In general, the PCD docking station 2600 shown in FIG. 26 and FIG. 27 is configured in a manner similar to the PCD docking station 600 described in conjunction with FIG. 6 through FIG. 11. However, the PCD docking station 2600 shown in FIG. 26 and FIG. 27 does not include a touch pad mouse 674, a first mouse button 676, a second mouse button 678, or a combination thereof. [00131] As illustrated in FIG. 26 and FIG. 27, the PCD docking station 2600 may include a housing 2602 having a lower housing portion 2604 and an upper housing portion 2606. The lower housing portion 2604 of the housing 2602 may include an open-faced, closed-ended PCD docking pocket 2610 formed in the surface thereof. In this aspect, the open-faced, closed-ended PCD docking pocket 2610 may be sized and shaped to receive a correspondingly sized and shaped PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4. [00132] In a particular aspect, the open-faced, closed-ended PCD docking pocket 2610 may be a depression or hole formed in the lower housing portion 2604 of the housing 2602. As shown, the open-faced, closed-ended PCD docking pocket 2610 may be an open space, or a volume, formed within a left side wall 2612, a right side wall 2614, a rear side wall 2616, a front side wall 2618, and a bottom surface 2620. [00133] FIG. 26 indicates that the open-faced, closed-ended PCD docking pocket 2610 may include a multi-pin connector array 2622. The multi-pin connector array 2622 may be formed in, extend from (or a combination thereof), one of the side walls 2612, 2614, 2616, 2618. In the aspect as shown in FIG. 26, the multi-pin connector 2622 may WO 2010/110961 PCT/US2010/024439 extend from the left side wall 2612 of the open-faced, closed-ended PCD docking pocket 2610. 
The multi-pin connector array 2622 may be sized and shaped to removably engage a correspondingly sized and shaped multi-pin connector array, e.g., the multi-pin connector array 130 illustrated in FIG. 3, the multi-pin connector array 132 illustrated in FIG. 4, a combination thereof, or some other type of multi-pin connector array known in the art. [00134] As shown in FIG. 26 and FIG. 27, the open-faced, closed-ended PCD docking pocket 2610 may also include a latch assembly 2624 that extends over an edge of one of the side walls 2612, 2614, 2616, 2618. In the aspect as shown in FIG. 26 and FIG. 27, the latch assembly 2624 may extend over the edge of the right side wall 2614 of the open-faced, closed-ended PCD docking pocket 2610 opposite the left side wall 2612 of the open-faced, closed-ended PCD docking pocket 2610. The latch assembly 2624 may be spring loaded and slidably disposed in the surface of the lower housing portion 2604 of the housing 2602. In the aspect as shown, the latch assembly 2624 may be moved in a direction, e.g., to the right, in order to allow a PCD, e.g., the PCD 100 shown in FIG. 1 through FIG. 4, to be inserted into the open-faced, closed-ended PCD docking pocket 2610. Thereafter, when released, the latch assembly 2624 may move in the opposite direction, e.g., to the left. The latch assembly 2624 may then engage an upper surface of the PCD 100 in order to maintain the PCD 100 within the PCD docking pocket 2610. FIG. 27 illustrates the PCD 100 engaged with the PCD docking station 2600. [00135] As shown, the PCD 100 may be installed within the open-faced, closed-ended docking pocket 2610 as described herein. When the PCD 100 is installed within the docking pocket 2610, the multi-pin connector array 130 of the PCD 100 may be engaged with the multi-pin connector array 2622 formed in the open-faced, closed ended docking pocket 2610. [00136] In a particular aspect, when the PCD 100 is docked with the PCD docking station 2600, the PCD 100 may be used as a supplemental display. Further, the PCD 100 may be used as an input device, e.g., the PCD 100 may be used as a mouse pad and may include a first mouse button and a second mouse button. Also, the PCD 100 may be used as a supplemental display and as a mouse pad with corresponding mouse buttons. [00137] It may be appreciated that when the PCD 100 is docked with the PCD docking station 2600 the combination may be considered a mobile computing device (MCD), e.g., a laptop computing device. Further, the combination of the PCD 100 and the PCD 26 WO 2010/110961 PCT/US2010/024439 docking station 2600 is portable and the housing 2602 of the PCD docking station 2600 may be closed while the PCD 100 is docked with the PCD docking station 2600. Also, the PCD docking station 2600 may include a switch, e.g., a push button switch, within the open-faced, closed-ended docking pocket 2610. When the PCD 100 is installed within the open-faced, closed-ended docking pocket 2610, the PCD 100 can close the switch and cause the PCD docking station 2600 to be powered on, e.g., energized. When the PCD 100 is ejected, or otherwise removed, from the open-faced, closed-ended docking pocket 2610, the PCD docking station 2600 may be powered off. In another aspect, simply engaging the PCD 100 with the multi-pin connector array 2622 may cause the PCD docking station 2600 to be powered on. Disengaging the PCD 100 from the multi-pin connector array 2622 may cause the PCD docking station 2600 to be powered off. [00138] FIG. 
28 depicts a first aspect of a PCD system, generally designated 2800. As shown, the PCD system 2800 may include a PCD 2802 and a PCD docking station 2804. In a particular aspect, the PCD 2802 may be removably engaged with the PCD docking station 2804 via a dock connector 2806. The dock connector 2806 may provide electronic connectivity between one or more components within the PCD 2802 and one or more components within the PCD docking station 2804. Additionally, the dock connector 2806 may be a multi-pin dock connector 2806. Further, the dock connector 2806 may be one of the multi-pin connector arrays described herein. [00139] As shown in FIG. 28, the PCD 2802 may include a printed circuit board (PCB) 2808 that may include the PCD electronic components. The PCD electronic components may be packaged as a system-on-chip (SOC) or some other appropriate device that integrates and connects the electronic components in order to control the PCD 2802. The PCB 2808 may include one or more of the components described in conjunction with FIG. 5. A battery 2810 may be coupled to the PCB 2808. [00140] FIG. 28 indicates that the PCD docking station 2804 may include a battery 2820 connected to the dock connector 2806. A power management module 2822 may be connected to the battery 2820. Further, an alternating current (AC) power connection 2824 may be connected to the power management module 2822. The AC power connection 2824 may be connected to an AC power source (not shown). [00141] FIG. 28 further shows that a first universal serial bus-high speed (USB-HS) port 2838 may be connected to the dock connector 2806. A first USB connector 2840 may be connected to the first USB-HS port 2838. As depicted in FIG. 28, the PCD 27 WO 2010/110961 PCT/US2010/024439 docking station 2804 may also include a second USB-HS port 2848. A keyboard 2856 may be connected to the second USB-HS port 2838. In particular, the keyboard 2856 may be a keyboard/ touchpad combination. [00142] FIG. 28 indicates that the PCD docking station 2804 may also include a display 2860 connected to the dock connector 2806. As shown, the dock connector 2806 may be further connected to a ground connection 2868. [00143] In a particular aspect, the dock connector 2806 may include forty-four (44) pins. For example, the dock connector 2806 may include eight (8) pins for the battery 2820, four (4) pins for the first USB-HS port 2838, four (4) pins for the second USB-HS port 2848, twenty (20) pins for the display 2860, and eight (8) pins for the ground connection 2868. [00144] Referring to FIG. 29, a second aspect of a PCD system is shown and is generally designated 2900. As shown, the PCD system 2900 may include a PCD 2902 and a PCD docking station 2904. In a particular aspect, the PCD 2902 may be removably engaged with the PCD docking station 2904 via a dock connector 2906. The dock connector 2906 may provide electronic connectivity between one or more components within the PCD 2902 and one or more components within the PCD docking station 2904. [00145] As shown in FIG. 29, the PCD 2902 may include a printed circuit board (PCB) 2908 that may include the PCD electronic components. The PCD electronic components may be packaged as a system-on-chip (SOC) or some other appropriate device that integrates and connects the electronic components in order to control the PCD 2802. Further, the PCB 2908 may include one or more of the components described in conjunction with FIG. 5. A battery 2910 may be coupled to the PCB 2908. [00146] FIG. 
29 indicates that the PCD docking station 2904 may include a battery 2920 connected to the dock connector 2906. A power management module 2922 may be connected to the battery 2920. Further, an alternating current (AC) power connection 2924 may be connected to the power management module 2922. The AC power connection 2924 may be connected to an AC power source (not shown). An audio input/output (I/O) 2926 may be connected to the dock connector 2906 and one or more speakers 2928 may be connected to the audio I/O 2926. [00147] As illustrated, a Gigabit Ethernet Media Access Controller (GbE MAC) 2934 may also be connected to the dock connector 2906. An Ethernet port 2936 may be 28 WO 2010/110961 PCT/US2010/024439 connected to the GbE MAC 2934. In a particular aspect, the Ethernet port 2936 may be an RJ45 jack. [00148] FIG. 29 further shows that a first universal serial bus-high speed (USB-HS) port 2938 may be connected to the dock connector 2906. A first USB connector 2942 may be connected to the first USB-HS port 2938. As depicted in FIG. 29, the PCD docking station 2904 may also include a second USB-HS port 2948. A second USB connector 2950 may be connected to the second USB-HS port 2948. Moreover, as depicted, a third USB-HS port 2954 may be connected to the dock connector 2906. A keyboard 2956 may be connected to the third USB-HS port 2954. In particular, the keyboard 2956 may be a keyboard/ touchpad combination. [00149] FIG. 29 indicates that the PCD docking station 2904 may also include a display 2960. Additionally, the PCD docking station 2904 may include an RGB(A) connector 2962 coupled to the dock connector 2906. A D-sub connector 2964 may be connected to the RGB(A) connector 2962. As shown, the dock connector 2906 may be connected to a ground connection 2968. [00150] In a particular aspect, the dock connector 2906 may include one hundred nineteen (119) pins. For example, the dock connector 2906 may include ten (10) pins for the battery 2920, three (3) pins for the audio I/O 2926, thirty-six (36) pins for the GbE MAC 2934, four (4) pins for the first USB-HS port 2938, four (4) pins for the second USB-HS port 2948, four (4) pins for the third USB-HS port 2954, twenty (20) pins for the display 2960, twenty-eight (28) pins for the RGB(A) connector 2962, and ten (10) pins for the ground connection 2968. [00151] FIG. 30 illustrates a third aspect of a PCD system, generally designated 3000. As shown, the PCD system 3000 may include a PCD 3002 and a PCD docking station 3004. In a particular aspect, the PCD 3002 may be removably engaged with the PCD docking station 3004 via a dock connector 3006. The dock connector 3006 may provide electronic connectivity between one or more components within the PCD 3002 and one or more components within the PCD docking station 3004. [00152] As shown in FIG. 30, the PCD 3002 may include a printed circuit board (PCB) 3008 that may include the PCD electronic components. The PCD electronic components may be packaged as a system-on-chip (SOC) or some other appropriate device that integrates and connects the electronic components in order to control the PCD 3002. Further, the PCB 3008 may include one or more of the components described in conjunction with FIG. 5. A battery 3010 may be coupled to the PCB 3008. 29 WO 2010/110961 PCT/US2010/024439 [00153] FIG. 30 indicates that the PCD docking station 3004 may include a battery 3020 connected to the dock connector 3006. A power management module 3022 may be connected to the battery 3020. 
Further, an alternating current (AC) power connection 3024 may be connected to the power management module 3022. The AC power connection 3024 may be connected to an AC power source (not shown). An audio input/output (I/O) 3026 may be connected to the dock connector 3006 and one or more speakers 3028 may be connected to the audio I/O 3026. [00154] As further illustrated in FIG. 30, a mobile display digital interface (MDDI) 3030 may be connected to the dock connector 3006. A camera 3032 may be connected to the MDDI 3030. Further, a Gigabit Ethernet Media Access Controller (GbE MAC) 3034 may also be connected to the dock connector. An Ethernet port 3036 may be connected to the GbE MAC 3034. In a particular aspect, the Ethernet port 3036 may be an RJ45 jack. [00155] FIG. 30 further shows that a first universal serial bus-high speed (USB-HS) port 3038 may be connected to the dock connector 3006. A USB hub 3040 may be connected to the first USB-HS port 3038. A first USB connector 3042 and a second USB connector 3044 may be connected to the USB hub 3040. Additionally, a keyboard 3046 may be connected to the USB hub 3040. In particular, the keyboard 3046 may be a keyboard/ touchpad combination. [00156] As depicted in FIG. 30, the PCD docking station 3004 may also include a second USB-HS port 3048. A first serial advanced technology attachment (SATA) to USB converter 3050 may be connected to the second USB-HS port 3048. A digital video disk (DVD) drive 3052 may be connected to the first SATA-USB converter 3050. Further, the PCD docking station 3004 may include a third USB-HS port 3054. A second SATA-USB converter 3056 may be connected to the third USB-HS port 3054 and a hard disk drive (HDD) 3058 may be connected to the third USB-HS port 3054. [00157] FIG. 30 indicates that the PCD docking station 3004 may also include a display 3060. Additionally, the PCD docking station 3004 may include an RGB(A) connector 3062 coupled to the dock connector 3006. A D-sub connector 3064 may be connected to the RGB(A) connector 3062. As shown, the dock connector 3006 may be connected to a ground connection 3068. [00158] In a particular aspect, the dock connector 3006 may include one hundred twenty-seven (127) pins. For example, the dock connector 3006 may include ten (10) pins for the battery 3020, five (5) pins for the audio I/O 3026, six (6) pins for the MDDI WO 2010/110961 PCT/US2010/024439 3030, thirty-six (36) pins for the GbE MAC 3034, four (4) pins for the first USB-HS port 3038, four (4) pins for the second USB-HS port 3048, four (4) pins for the third USB-HS port 3054, twenty (20) pins for the display 3060, twenty-eight (28) pins for the RGB(A) connector 3062, and ten (10) pins for the ground connection 3068. The dock connector 3006 may also include an additional three (3) pins for the SATA 3050 connected to the second USB-HS port 3048. [00159] Referring now to FIG. 31, a fourth aspect of a PCD system is shown and is generally designated 3100. As shown, the PCD system 3100 may include a PCD 3102 and a PCD docking station 3104. In a particular aspect, the PCD 3102 may be removably engaged with the PCD docking station 3104 via a dock connector 3106. The dock connector 3106 may provide electronic connectivity between one or more components within the PCD 3102 and one or more components within the PCD docking station 3104. [00160] As shown in FIG. 31, the PCD 3102 may include a printed circuit board (PCB) 3108 that may include the PCD electronic components. 
The PCD electronic components may be packaged as a system-on-chip (SOC) or some other appropriate device that integrates and connects the electronic components in order to control the PCD 3102. Further, the PCB 3108 may include one or more of the components described in conjunction with FIG. 5. A battery 3110 may be coupled to the PCB 3108. [00161] FIG. 31 indicates that the PCD docking station 3104 may include a battery 3120 connected to the dock connector 3106. A power management module 3122 may be connected to the battery 3120. Further, an alternating current (AC) power connection 3124 may be connected to the power management module 3122. The AC power connection 3124 may be connected to an AC power source (not shown). An audio input/output (I/O) 3126 may be connected to the dock connector 3106 and one or more speakers 3128 may be connected to the audio I/O 3126. [00162] As further illustrated in FIG. 31, a mobile display digital interface (MDDI) 3130 may be connected to the dock connector 3106. A camera 3132 may be connected to the MDDI 3130. Further, a Gigabit Ethernet Media Access Controller (GbE MAC) 3134 may also be connected to the dock connector. An Ethernet port 3136 may be connected to the GbE MAC 3134. In a particular aspect, the Ethernet port 3136 may be an RJ45 jack. [00163] FIG. 31 further shows that a first universal serial bus-high speed (USB-HS) port 3138 may be connected to the dock connector 3106. A USB hub 3140 may be 31 WO 2010/110961 PCT/US2010/024439 connected to the first USB-HS port 3138. A first USB connector 3142 and a second USB connector 3144 may be connected to the USB hub 3140. Additionally, a keyboard 3146 may be connected to the USB hub 3140. In particular, the keyboard 3146 may be a keyboard/ touchpad combination. [00164] As depicted in FIG. 31, the PCD docking station 3104 may also include a second USB-HS port 3148. A first serial advanced technology attachment (SATA) to USB converter 3150 may be connected to the second USB-HS port 3148. A digital video disk (DVD) drive 3152 may be connected to the first SATA-USB converter 3150. Further, the PCD docking station 3104 may include a third USB-HS port 3154. A second SATA-USB converter 3156 may be connected to the third USB-HS port 3154 and a hard disk drive (HDD) 3158 may be connected to the third USB-HS port 3154. [00165] FIG. 31 indicates that the PCD docking station 3104 may also include a display 3160. Additionally, the PCD docking station 3104 may include an RGB(A) connector 3162 coupled to the dock connector 3106. A D-sub connector 3164 may be connected to the RGB(A) connector 3162. A high-definition multimedia interface (HDMI) 3166 may also be connected to the dock connector 3106. As shown, the dock connector 3106 may be connected to a ground connection 3168. [00166] In a particular aspect, the dock connector 3106 may include one hundred forty six (146) pins. For example, the dock connector 3106 may include ten (10) pins for the battery 3120, five (5) pins for the audio I/O 3126, six (6) pins for the MDDI 3130, thirty-six (36) pins for the GbE MAC 3134, four (4) pins for the first USB-HS port 3138, four (4) pins for the second USB-HS port 3148, four (4) pins for the third USB HS port 3154, twenty (20) pins for the display 3160, twenty-eight (28) pins for the RGB(A) connector 3162, nineteen (19) pins for the HDMI 3166, and ten (10) pins for the ground connection 3168. The dock connector 3106 may also include an additional three (3) pins for the SATA 3150 connected to the second USB-HS port 3148. 
[00167] Referring to FIG. 32, a PCD processor system is shown and is generally designated 3200. As shown, the PCD processor system 3200 may include a first core processor 3202, a second core processor 3204, a third core processor 3206, and a fourth core processor 3208. Further, the PCD processor system 3200 may include a 32-bit processor 3210, e.g., an ARM 11 processor. [00168] As shown, one or more hardware peripherals 3212 may be connected to the first core processor 3202, the second core processor 3204, the third core processor 3206, the fourth core processor 3208, the 32-bit processor 3210, or a combination thereof. In 32 WO 2010/110961 PCT/US2010/024439 a particular aspect, a process monitor and load leveler 3214 may be connected to the first core processor 3202, the second core processor 3204, the third core processor 3206, and the fourth core processor 3208. As described herein, the process monitor and load leveler 3214 may act as a processor manager to turn the core processors 3202, 3204, 3206, 3208 on and off depending on operational requirements, whether a PCD is docked, whether a PCD is undocked or a combination thereof. The process monitor and load leveler 3214 may act as a means for executing one or more of the method steps described herein. [00169] FIG. 32 further indicates that a first process 3216 and a second process 3218 may be executed by the 32-bit processor 3210. A third process 3220, a fourth process 3222, a fifth process 3224, a sixth process 3226, a seventh process 3228, and an Nth process 3230 may be executed by the first core processor 3202, the second core processor 3204, the third core processor 3206, the fourth core processor 3208, or a combination thereof via the process monitor and load leveler 3214. [00170] The PCD processor system 3200 may further include a modem real-time operating system (RTOS) 3232 that may operate above the first process 3216 and the second process 3218. An application RTOS 3234 may operate above the third process 3220, the fourth process 3222, the fifth process 3224, the sixth process 3226, the seventh process 3228, and the Nth process 3230. In a particular aspect, the application RTOS may be an RTOS provided by LinuxTM. A plurality of applications 3236 may be executed by the modem RTOS 3232 and the application RTOS 3234. [00171] Referring now to FIG. 33, a wireless PCD/PCD docking station system is shown and is generally designated 3300. As shown, the system 3300 may include a PCD 3302 and a PCD docking station 3304. When docked, or placed near the PCD docking station 3304, the PCD 3302 may be wirelessly connected to the PCD docking station 3304 via a wireless dock connection 3306. [00172] As shown in FIG. 33, the PCD 3302 may include a system-on-chip (SOC) 3310. A wireless connection module 3312 may be connected to the SOC 3310. Further, a near field communication (NFC) transceiver 3314 and a battery 3316 may be connected to the SOC 3310. A pair of electrical contacts 3318 may be connected to the battery 3316. [00173] The PCD docking station 3304 may include a battery 3320. A power management module 3322 may be connected to the battery. Further, an alternating current (AC) power connection 3324 may be connected to the power management 33 WO 2010/110961 PCT/US2010/024439 module 2822. The AC power connection 3324 may be connected to an AC power source (not shown). As shown, a pair of electrical contacts 3326 may be connected to the battery 3320 within the PCD docking station 3304. [00174] In a particular aspect, as indicated in FIG. 
33, a wireless connection module 3328 may be connected to the battery 3320. Further, an NFC transceiver 3330 and a switch 3332 may be connected to the battery 3320. The switch 3332 may be a push button switch or some other type of switch. The PCD docking station 3304 may include one or more of the components described in conjunction with the PCD docking stations 2804, 2904, 3004, 3104 illustrated in FIG. 28 through FIG. 31. [00175] In a particular aspect, when the PCD 3302 is placed near the PCD docking station 3304, or docked with the PCD docking station 3304, the PCD 3302 and the PCD docking station 3304 may communicate with each other and transfer information there between via the wireless dock connection 3306 established by the wireless connection module 3312 within the PCD 3302 and the wireless connection module 3328 within the PCD docking station 3304. Further, in a particular aspect, the NFC transceiver 3314 within the PCD 3302 may communicate with the NFC transceiver 3330 in order to energize the PCD docking station 3304 and the components therein, e.g., the wireless connection module 3328. Once the PCD docking station 3304 is energized, data transfer may occur between the PCD 3302 and the PCD docking station 3304. [00176] In another aspect, when the PCD 3302 is docked with the PCD docking station 3304, as described herein, the PCD 3302 may toggle, or otherwise press, the switch 3332 on the PCD docking station 3304 in order to energize the PCD docking station 3304. Once the PCD docking station 3304 is energized, data transfer may occur between the PCD 3302 and the PCD docking station 3304. [00177] In a particular aspect, the PCD 3302 may be used as a handheld controller for a video system. One or more video controls may be displayed at the PCD 3302 and associated video may be displayed at the PCD docking station 3304. A user may wirelessly control the operation of the video displayed at the PCD docking station 3304 using the controls presented at the PCD 3302. Further, the PCD 3302 may be used as a handheld controller for an audio system. One or more audio controls may be displayed at the PCD 3302 and associated audio may be broadcast at the PCD docking station 3304. A user may wirelessly control the operation of the audio broadcast by the PCD docking station 3304 using the controls presented at the PCD 3302. In another aspect, the PCD 3302 may be used as a handheld controller for a gaming system. On or more 34 WO 2010/110961 PCT/US2010/024439 game controls may be displayed at the PCD 3302 and associated game content may be presented at the PCD docking station 3304. A user may wireless control the operation of the game presented at the PCD docking station 3304 using the controls presented at the PCD 3302. [00178] FIG. 34 illustrates a particular aspect of a wireless connection module, generally designated 3400. As shown, the wireless connection module 3400 may include a mobile station modem (MSM) 3402. The MSM 3402 may include a connection manager 3406. Further, the MSM 3402 may include one or more connection profiles 3408, e.g., one or more Bluetooth (BT) connection profiles. As shown, the MSM 3402 may include an application (APP) layer 3410. [00179] FIG. 34 also indicates that the MSM 3402 may include an operating system (OS) 3412, a logical link control and adaptation protocol (L2CAP) stack 3414, and a transmission control protocol/user datagram protocol (TCP/UDP) stack 3416. 
Further, the MSM 3402 may include an alternate MAC/PHY (AMP) manager 3418 and an internet protocol (IP) stack 3420. The MSM 3402 may also include a protocol adaptation layer/logical link control layer (PAL/LLC) processing unit 3422. Further, the MSM 3402 may include a first upper media access control (MAC) layer 3424 and a second upper MAC layer 3426. [00180] As illustrated in FIG. 34, a host interface 3428 may be connected to the MSM 3402. In a particular aspect, the host interface 3428 may be a smart peripheral subsystem (SPS) provided by QUALCOMM TM . The host interface 3428 may connect a Bluetooth chip 3430 to the MSM 3402. The Bluetooth chip 3430 may be an 802.15.1 chip operating at a frequency of 2.4 GHz. The Bluetooth chip 3430 may also include a basic rate (BR) or an enhanced data rate (EDR). [00181] Further, the host interface 3428 may connect a broadband wireless interface 3432 to the MSM 3402. The broadband wireless interface 3432 may include a sixty GigaHertz (60 GHz) chip. Further, the 60 GHz chip may operate at a frequency of approximately 60 GHz. The host interface 3428 may also connect a Wi-Fi chip 3434 to the MSM 3402. The Wi-Fi chip 3434 may be an 802.11.x chip operating at 2.4/5.7 GHz. FIG. 34 indicates that the Bluetooth chip 3430 may include a link manager 3440. The broadband wireless interface 3432 may include a first lower MAC layer 3442. The Wi-Fi chip 3434 may also include a second lower MAC layer 3444. [00182] In a particular aspect, the Bluetooth chip 3430 may provide a data transfer rate of approximately three megabits per second (3 Mbps). The Wi-Fi chip 3434 may WO 2010/110961 PCT/US2010/024439 provide a data transfer rate of approximately three hundred megabits per second (300 Mbps). Further, the broadband wireless interface 3432 may provide a data transfer rate of approximately three thousand megabits per second (3000 Mbps). The wireless connection module 3400 may provide peer-to-peer connectivity, digital layer network alliance (DLNA) connectivity, or a combination thereof. Further, the wireless connection module 3400 may be used to transmit high definition video content, audio content, data content, or a combination thereof. The wireless connection module 3400 may also provide for rapid syncing between devices, e.g., a pair of wireless connection modules 3400. [00183] With the configuration described herein, the PCD/PCD docking station combination provides feature segmentation between the PCD and the PCD docking station. A PCD may be engaged with a PCD docking station in one of the manners described herein. For example, a PCD may be engaged with a PCD engagement mechanism, e.g., a PCD docking pocket, a PCD docking tray, or a similar mechanism. Further, dual display usage is provided, e.g., by a display in a PCD and a display in a PCD docking station. When engaged with a PCD docking station, a PCD may be charged by the PCD docking station. Moreover, seamless user interface and application transition may be provided as the PCD is docked or undocked. [00184] In a particular aspect, user interface features may be provided when a PCD is docked or undocked. One such aspect, is a "fish-eye" bubble that may be provided across all applications displayed on the PCD. Additionally, application layer scaling may be provided. For example, a primary application version may be executed when a PCD is docked and a secondary application version may be executed when a PCD is undocked. 
Alternatively, a standard application version may be executed when a PCD is undocked and an enhanced application version may be executed when a PCD is docked. In an undocked mode, a PCD may execute less computational intensive, smaller footprint applications. In a docked mode, full functionality applications may be executed by the PCD. Whether a PCD is docked or undocked may be automatically detected and the appropriate application versions may be executed when available. [00185] When a PCD is undocked, two low power processors may be used for small screen applications and the PCD operating system (OS). Further, two high performance processors may be used to execute larger applications when the PCD is docked with a PCD docking station. In another aspect, when the PCD is docked, one processor may be used for mouse controls and graphical user interface controls, i.e., touch screen 36 WO 2010/110961 PCT/US2010/024439 controls; one processor may be used for shared input/output controls; one processor be used for a PCD OS; and one processor may be used for a desktop OS stored on a PCD docking station. In yet another aspect, each processor may run a different OS and framework. [00186] A PCD docking station may be connected to a home network and when a PCD is docked with the PCD docking station, the PCD may, in turn, be connected to the home network. Moreover, data, e.g., applications, content, or a combination thereof, may be automatically backed up to a PCD docking station when a PCD is docked with the PCD docking station. A PCD docking station may include a display, a display buffer, a HDD, additional memory, LAN capabilities, WLAN capabilities, one or more USB ports, printer connections, a keyboard, a mouse, etc. The PCD docking station may include a large screen application memory. A large screen application and an OS state may be retained in the PCD docking station memory when the PCD is undocked in order to enable instant-on when the PCD is again docked. A large screen application may include a browser application, a word processor application, a spreadsheet application, a presentation application, an email application, a calendar application, a video application, or a combination thereof. A small screen application may include a media player application, a phone application, a control application, or a combination thereof. [00187] When a PCD is docked with a PCD docking station, a user can take advantage of a relatively larger display incorporated into the PCD docking station. Further, a user may use a full keyboard and mouse to access data stored in the PCD. A PCD docking station may be incorporated into a vehicle, a kiosk, a set top box, etc. and a PCD may be docked therewith. [00188] It is to be understood that the method steps described herein need not necessarily be performed in the order as described. Further, words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the method steps. [00189] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. 
A storage 37 WO 2010/110961 PCT/US2010/024439 media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [00190] Although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. 38 |
<P>PROBLEM TO BE SOLVED: To provide a method for detecting a bit error in a maskable content addressable memory. <P>SOLUTION: Parity and mask bits are stored in a RAM connected to a CAM (content addressable memory). Upon CAM query matches, reference parity and mask bits stored in an address outputted by the CAM are outputted from the RAM. The reference parity bits are compared with parity bits generated from a query data value masked by the retrieved mask bits. In absence of a CAM or RAM bit error, the reference parity bits from the RAM and the parity bits generated from the masked query data will match. If there is a CAM or RAM bit error, an error will be detected since the two parity bit sets will not match. <P>COPYRIGHT: (C)2004,JPO |
A method of detecting a CAM bit error, wherein a stored parity is retrieved from a RAM, a stored mask bit is retrieved from the RAM, and the stored mask bit from the RAM is used for a CAM query. Generating mask query parity by masking existing query data and comparing the stored parity with the mask query parity.The step of retrieving stored parity from the RAM and the step of retrieving storage mask bits from the RAM correspond to a CAM output address output by the CAM in response to the query data being used for the CAM query. The method of claim 1 including retrieving the stored parity and the storage mask bit from a RAM address to be stored.The method of claim 2, further comprising: generating input parity for input data stored in the CAM; and storing the input parity and the mask bit in the RAM.The method of claim 3, wherein the CAM is part of a TLB.A method of detecting a CAM bit error, comprising: querying a first data set and a CAM; and in response to a query with the first data set, from a location corresponding to an address provided by the CAM. Retrieving second and third data sets; and comparing the parity generated from the first data set with the third data set after masking with the second data set; Including methods.A method for detecting a CAM bit error, comprising generating and storing parity for a CAM entry masked by a mask bit set, storing the mask bit set, and querying a CAM for the CAM entry. Retrieving the parity and storage mask bits from the address supplied by the CAM; the parity; and a generated parity generated from data used for the CAM query masked by the storage mask bits; Comparing the method.The method of claim 6, wherein the CAM is part of a TLB.A device for detecting a CAM bit error, wherein a second parity is generated from a means for generating and storing a first parity for a masked CAM entry and an address supplied by the CAM when the CAM is queried. Means for retrieving a parity and a mask bit set of the data, means for generating a third parity from data used for querying the CAM masked by the mask bit set, the second parity and the third bit Means for comparing the parity of the device.A CAM that provides an address to a RAM in response to a first data bit set, wherein the RAM outputs a second data bit set including a first parity bit set and a mask bit set; A parity generator for generating a second parity bit set for a third data bit set generated by masking the first data bit set with the mask bit set, wherein the first data bit set is An apparatus comprising: a parity generator that queries the CAM; and a parity comparator that compares the first parity bit set and the second parity bit set. |
Bit error detection method and apparatus in maskable content verification memoryThis application is related to US patent application Ser. No. 10 / 197,929, filed Jul. 16, 2002. BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention generally relates to a content matching memory (CAM), and more particularly to detection of a bit error that may occur in data stored in the CAM. 2. Description of the Related Art The CAM structure performs pattern matching between a query data value and data stored in advance in a CAM entry. In the case of matching, the address of the matched entry is output. By applying external energy to the circuit, a bit value error can occur at any time in the CAM entry. For example, the collision of alpha particles can change the state of one of the storage elements in the CAM. If this occurs, an incorrect query match may result in an incorrect address being output from the circuit. If a CAM address is used to drive the RAM, this error also leads to incorrect data being output from the RAM. Since the content of the CAM entry is usually unknown outside the CAM, this false (ie false) query match cannot be detected. SUMMARY OF THE INVENTION An object of the present invention is to provide a method for detecting bit errors in a maskable content verification memory. SUMMARY OF THE INVENTION Parity bits and mask bits are stored in a random access memory (RAM) connected to the CAM. The parity bit and mask bit are stored together with the CAM entry write. When the CAM query is matched, the reference parity bits and mask bits stored at the address output by the CAM are output from the RAM. These reference parity bits are compared with the parity bits generated from the query data value masked by the retrieved mask bits. If there is no bit error in the CAM or RAM, the reference parity bits from the RAM and the parity bits generated from the masked query data are matched. If a bit error occurs in the CAM or RAM, the two parity bit sets will not match and an error will be detected. This error can be used as an indication that a false CAM match has occurred. DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG. 1 is a block diagram showing CAM bit error detection. In FIG. 1, the arrow 102 represents data written at the address represented by the arrow 109 of the CAM 120. The data 102 is also supplied to the parity generator 122. Parity generator 122 generates one or more input parity bits 105 from data 102. The input parity 105 generated by the parity generator 122 may be a simple single-bit parity such as odd parity or even parity, or a more complex multi-bit parity such as an error correction code (ECC). The input parity bit generated by the parity generator 122 is represented by an arrow 105. Input parity 105 is written to RAM 121 at an address corresponding to the address indicated by arrow 109. Therefore, after the entry is written at the specific address of the CAM 120, the corresponding input parity entry is stored at the corresponding address of the RAM 121. When query data is provided to the CAM 120, the CAM 120 can output an address that includes the query data or indicate that the query data is not in the CAM. In FIG. 1, the inquiry data is represented by an arrow 101. This inquiry data is also supplied to the parity generator 123. If the query matches, the address output by CAM 120 is represented by arrow 103. 
The address 103 with which the query matches is transferred to the RAM 121 and retrieves (at least) the parity stored at the corresponding address in the RAM. The stored parity output by the RAM is represented by an arrow 107. Arbitrary additional data stored at a corresponding address in the RAM 121 can also be output. This additional data is represented by an arrow 104. Parity generator 123 outputs a query parity bit represented by arrow 106. The query parity bit 106 generated by the parity generator 123 is typically the same encoding as that generated by the parity generator 122. However, depending on the functions of the parity comparator 124 and the RAM 121, the encoding generated by the parity generator 122 may be different from a specific inversion or other conversion. The parity bit 106 and the stored parity output 107 are compared by the comparator 124. The result 108 of this comparison indicates whether there is a bit error in the queried entry in the CAM or in the stored parity corresponding to that entry. FIG. 2 is a block diagram illustrating CAM bit error detection using maskable bits. In FIG. 2, the arrow 202 represents data written at the address represented by the arrow 209 of the CAM 220. Data 202 is also supplied to mask block 225. Arrow 210 represents an input mask bit. Input mask bits 210 are provided to CAM 220, mask block 225, and RAM 221. Input mask bits 210 are stored in CAM 220 at the same address 209 as data 202 and tell CAM 220 which bits to consider and not to consider when determining whether a query matches an entry at address 209. Mask block 225 receives data 202 and mask bits 210 and sets certain bits in data 202 to a predetermined value (ie, logic 1 or 0). The bit set to this predetermined value is given by the value of the mask bit 210. For example, the data 202 is 4 bits wide (and can be any arbitrary length), the binary value is “1100”, and the binary value of the mask bit 210 is “1010” (and Mask block 225 can output “1000” and bit 0 and bit 2 of data 202 (bit 0 is rightmost). In addition, the bit numbering from right to left is efficiently masked to logic 0 so that bit 3 is leftmost. Data 202 can also be masked to logic 1 to become mask block output 211 “1101”. The mask block output 211 is supplied to the parity generator 222. Parity generator 222 generates one or more input parity bits 205 from mask block output 211. The input parity 205 generated by the parity generator 222 may be a simple single-bit parity such as odd parity or even parity, or a more complex multi-bit parity such as an error correction code (ECC). Note that the parity calculation is limited to bits that affect or control query matching. This is because the masked bits are ignored in determining whether there is a match, so errors in the masked bits do not lead to a false match. For example, if data bit 13 is masked in a CAM entry, the parity of that entry should be the same regardless of the value of bit 13 of the query data. Therefore, bit 13 should be masked before the parity calculation associated with that entry. The input parity bit generated by the parity generator 222 is represented by an arrow 205. The input parity 205 is written into the RAM 221 at the address corresponding to the address indicated by the arrow 209 together with the mask bit 210. Thus, after an entry is written to a specific address in CAM 220, the corresponding input parity entry and mask bit entry are stored at the corresponding address in RAM 221. 
When query data is provided to the CAM 220, the CAM 220 can output an address that includes the query data 201 or indicate that the query data 201 is not in the CAM. In FIG. 2, the inquiry data is represented by an arrow 201. This inquiry data is also supplied to the mask block 226. If the query matches, the address output by CAM 220 is represented by arrow 203. The address 203 where the query matches is transferred to the RAM 221 and retrieves (at least) the parity and mask bits stored in the corresponding address in the RAM 221. The stored parity output by the RAM is represented by an arrow 207. The storage mask bit is represented by arrow 212. Arbitrary additional data stored at a corresponding address in the RAM 221 can also be output. This additional data is represented by an arrow 204. Mask block 226 receives query data 201 and storage mask bit 212 and sets a particular bit in query data 201 to a predetermined value (ie, logic 1 or 0). The function of the mask block 226 is the same as that of the mask block 225. The output of the mask block 226 is represented by an arrow 213 and is supplied to the parity generator 223. Parity generator 223 outputs a query parity bit represented by arrow 206. The query parity bit 206 generated by the parity generator 223 is typically the same encoding as that generated by the parity generator 222. However, depending on the functions of the parity comparator 224, the mask blocks 225 and 226, the parity generators 222 and 223, and the RAM 221, the encoding may be different from the encoding generated by the parity generator 222 by specific inversion or other conversion. Good. Parity bit 206 and stored parity output 207 are compared by comparator 224. The result 208 of this comparison indicates whether there is a bit error in the entry queried in the CAM 221, the mask bit in one of the CAM 221 or the stored parity, or the mask bit corresponding to the entry. FIG. 3 is a flowchart showing steps for detecting a CAM bit error. These steps can be applied to the block diagram of FIG. 1, but are not limited to applications having only that block configuration. Other configuration blocks may also be used to complete these steps. In FIG. 3, in step 302, input parity is generated for input data written to the CAM. The generated input parity may be a simple single-bit parity such as odd parity or even parity, or a more complex multi-bit parity such as error correction code (ECC). In step 304, the input data is stored at the input address of the CAM. In step 306, the input parity is stored in the RAM at an address corresponding to the address where the input data is stored in the CAM. In other words, the input parity is an index to an address where the address output by the CAM is used directly as the address or applied to the address input of the RAM when the query matches at the CAM and the CAM outputs an address. When used as a RAM, the RAM stores the input parity output address. In step 308, the CAM is queried by supplying query data to the appropriate input of the CAM. In step 310, a query parity is generated for the query data provided to the CAM. This parity algorithm isIt should produce results that match the algorithm used in step 302, or only depend on non-critical factors such as inversions and other non-critical transformations. In step 312, the stored parity is retrieved from RAM by accessing the RAM location corresponding to the address supplied by the CAM when queried in step 308. 
In step 314, the generated query parity and the stored parity from RAM are compared. If they match, it is detected that there are no bit errors in the CAM content or RAM stored parity content. If they do not match, a bit error is detected in the CAM content or RAM stored parity content. FIG. 4 is a flow chart illustrating steps for detecting CAM bit errors in a CAM using maskable bits. These steps can be applied to the block diagram of FIG. 2, but are not limited to applications having only that block configuration. Other configuration blocks may also be used to complete these steps. In FIG. 4, in step 401, input data is masked according to a mask bit set. In step 402, input parity is generated for the masked input data from step 401. The generated input parity may be a simple single-bit parity such as odd parity or even parity, or a more complex multi-bit parity such as error correction code (ECC). Note that the parity calculation is limited to bits that affect or control query matching. For example, if data bit 13 is masked in a CAM entry, the parity of that entry should be the same regardless of the value of bit 13 of the query data. Therefore, bit 13 should be masked before the parity calculation associated with that entry. In step 404, the input data and mask bit set are stored at the input address of the CAM. In step 406, the input parity and mask bit set are stored at the RAM address corresponding to the address where the input data is stored in the CAM. In other words, the input parity and mask bits are the address that is used as the address input by the RAM, or the address output by the CAM is used directly when the query matches in the CAM and the CAM outputs an address When used as an index to the RAM, the RAM is stored at an address that outputs the input parity. In step 408, the CAM is queried by supplying query data to the appropriate input of the CAM. In step 412, the stored parity and stored mask bits are retrieved from the RAM by accessing the RAM location corresponding to the address supplied by the CAM when queried in step 408. In step 413, the query data is masked according to the storage mask bits retrieved in step 412. In step 410, a query parity is generated for the masked query data from step 413. This parity algorithm should produce results that match the algorithm used in step 402, or only depend on non-critical factors such as inversion and other non-critical transformations. In step 414, the generated query parity and the stored parity from RAM are compared. If there is a match, no bit error is detected in the CAM content or RAM stored parity content, or RAM storage mask bits. If they do not match, a bit error is detected in the CAM content, RAM stored parity content, or stored mask bits. One use of CAM with or without mask bits is in TLB (Translation Look-aside Buffer, Address Translation Buffer). In the present application, a virtual address (or part thereof) is sent to the CAM. When a hit occurs, the CAM causes at least a part of the physical address to be output to the RAM. One of the following two may occur as a bit error in the TLB CAM. First, bit errors prevent otherwise valid TLB entries from getting hits (ie, bit errors cause the TLB entry to be matched to not match). In this case, replacement of entries in the TLB is often done on a least used basis, so erroneous entries are eventually replaced because they never match. This type of bit error is not detected. 
However, this type of bit error tends not to be a serious problem because the violating entry is eventually replaced or rewritten. The second is a bit error that matches the TLB entry when it should not be matched. This type of bit error can cause serious problems in computer operation and can be inconsistent and may not eventually be replaced because it is not used. However, the methods and apparatus described above are useful for detecting this type of bit error because the entry can be invalidated, rewritten, or handled before a problem occurs due to a bit error. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram showing detection of a CAM bit error. FIG. 2 is a block diagram illustrating detection of a CAM bit error using maskable bits. FIG. 3 is a flowchart showing steps for detecting a CAM bit error. FIG. 4 is a flowchart illustrating steps for detecting a CAM bit error in a CAM using maskable bits. [Description of code] 410 Generate query parity for masked query data 412 Search storage parity and storage mask bit from RAM address corresponding to address supplied by CAM 413 Mask query data according to storage mask bit |
PROBLEM TO BE SOLVED: To provide a method of predicting a way accessed by an instruction in a data cache.SOLUTION: A method includes identifying one or more way prediction characteristics of an instruction. The method also includes selectively reading a table on the basis of identification information of the one or more way prediction characteristics in order to identify an entry of the table associated with the instruction to identify a way of a data cache. The method further includes making a prediction whether a next access of the data cache based on the instruction will access the way.SELECTED DRAWING: Figure 1 |
Identifying one or more way prediction characteristics of an instruction for accessing data stored in a data cache, wherein the way prediction characteristic indicates that the instruction has a predictable next address , Selects the way prediction table based on the identification information of the one or more way prediction characteristics to identify an entry of a way prediction table associated with the instruction identifying a way of the data cache Wherein the way prediction table comprises a WAY field containing a value identifying a way and a REG field identifying a register location, the next access of the data cache based on a subsequent execution of the instruction Accesses the identified way of the data cache Determining whether specific entries in the way prediction table include register identifiers corresponding to the register locations; , Determining whether the particular entry corresponds to the particular instruction, removing or invalidating the particular entry when the particular entry does not correspond to the particular instruction .The method of claim 1, wherein the one or more way prediction characteristics comprises an addressing mode of the instruction, an instruction type of the instruction, an indication of whether the instruction is included in a loop, or a combination thereof .3. The method of claim 2, further comprising determining whether the addressing mode of the instruction is an auto incremental addressing mode or a base plus offset addressing mode.Setting a predicted way field of the entry to identify a particular way of the data cache in response to a determination that the addressing mode of the instruction comprises the auto incremental addressing mode, And wherein the predicted way field is set upon generation of the entry.Setting a predicted way field of the entry to identify a particular way of the data cache in response to a determination that the addressing mode of the instruction comprises the base plus offset addressing mode; , The entry is generated for a first execution of the instruction and wherein the predicted way field is set based on a second execution of the instruction following the first execution 4. The method of claim 3, further comprising:3. The method of claim 2, further comprising determining whether the instruction type of the instruction is a load type or a store type.3. 
The method of claim 2, wherein the table is selectively read in response to the indication that the instruction is included in a particular loop.Determining whether the table includes the entry; determining whether the entry is valid to provide a way prediction; determining that the entry indicates a valid predicted way And in response, retrieving said predicted way from said entry and driving said predicted way of said data cache.Means for identifying one or more way prediction characteristics of an instruction for accessing data stored in a data cache, wherein the way prediction characteristic comprises: means for the instruction to have a next predictable address , Selecting the way prediction table based on the identification information of the one or more way prediction characteristics to identify an entry of the way prediction table associated with the instruction identifying the way of the data cache Wherein said way prediction table comprises a WAY field containing a value identifying a way and a REG field identifying a register location, wherein said way prediction table comprises: The next access is to the identified way of the data cache Means for identifying a specific instruction which modifies the data of the register location; means for registering a register identifier corresponding to the register location with a particular entry in the way prediction table Means for determining whether said specific entry corresponds to said specific instruction; means for determining whether said particular entry corresponds to said specific instruction; and means for determining whether said particular entry corresponds to said specific instruction, And means for removing or invalidating the entries of the first group.Identifying an increment value during a first execution of an instruction, identifying a way of a data cache accessed during the first run based on the instruction, comparing a first incremented address value Adding the increment value to an address value associated with the instruction to determine whether the first incremented address value is in the way of the data cache; In response to determining that the first incremented address is in the way of the data cache, populating an entry corresponding to the instruction in a way prediction table, A WAY field of the entry based on the way and a REG field of the entry based on the identified register location are populed And comprising populating, comprising the steps of:11. The method of claim 10, wherein each entry in the table includes a program counter identifier, a register identifier, a predicted way identifier, a validity bit, or a combination thereof.11. The method of claim 10, further comprising determining whether the instruction comprises an automatic incremental instruction or a base plus offset instruction.Wherein the entry identifies the way of the accessed data cache and the entry is populated in association with the first execution of the instruction and during the second execution of the instruction, The method according to claim 10, further comprising reading from a table, calculating a second incremented address value, and applying the way as a way prediction during the second execution .Updating the table in response to determining that an incremented address value of a subsequent execution of the instruction is in a way different from the way, wherein the subsequent execution of the instruction comprises: 11. 
The method of claim 10, further comprising after a first execution.Removing the entry or indicating that the entry in the table is invalid based on a determination that the incremented address is associated with a different cache line than the cache line associated with the address 11. The method of claim 10, further comprising: |
Data cache way predictionClaim of priority[0001] This application claims the priority of US Nonprovisional Patent Application No. 13 / 741,917 entitled "DATA CACHE WAY PREDICTION" filed on January 15, 2013, the entire contents of which are incorporated by reference .[0002] The present disclosure is generally directed to a data cache memory system.[0003] With advances in technology, computing devices have become smaller and more powerful. For example, currently there are a variety of portable personal computing devices including wireless computing devices such as compact, lightweight, user-friendly portable wireless telephones, personal digital assistants (PDAs), and paging devices. More specifically, portable wireless telephones such as cellular telephones and internet protocol (IP) telephones can communicate voice and data packets over a wireless network. In addition, many such wireless telephones include other types of devices incorporated therein. For example, a wireless telephone may include a digital still camera, a digital video camera, a digital recorder, an audio file player. Also, such a wireless telephone includes a processor capable of processing a plurality of executable instructions including a plurality of software applications, such as a web browser application, and may be used to access the Internet. Therefore, these wireless telephones can contain considerable computing power.[0004] Accessing the data cache of the processor consumes a significant amount of power. Data caches conventionally include data arrays having multiple sets, each including a plurality of cache lines (eg, storage locations). Data caches also conventionally include multiple ways, each including a driver corresponding to at least one cache line (eg, a cache block) of the data cache. In response to a command to access data stored in the data cache, all the drivers drive a particular set of data ways (via a plurality of data lines) to the multiplexer Enabled (eg, activated) so that it can be activated.[0005] A tag lookup operation is performed to identify a particular cache line in the data array in parallel (eg, at the same time) as all drivers are enabled. Based on the result of the tag lookup operation, data provided via a single driver (corresponding to a single cache line) is selected as the output. Considering that driving all the ways for the set and performing the tag lookup operation consumes power and data is output only from a single cache line based on the instruction, Resulting in efficiency.[0006] A similar power consumption problem exists with respect to accessing the instruction cache of the processor. Access to the instruction cache is often predictable and a prediction method that utilizes a predictable sequence of instructions can be used to identify a particular way of the instruction cache to be driven. However, accessing the data cache is more complicated and less predictable than accessing the instruction cache. Therefore, the prediction techniques used for instruction cache access may not be adaptable to predict data cache access. 
In addition, when prediction techniques are applied to the data cache, each misprediction (eg, making incorrect prediction) of the way to be accessed may result in a performance penalty (eg, Delay) and energy penalties may occur.[0007] The way prediction technique for the processor's data cache tracks (eg, monitors) ways of the data cache to be driven for instructions (eg, ways associated with one or more cache lines) And uses a prediction table (for example, a way prediction table) for prediction. In a particular embodiment, the predicted way is based on prior execution of instructions (eg, same way as was driven at the time of execution prior to the instruction). For each instruction executed by the processor, the control logic may populate, maintain, and / or monitor the execution of each instruction to identify a predicted way, track the trace it can. For example, the control logic of the data cache uses a prediction table to determine the specific instruction, the way accessed for that particular instruction, and the base register location (base register location) of the register file modified by that particular instruction (Eg identifying) a program counter (PC) identifier that indicates the execution of one or more instructions.[0008] When an instruction having one or more way prediction characteristics (eg, an addressing mode of an instruction, an instruction type of an instruction, an indication that an instruction is included in a loop, etc.) is executed, The prediction table can be read to determine if the way being done can be identified. For example, the way prediction characteristic may be a characteristic (eg, mode, instruction type, position in a loop, etc.) or a next predictable address (eg, a cache with the same effective address retrieved based on the next execution of the instruction (Eg, opcode, operand, bit value, etc.) indicating that it may have a predictable access pattern indicating that it can be obtained from the line (eg, via the same way). The control logic can determine whether an entry corresponding to the instruction exists in the prediction table. In certain embodiments, the one or more way prediction characteristics may be in a mode (eg, addressing (eg, addressing) mode, such as auto-increment addressing mode or base plus offset addressing mode Mode). The predicted way may be a previously accessed way during the previous execution of the instruction, such as the previous execution of the instruction during prior iteration of the loop.[0009] When the prediction table indicates a predicted way for an instruction, the control logic can selectively enable (eg, turn on) the driver corresponding to the predicted way, and if the predicted way (Eg, turn off) one or more other drivers corresponding to the way of When the prediction table indicates a predicted way for an instruction, the control logic may also selectively invalidate the tag lookup behavior of the tag array (eg, using a switch). Power savings are realized by the processor by selectively disabling one or more drivers and / or selectively disabling tag lookup operations.[0010] In certain embodiments, the method includes identifying one or more way predictive properties of the instructions. The method also includes selectively reading the table based on the identification information of one or more way prediction characteristics to identify an entry in the table associated with the instruction identifying the way of the data cache. 
The method further includes making an estimate of whether the next access of the data cache based on the instruction accesses the way.[0011] In another specific embodiment, the processor includes decryption logic configured to identify one or more way predictive properties of the instructions. The processor also includes control logic coupled to the decoding logic. The control logic is configured to selectively read the table based on one or more way prediction characteristics to identify an entry in the table associated with the instruction identifying the way of the data cache. The control logic is further configured to make an estimate as to whether the next access of the data cache based on the instruction accesses the way.[0012] In a more specific embodiment, the apparatus includes means for identifying one or more way prediction characteristics of the instructions. The apparatus also includes means for selectively reading the table based on the identification information of one or more way prediction characteristics to identify an entry in the table associated with the instruction identifying the way of the data cache . The apparatus further includes means for predicting whether the next access of the data cache based on the instruction accesses the way.[0013] In another specific embodiment, the non-transitory computer readable medium includes instructions that, when executed by the processor, cause the processor to identify one or more way predictive characteristics of the instruction. The non-transitory computer-readable medium causes the processor to select a table based on identification information of one or more way prediction characteristics to identify an entry in a table associated with an instruction that identifies a way of the data cache In order to make it readable. The non-transitory computer readable medium further includes instructions to cause the processor to predict whether the next access of the data cache based on the instruction will access the way.[0014] In another specific embodiment, the method comprises identifying an increment value during a first execution of an instruction, and determining, based on the instruction, a way of the data cache accessed during the first run And identifying. The method further includes adding an increment value to the address value associated with the instruction to determine the first incremented address value. The method also includes determining whether the first incremented address value is in the way of the data cache. The method further includes populating an entry corresponding to the instruction in the table in response to determining that the first incremented address is in the way of the data cache.[0015] One particular advantage provided by the disclosed embodiments is that one or more instructions (eg, based on instruction type, instruction addressing mode, identification of instructions in a loop, or a combination thereof) The way prediction technique maintains the prediction table for. The way prediction table can be utilized to selectively enable and / or invalidate one or more drivers based on way prediction. By selectively activating and / or deactivating one or more drivers, power savings can be realized during data access of the data cache. 
In addition, by monitoring, tracking and storing the register locations associated with each entry in the way prediction table, an instruction other than the instruction corresponding to the entry corrects the data (eg content) at that register location Potential mispredictions that may occur occasionally can be avoided. A further power advantage may be realized by selectively invalidating the tag lookup operation after the entry (eg, way prediction) is confirmed valid.[0016] Other aspects, advantages, and features of the disclosure will become apparent after review of the present application, including the following sections, including a brief description of the drawings, embodiments for carrying out the invention, and claims It will be.1 is a diagram of a first exemplary embodiment of elements of a processor system that utilizes way prediction for a data cache; FIG.3 is a flow diagram of a first exemplary embodiment of a method for performing way prediction for a data cache.7 is a flow diagram of a second exemplary embodiment of a method for performing way prediction for a data cache.FIG. 4 is a block diagram of an exemplary embodiment of program code that includes data arrays of data caches used for way prediction and instructions in a loop.7 is a flow diagram of a third exemplary embodiment of a method of performing way prediction for a data cache.FIG. 2 is a block diagram of a particular embodiment of a wireless communication device including a data cache and logic for performing way prediction.DETAILED DESCRIPTION [0023] FIG. 1 illustrates a first particular embodiment of elements of a processor system 100 that utilizes a way prediction table 152. The processor system 100 includes a data cache 102, control logic 150, a program counter 170, a tag array 180, and decode logic 190. The data cache 102 includes a data array 110 including a plurality of cache lines 120 a - d. In a particular embodiment, the data cache 102 comprises a set associative data cache.[0024] The processor system 100 is configured to execute (eg, process) instructions (eg, a series of instructions) included in the program. The program may include a loop, or a plurality of loops, in which a series of instructions are executed one or more times. The program may determine that a predictable access pattern (eg, a predictable access pattern indicating that the effective address of the next instruction to be executed is available from the same cache line (eg, via the same way) Such as an instruction having one or more way prediction characteristics (eg, an addressing mode of an instruction, an instruction type of an instruction, an indication that an instruction is included in a loop, etc.) It may contain multiple instructions. For example, the addressing mode of the instruction may include an automatic incremental addressing mode and / or a base plus offset addressing mode, causing the cache line of the data array of the data cache (eg, data cache 102) to operate. Instructions that use the automatic incremental addressing mode (eg, autoincrement instruction) can identify the register location (eg, base register) of a register file (not shown) and store the stored content (eg, , Address data) can be modified (eg, incremented) by an incremental amount (eg, an integer value such as 1 or 2). 
Instructions using the base plus offset addressing mode (eg, base plus offset instruction) can access register locations (eg, base register locations) during each execution of the instructions, An offset can be added to the data at the base register location by each successive execution.[0025] When an instruction using automatic incremental addressing mode and / or base plus offset addressing mode is executed (eg, several times) as part of a loop, each instruction is executed based on the next execution of the instruction And may include a predictable access pattern indicating that the retrieved effective address is available from the same cache line 120a-d (eg, the same way) of the data array 110. Thus, during execution of the instruction (eg, during one or more iterations of the loop), a particular way of the data cache 102 accessed for instructions that use the auto incremental addressing mode or the base plus offset addressing mode Can be identified. Since the instructions using automatic incremental addressing mode or base plus offset addressing mode operate on the same register, post-incremented or offset addresses are added to the previous execution of the instruction in the data cache 102, (Eg, confirm) that the same cache line (eg, the same way) as the cache line can be accessed. Accordingly, the processor system 100 can generate, maintain and use the prediction table 152 as described below to predict way access for one or more instructions.[0026] The data cache 102 may include a data array 110 and a multiplexer 160. Data cache 102 may be configured to store recently used or frequently used data (in a cache line). The data stored in data cache 102 may be accessed more rapidly than data accessed from another location such as main memory (not shown). In a particular embodiment, the data cache 102 is a set associative cache, such as a 4-way set associative cache. Additionally or alternatively, data cache 102 may include control logic 150, program counter 170, tag array 180, decode logic 190, or a combination thereof.[0027] The data array 110 may be accessed during execution of instructions (executed by the processor system 100). The instructions may be included in a program (eg, a series of instructions) and may or may not be included in a loop (eg, a software loop) of the program. The data array 110 includes a plurality of sets (eg, rows) including a plurality of ways (eg, columns), such as a first way, a second way, a third way, and a fourth way, For example, rows). Each of the ways may be associated with a plurality of cache lines in a row of the data cache 102 and may be associated with a corresponding cache line 120 a - d (eg, a single cache line) of each set of data caches 102. Multiple ways can be accessed during program execution. Each way of the plurality of ways includes drivers 140a-d (eg, line drivers) and data lines 130a-d corresponding to a plurality of cache lines (eg, storage locations) in a column of the data array 110 obtain. 
For example, a first way may be associated with a cache line A 120a, including a first driver 140a and a first data line 130a, a second way may be associated with a cache line B 120b, a second driver 140b and a second data line 130b and a third way may be associated with a cache line C 120c and includes a third driver 140c and a third data line 130c and a fourth way is a cache line D 120d and includes a fourth driver 140d and a fourth data line 130d.[0028] Each driver 140a-d is read (eg, driven) from the data array 110 via a corresponding data line 130a-d and the corresponding cache line 120a-d (eg, corresponding cache Block) can be validated. The content stored in a particular cache line of cache lines 120a-d may include multiple bytes (eg, 32 bytes or 64 bytes). In certain embodiments, a particular cache line may correspond to a block of sequentially addressed memory locations. For example, a particular cache line may correspond to a block of eight sequentially addressed memory locations (eg, eight 4 byte segments).[0029] Decryption logic 190 may receive one or more instructions (eg, a series of instructions) to be executed by processor system 100. Decode logic 190 is configured to decode a particular one of the one or more instructions and provide decoded instructions to program counter 170 (including index portion 172, tag portion 174, or a combination thereof) Encoded decoder. Decode logic 190 may also be configured to provide instruction data associated with a particular instruction to control logic 150, such as by sending data or modifying one or more control registers. For example, the instruction data may include decoded instructions (eg, index portion 172 and / or tag portion 174), one or more way prediction characteristics, instruction type (eg, load type, store type One or more register locations of a register file (not shown) associated with a particular instruction, associated with a particular instruction, such as a type (store type), mode of a particular instruction (eg, addressing mode) An incremental value (and / or an offset value), an address value associated with a particular instruction, whether a particular instruction starts a loop (eg, a software loop), whether to end the loop, , Or a combination of these. One or more way prediction characteristics may be obtained at the next address at which a particular instruction is predictable (eg, the effective address for the next instruction to be executed is obtained from the same cache line (eg, via the same way) A predictable access pattern indicating that it is possible). For example, the one or more way prediction characteristics may include characteristics of a particular instruction (eg, addressing mode, instruction type, position in a loop, etc.), components of a particular instruction (eg, opcode, operand, bit value, Increment value, register value, etc.), or a combination thereof. The addressing mode of a particular instruction may include an auto incremental addressing mode or a base plus offset addressing mode. The instruction type of a particular instruction may include a load type or a store type.[0030] Program counter 170 may identify the instruction to be executed based on the decoded instruction retrieved from decode logic 190. The program counter 170 may include an index portion 172 (eg, a set index portion) and a tag portion 174 that may be used to access the data cache 102 during execution of the instruction. 
Each time an instruction is executed, program counter 170 may be adjusted (eg, incremented) to identify the next instruction to be executed.[0031] The control logic 150 may include a way prediction table 152, a tag array enable 154, and a driver enable 156. The control logic 150 may be configured to receive instruction data from the decoding logic 190 and access the way prediction table 152 based on at least a portion of the instruction data, as described further below. For example, the control logic 150 may selectively access the way prediction table 152 based on one or more way prediction characteristics received from the decoding logic 190.[0032] The way prediction table 152 may include one or more entries 153, each containing one or more fields. Each entry 153 corresponds to a different instruction and includes a program counter (PC) field, a predicted way (WAY) field, a register location identifier (REG) field, a valid / invalid field (V / I), or a combination thereof . For a particular entry, the PC field may identify a corresponding instruction to be executed by the processor system 100. The WAY field (eg, the predicted way field) identifies the way (eg, accessed "last way") that was previously accessed (of the data array 110) when the corresponding instruction was last executed (E.g., a way field identifier). The REG field may identify the register location of a modified register file (not shown) when the corresponding instruction was last executed. For example, the register location may be the base register location of the instruction modified based on the execution of the instruction as part of the post increment operation. The V / I field may identify whether the value of the WAY field is valid or invalid. For example, the V / I field may indicate whether the value of the WAY field can be used as a predicted way. Alternatively and / or additionally, the V / I field may indicate whether the entry is valid or invalid. The way prediction table 152 may be maintained (eg, stored) in the processor core of the processor system 100 and / or may be included in or associated with the prefetch table of the data cache 102. In particular embodiments, each entry in the way prediction table 152 may include a program counter identifier (eg, a PC field), a particular register location identifier (eg, a REG field), a particular predicted way identifier (eg, WAY field).[0033] Control logic 150 may be configured to access instruction data (eg, instruction data corresponding to instructions to be executed) provided by decode logic 190. Based on at least a portion of the instruction data, such as one or more way prediction characteristics, the control logic 150 may determine whether the way prediction table 152 includes an entry corresponding to the instruction. For example, the control logic 150 may selectively read the way prediction table 152 in response to receiving an indication that the instruction type of the instruction is a load type or a store type. In a particular embodiment, the control logic 150 does not read the way prediction table unless the instruction type is a load type or a store type. Based on the PC field of the way prediction table 152, the control logic 150 may determine whether the way prediction table 152 includes an entry 153 corresponding to the instruction. 
In another particular embodiment, the control logic 150 selects the way prediction table 152 in response to receiving an indication that the addressing mode of the instruction is an auto incremental addressing mode or base plus offset addressing mode Read it in a similar way. In another particular embodiment, the control logic 150 selectively reads the way prediction table 152 in response to receiving an indication that the instruction is included in the loop.[0034] Based on the determination that the way prediction table 152 does not include the entry 153 corresponding to the instruction and the instruction is associated with one or more way prediction characteristics (eg, having an automatic incremental address mode) where the way prediction is useful The control logic 150 may generate (eg, populate) a new entry 153 associated with an instruction in the way prediction table 152. The control logic 150 may identify the register location (eg, identified by it) contained in the instruction and the way of the data array 110 accessed based on the instruction. The control logic 150 may populate the WAY and REG fields of the new entry 153, respectively, based on the identified register location and the identified way. Thus, when an instruction is executed next time (eg, during the next iteration of the loop), the control logic 150 may identify the way accessed during the previous execution of the instruction based on the WAY field. In particular, when an entry is created, the value of the WAY field may be set to indicate the way accessed based on the execution of the instruction that generated the entry. The REG field may be used by the control logic 150 to maintain the way prediction table 152, as further described herein.[0035] The control logic 150 may also predict whether the subsequent (eg, next) execution of the instruction will access the same way as the execution of the instruction that generated the entry. For example, as described in more detail with respect to FIG. 1, control logic 150 determines that the next execution of an instruction is the same cache line as executed by an automatic incremental addressing mode instruction or a base plus offset addressing mode instruction, (Eg, verify) whether or not to access the user interface. If a decision (eg, prediction) is made that the incremented address is not on the same cache line that was accessed during the execution of the instruction, the control logic 150 generates a prediction to be used during subsequent execution of the instruction The V / I field (eg, validity bit) of the new entry may be set as invalid to indicate that the value of the WAY field can not be trusted to indicate a way to be done. If the incremented address is in the same cache line that was accessed during the execution of the instruction (eg, the execution of the instruction accessed cache line A 120a, and the value associated with the incremented address is stored in cache line A 120a When a decision (eg, prediction) is made, the control logic 150 may determine whether the value of the WAY field indicates that the value of the WAY field indicates the predicted way to be used during the subsequent execution of the instruction V / I field (eg validity bit) can be set valid.[0036] The control logic 150 may use the way prediction table 152 to predict the way for instructions to be executed. 
The control logic 150 may selectively read the way prediction table 152 to identify the entry 153 of the way prediction table 152 corresponding to the instruction based on the PC field of each entry 153. When the control logic 150 identifies the corresponding entry 153, if the entry 153 is indicated as valid, then the control logic 150 causes the value of the WAY field to be provided (or made available) to the driver enablement 156 so that the entry 153 Of the WAY field can be used as the way prediction and the value of the V / I field can be given (or made available) to the tag array validation 154.[0037] Driver activation 156 selectively activates (eg, turns on) or deactivates one or more of drivers 140 a - d based on the predicted way identified in way prediction table 152 And may be configured to deactivate (eg, turn off). In a particular embodiment, driver enable 156 activates all drivers 140a-d when the value of the WAY field provided to driver enable 156 is a null value (eg, a zero value). In another particular embodiment, when the value of the V / I field of entry 153 indicates that the value of the WAY field of entry 153 can be used as a way prediction, driver activation 156 identifies the identified (eg, corresponding You can use the expected way from entry 153. Additionally, driver enablement 156 may selectively invalidate at least one driver 140a-d of one or more ways that is not a predicted way indicated by the WAY field. In certain embodiments, the WAY field may include one or more bits (eg, a bit mask) indicating a predicted way and the driver enable 156 may include each driver of the plurality of drivers 140 a - d The bitmask may be applied to multiple drivers 140a-d for selectively enabling or disabling.[0038] Tag array enable 154 activates a tag lookup operation in tag array 180 via switch 176 (or other mechanism) to identify ways (eg, cache lines 120a-d) to be selected based on the instructions May be configured to selectively activate (eg, enable) or deactivate (eg, invalidate) the data. When the tag validation portion 154 determines that the value of the V / I field indicates that the value of the WAY field can be used as a way prediction, the tag validation portion 154, via the operation of the switch 176, performs a tag lookup operation Can be selectively invalidated. When the value of the V / I field indicates that the value of the WAY field can not be used as a way prediction, the tag array validation 154 can be performed in parallel (eg simultaneously) with the drivers 140 a - The switch 176 may be selectively enabled such that a lookup operation is performed.[0039] Multiple combinations of the value of the WAY field given to the driver validation 156 and the value of the V / I field given to the tag array validation 154 are given to instruct the operation of the driver validation 156 and / or the tag array validation 154 Respectively. For example, the value of the WAY field indicates that the driver enable 156 is a null value (eg, a zero value) that activates (eg, turns on) all the drivers 140 a - d regardless of the value of the WAY field , In which case the tag array enable 154 may selectively enable or selectively disable the switch 176. As another example, when the value of the V / I field indicates that the value of the WAY field can not be trusted as way prediction, driver enablement 156 may turn on all drivers 140a-d and tag array enable 154 The switch 176 may be selectively enabled. 
As a further example, when the value of the V / I field indicates that the value of the WAY field can be trusted (eg, used) as a way prediction, the driver enable 156 is a single of a plurality of drivers 140 a - d (Eg, turn on) the line driver and tag array enable 154 may selectively enable or selectively disable switch 176. Alternatively or additionally, the tag array enable 154 determines whether the instruction corresponding to the entry giving the value of the WAY field and the value of the V / I field, whether the register file is being tracked (eg being monitored) Selectively activate or deactivate the switch 176 based on whether it is identified as included in the loop, for example based on command data received at the control logic 150, or a combination thereof.[0040] The control logic 150, or other logic coupled to the control logic 150, tracks whether any instruction modifies (eg, changes) the data at the register location identified in the way prediction table 152 (eg, Monitoring) tracking logic. The tracking logic may identify the value of the modified register location and provide the identification information of the register location to the control logic 150. The control logic 150 may read the way prediction table 152 to determine if a particular entry 153 contains a REG field with a value corresponding to the register location. Based on the determination that a particular entry 153 includes such a REG field, the control logic 150 may determine whether the PC field of a particular entry 153 corresponds to a particular instruction that modified the register location, Of the entry 153 does not correspond to a particular instruction, the control logic 150 determines whether the value of the WAY field of a particular entry is not trusted (eg, used) as a way prediction, by a particular entry 153 (Eg, invalid) of the V / I field of the entry 153 or may remove (eg, delete) the particular entry 153.[0041] During operation of processor system 100, decode logic 190 and / or control logic 150 may determine whether way prediction table 152 includes an entry corresponding to the instruction to be executed. When the way prediction table 152 does not contain an entry, the control logic 150 may generate a new entry in the way prediction table 152. When the way prediction table 152 includes an entry, the control logic 150 may identify one or more values of one or more fields of the entry. When one or more of the fields indicate that the entry is not valid (eg, the entry can not be used for way prediction), the control logic 150 determines that all of the drivers 140 a - d of the data array 110 And validate the way selection signal to be provided to the output of the tag array 180 based on the multiplexer 160. When one or more fields indicate that the entry is valid, the control logic 150 selectively enables and / or invalidates one or more of the plurality of drivers 140a-d, and the multiplexer 160 To control the selection, you can use the value of the WAY field of the entry. The control logic 150 determines whether the incremented address is likely to access a cache line corresponding to a way different from the way indicated by the WAY field or based on the prediction that the register location identified in the way prediction table 152 And may update one or more entries of the way prediction table 152 based on identifying modifications to the way prediction table 152. An example of the operation of the processor system 100 will be described later with reference to FIG. 
4.[0042] By maintaining a way prediction table 152 for instructions to be executed by the processor system 100, one or more drivers 140a-d of the data array 110 of the data cache 102 selectively invalidate based on way prediction And power benefits may be realized during data access of data cache 102. In addition, by tracking and storing the register location (eg, REG field) associated with each entry 153, the control logic 150 determines that an instruction other than the instruction corresponding to the entry is an arbitrary entry in the way prediction table 152 To avoid potential misprediction when modifying data at a particular register location identified by the REG field of the REG field. A further power advantage can be realized by selectively invalidating the tag lookup operation.[0043] Turning to FIG. 2, a flow diagram of a first exemplary embodiment of a method 200 for performing a way prediction associated with a data cache is shown. For example, the data cache may include the data cache 102 of FIG. In certain embodiments, the method 200 may be performed by the control logic 150 of FIG. 1.[0044] At 202 the increment value is identified during the first execution of the instruction. An increment value may be associated with an instruction using an auto incremental addressing mode. The increment value may be determined (eg, identified) by decode logic, such as the decode logic 190 of FIG. 1. The increment value may be included in the instruction data provided to the control logic, such as the control logic 150 of FIG. 1, from the decoding logic. The control logic may receive the instruction data and identify the increment value. The control logic may also determine whether the instruction is associated with one or more way prediction characteristics indicating that the instruction may have a predictable access pattern. In a particular embodiment, the control logic identifies the increment value of the instruction after making a determination that the instruction is associated with one or more way prediction characteristics.[0045] At 204, a way of the data cache accessed based on the instruction during the first execution of the instruction is identified. For example, the data cache may be the data cache 102 of FIG. The control logic may identify the way accessed during the first execution of the instruction.[0046] At 206, the increment value is added to the address value associated with the instruction to determine the incremented address value. The control logic may add an increment value to the address value associated with the instruction to determine the incremented address. In a particular embodiment, the address value may be an address value stored in the register location identified by the instruction. The register location may be identified by the control logic based on the instruction data provided by the decoding logic.[0047] At 208, a determination is made as to whether the incremented address value is in the way of the data cache. The control logic may determine whether the subsequent (eg, next) execution of the instruction is predicted to access the same way as the execution of the instruction (ie, the first execution).[0048] At 210, entries corresponding to the instructions are populated in the table. The entry may be populated in the table in response to determining that the incremented address is in the way of the data cache. 
The control logic may populate (eg, generate) an entry corresponding to the instruction into a way prediction table, such as the way prediction table 152 of FIG. 1. In certain embodiments, the generation (eg, population) of entries in a table may be based on one or more conditions associated with the instruction that are satisfied before the entry is created (eg, populated) (Eg, automatic incremental addressing mode, type of instruction, instructions in the loop) are conditional. The control logic may determine whether the entry is an instruction (eg, a PC field value), a way of the data cache accessed during the first execution of the instruction (eg, a WAY field value), a register location (eg, a REG field value) One or more fields of the entry are selected to identify whether subsequent (eg, next) execution is expected to access the same cache line (eg, V / I field value), or a combination thereof It can populate.[0049] By generating (eg populating) an entry for an instruction in a way prediction table, ways accessed based on the instruction can be recorded and tracked. The recorded way selectively enables and / or invalidates one or more drivers of the data cache so as to realize the power advantage during the data cache data access (for example, from all drivers Too few drivers are turned on) during a subsequent execution of one or more of the instructions.[0050] Referring to FIG. 3, a flow diagram of a second exemplary embodiment of a method 300 for performing a way prediction associated with a data cache is shown. For example, the data cache may include the data cache 102 of FIG. In certain embodiments, the method 300 may be performed by the control logic 150 of FIG. 1.[0051] At 302, the addressing mode of the instruction is identified. In a particular embodiment, the addressing mode of the instruction is identified as an auto incremental addressing mode. An instruction having an auto incremental addressing mode may identify an increment value and a register location that stores the address associated with the instruction. The addressing mode may be determined (eg, identified) by control logic, such as control logic 150 of FIG. 1, based on instruction data received from decode logic, such as decode logic 190. In addition, the type (eg, instruction type) associated with the instruction may be determined. For example, the type may be determined to be a load type or a store type. The type can be determined (eg, identified) by control logic.[0052] At 304, the table is read based on the identity of the instruction to identify an entry in the table associated with the instruction identifying the way of the data cache. The control logic may determine whether the table includes an entry associated with the instruction identifying the way of the data cache. For example, the control logic may access the table and read the entry from the table based on the instruction. The table may include the way prediction table 152 of FIG. When a decision is made that the table contains an entry, the control logic determines whether the entry is identified as valid or invalid based on the value of the validity bit included in the V / I field associated with the entry As shown in FIG. The V / I field may be included in the entry or stored in a different register location or buffer than the table. The value of the validity bit may enable the control logic to determine whether an entry or a portion of the entry (eg, at least one field) is valid to provide a way prediction for the instruction. 
Based on the entry, the control logic selectively enables and / or deactivates one or more drivers of the data cache (eg fewer drivers are turned on than all drivers), an entry You can use the way prediction included in. By selectively invalidating one or more drivers of the data cache, a power advantage is realized during data access of the data cache.[0053] At 306, an estimate is made as to whether the next access of the data cache based on the instruction accesses the same way. For example, the control logic may predict whether the next access of the data cache based on the instruction will access the way identified by the entry. The control logic may add the increment value associated with the instruction to the address of the register location associated with the instruction (eg, stored in the register location) to determine the incremented address, A prediction may be made by determining whether the incremented address is on the same cache line as that address of the data array.[0054] If an expectation is made that the next access of the data cache will not access the way, processing proceeds to 308 where the entry in the table is either invalidated or deleted (eg, removed). For example, the control logic may remove entries, or may be stored in a cache line of a data cache that is different from a cache line where the incremented address includes an address (eg, an address incremented to generate an incremented address) Based on the determination that there is some, it can indicate that the entry in the table is invalid. Alternatively, if the prediction is made that the next access of the data cache will access the way, processing proceeds to 310 and entries in the table are maintained. For example, the control logic may maintain the entry of the table as valid to provide way prediction based on a determination (eg, prediction) that the incremented address is on the same cache line as the address of the data array. Control logic, or logic other than control logic, may monitor (eg, track) register files containing register locations. As a result of monitoring (eg tracking) the register file, the control logic can invalidate or delete the entry in response to the content of the register location being changed. The content of the register location may be changed by another instruction.[0055] By accessing the table, the previous way of the data cache accessed based on the instruction can be used as a way prediction for instruction execution. In addition, the control logic may determine (eg, predict) whether the subsequent (eg, next) execution of the instruction accesses the same cache line as the instruction execution. Based on the determination of whether subsequent runs access the same cache line, the control logic may remove the entry, update one or more fields of the entry, and / or add one or more fields of the entry Can be maintained. By updating and maintaining entries in the table, the table can be used and trusted to make one or more way predictions and avoid mispredictions.[0056] Referring to FIG. 4, a particular illustrative embodiment of a row 400 of data caches is shown. For example, the data cache may include the data cache 102 of FIG. Row 400 includes a first cache line A 402, a second cache line B 404, a third cache line C 406, a fourth cache line D 406, each separated by a cache line boundary 410 - 408. For example, the four cache lines 402 - 408 may correspond to the cache lines 120 a - d of FIG. 1. 
It is to be appreciated that although four exemplary cache lines AD are shown, row 400 may include more than four cache lines or fewer than four cache lines. Each of cache lines 402 - 408 may include multiple segments. For example, the first cache line A 402 includes a first segment 402 a, a second segment 402 b, a third segment 402 c, and a fourth segment 402 d. In a particular embodiment, each cache line 402 - 408 includes the same number of segments. Each of the cache lines 402 - 408 may be associated with a corresponding way.[0057] An exemplary embodiment of a representative computer instruction including a representative program loop (eg, loop code 430) is illustrated in FIG. 4 to illustrate the operation and usage of row 400. The instruction includes a loop code 430 starting with the loop top identifier 440. The loop contains three instructions 442, 444, and 446. The loop ends with an end loop indicator 448. Not all aspects of the program loop are shown to give a simplified example. For example, for brevity, the number of loop iterations and the loop termination condition are omitted.[0058] The first instruction 442 is an exemplary load type instruction (eg, a post increment load) that includes an auto incremental addressing mode. In a particular embodiment, the first instruction 442 stores the memory address and stores the memory address in the register location R 9 to load content (eg, data) corresponding to the memory address from the data cache into the register location R 1 It is a memory write instruction that accesses register location R 9 to use. A register file (not shown) may include multiple register locations. After the content identified by register location R 9 is loaded into register location R 1, the value of the memory address of register location R 9 is automatically incremented by 2 (eg, post increment of 2). Therefore, the first instruction 442 has an increment value of 2 and can be regarded as operating on the base register R 9. It can be accessed to a particular cache line of the data cache to load the contents of register location R 9 into register location R 1. Certain cache lines are associated with specific ways of the data array and specific drivers.[0059] The second instruction 444 is a representative operation instruction. In a particular embodiment, the second instruction 444 includes a register location R 4 (which has a corresponding first content (eg, data)) and stores a first memory address ) Register location R 5 which stores the second memory address. The first content corresponding to the first memory address of the register location R 4 and the second content corresponding to the second memory address of the register location R 5 can be added together and the sum is added to the second instruction 444 (Eg, data) corresponding to the third memory address stored in register location R 9, based on the first memory address.[0060] The third instruction 446 may include another load type instruction including an automatic incremental addressing mode. For example, execution of the third instruction 446 stores the memory address and uses the memory address in the register location R 10 to load content (eg, data) corresponding to the memory address from the data cache to the register location R 2 , Register location R 10 (eg, the base register of third instruction 446). 
After the content is loaded into register location R2, the value of the memory address of register location R10 may be incremented by one (eg, post increment of 1).[0061] Executing the loop code 430 includes executing one or more (eg, one or more iterations) instructions 442, 444, and 446. During the first iteration of the loop code 430, the first instruction 442 is executed. Since the first instruction 442 includes one or more way prediction characteristics, such as an automatic incremental addressing mode, control logic (not shown) generates an entry corresponding to the first instruction 442 in the way prediction table . For example, the control logic 150 of FIG. 1, or other control logic not shown, may generate an entry in the way prediction table 152. In certain embodiments, the control logic may generate an entry in the way prediction table after determining that the way prediction does not include an entry corresponding to the first instruction 442.[0062] Since the first instruction 442 includes an auto incremental addressing mode, the control logic may identify an increment value of two. Since the increment value of 2 is smaller than the size of the cache lines 402 to 408 (eg, the size of 4), the control logic determines that the next way to be accessed corresponds to the same cache line (eg, remains within the same cache line ). Based on the prediction that the same way will be accessed during the next iteration (eg, the next execution of the first instruction 442), the way can be identified as a way prediction for the next execution of the first instruction 442 .[0063] By way of example, at 450, during the first iteration of the loop code 430, the contents of register location R 9 may point to the first (sequential) segment 406 a of the third cache line C 406. For example, the third cache line C 406 may comprise four segments, such as a first segment 406 a, a second segment 406 b, a third segment 406 c, and a fourth segment 406 d. Thus, at 452, incrementing the contents of register location R 9 by an increment value of 2 means that the contents of register location R 9 will point to the third (sequential) segment 406 c of the third cache line C 406. Thus, the way prediction for the first instruction 442 is used during the first iteration of the loop code 430 to determine the way (corresponding to the third cache line C 406) used during the execution of the first instruction 442 Will be identified. Thus, as described with respect to FIG. 1, a new entry for the first instruction 442 is added to the way prediction table 152 by the control logic 150. For example, a particular entry may have the following fields: PC = 0x10148 (eg, corresponding to the first instruction 442), WAY = 3, REG = R 9, and V / I = valid Data value) that is associated with the data item.[0064] The control logic may set the validity bit (eg, to indicate validity) of the entry's V / I field based on the determination that subsequent execution of the instruction accesses the same cache line. In another embodiment, the WAY field of the new entry may be populated only to identify ways to be accessed when a decision is made (eg, prediction) that subsequent executions access the same cache line. When prediction is made that subsequent executions will not access the same cache line, the control logic may set the WAY field of the new entry to a null value (eg, a zero value). 
In another particular embodiment, an entry is created in the way prediction table when predictions are made that subsequent executions access the same cache line.[0065] The creation of a new entry based on the instruction may be based on one or more additional way prediction characteristics being identified (eg, based on the instruction data, one or more additions being made by the control logic 150 of FIG. 1 (For example, based on those characteristics). For example, a new entry may be created (eg, populated) only when the instruction is associated with auto incremental addressing mode and / or base plus offset addressing mode. As another example, if an instruction is included in a loop, a new entry may only be generated (eg, populated). In particular, when the instruction is the first instance of the instruction in the loop, a new entry can be generated. In a particular embodiment, no entry is generated when the instruction is not included in the loop.[0066] When the instruction uses the base plus offset addressing mode, the control logic 150 is not operable to make an estimate as to whether subsequent execution of the instruction will access the same cache line based on the first execution of the instruction There are cases. For example, unlike the automatic incremental addressing mode, executing the base plus offset addressing mode may cause the address location to change to a predetermined value (eg, 1, 2, 3, 4, 5, , Constant). At least two executions of the instruction may be required to determine the stride (eg, offset) of the instruction using the base plus offset addressing mode. Thus, when an instruction uses base plus offset addressing mode, a new entry may be generated during the first execution of the instruction, but a second execution of the instruction (eg, the next execution after the first execution) May not be able to set the value of the V / I field of the new entry to indicate that the value of the WAY field may be used as the way to be predicted up to. In an alternative embodiment, when an instruction uses base plus offset addressing mode, entries can not be generated in the way prediction table 152 based on the first execution of instructions. Rather, based on the first execution of the instruction, for the potential new entry associated with the instruction, the value associated with the PC field and the value associated with the WAY field can be identified and the value associated with the PC field The value and the value associated with the WAY field may be maintained in a different location (and / or structure) than the way prediction table 152. For example, the location may include a buffer or register associated with control logic, such as control logic 150. An entry associated with an instruction using the base plus offset addressing mode may be generated based on the execution of the instruction following the first execution.[0067] In certain embodiments, the control logic 150, or other control logic, registers a register file (not shown) identified as a register (eg, identified in the REG field) in the way prediction table, such as the way prediction table 152, And tracking logic that tracks each register location of the register. In certain embodiments, the control logic 150, or other control logic, tracks only the register location of the register file identified in the corresponding REG field of the valid entry. 
For example, since register location R 9 was used by first instruction 442 and since register location R 9 was associated with an entry added to way prediction table 152, the tracking logic may arbitrarily modify the value of register location R 9 Monitor the instructions. In certain illustrative examples, the second instruction 444 changes (eg, modifies) the value of the register location R 9. Accordingly, the tracking logic monitors one or more instructions, such as the second instruction 444, and in response to detecting that the value of the register location R 9 has been modified by the second instruction 444, (Eg, to invalidate the V / I field) or delete (eg, remove) an entry in the way prediction table 152 corresponding to the instruction 442 of the instruction 442. Thus, after a subsequent execution (eg, the next execution) of the first instruction 442, the way prediction table 152 may include a valid entry associated with the first instruction 442 (or any Entries of the same).[0068] Continuing loop code 430, third instruction 446 may include an automatic incremental addressing mode. The control logic may identify an increment value of 1 and a register location R 10 (eg, the base register) of the third instruction 446. At 420, during the first iteration of the loop code 430, the contents of the register location R 10 may point to the first (sequential) segment 402 a of the first cache line A 402. At 422, since the increment value of 1 is smaller than the size of the first cache line A 402 (eg, the first cache line A 402 includes the size of 4), the control logic determines that the next way is the second (Sequential) segment 402 b within the same cache line 402. Based on the prediction that the same way will be accessed, the way can be identified as a way prediction for the next execution of the third instruction 446. 1, a new entry for the third instruction 446 is added to the way prediction table 152 by the control logic 150. The first iteration of the loop ends with an end loop indicator 448.[0069] During the second iteration of the loop code 430, the first instruction 442 is executed again. The control logic 150 may search the way prediction table 152 for valid entries corresponding to the first instruction 442. During the first iteration of the loop code 430, since the entries are generated and stored in the way prediction table 152, the way prediction table 152 is stored in the PC associated with the program counter value corresponding to the first instruction 442 It has an entry containing a field value. However, since the tracking logic has invalidated the entry, the result of way prediction table 152 lookup (eg, reading entry of way prediction table 152) based on the first instruction 442 is an indication of an invalid entry. Invalid entries will indicate that the control logic 150 can not simply trust the value of the WAY field (eg, way prediction) indicated by the entry in the way prediction table 152. Accordingly, the control logic 150 will selectively activate (eg, enable) a search (eg, a tag lookup operation) of the tag array 180 based on the memory address stored in the register location R 9.[0070] In certain embodiments, the control logic activates the tag lookup operation at the same time (eg, concurrently) using the value of the WAY field indicated by the entry in the way prediction table 152. 
Simultaneously enabling the tag lookup operation allows the control logic to determine whether a misprediction based on the value of the WAY field (eg, predicting a false way) will occur. In another embodiment, the control logic activates all ways' drivers simultaneously (eg in parallel), so that as a result of trusting the value of the WAY field of invalid entries, no mispredicted performance penalty arises In order to ensure that the tag lookup operation is enabled.[0071] Continuing execution with the loop code 430, the addition operation corresponding to the second instruction 444 is executed again, and then the processing proceeds to execute the third instruction 446. Since the third instruction 446 includes an automatic incremental addressing mode, the control logic 150 accesses the way prediction table 152 (eg, reads the way prediction table 152) and a third instruction 446 (corresponding to the register location R 10) To identify previously stored entries associated with that entry. In this case, the entry associated with the third instruction 446 is valid and the control logic 150 generates a way select signal and is selected (eg, without activating any of the other ways) (eg, predicted A signal for activating a way may be generated from driver enable 156. In this way, the second iteration of the loop code 430 involved in the second execution of the third instruction 446 was used during the first execution of the third instruction 446 in the loop code 430 (previously Advantageously selects a previously stored way (corresponding to the first cache line A 402 accessed). Thus, a previously stored way can be used as a way prediction and the control logic 150 validates a single driver of multiple drivers, such as the plurality of drivers 140a-d of FIG. 1, based on the way prediction (E.g., selectively activated). By selectively enabling a single driver (eg, fewer than all of the plurality of drivers 140a-d), a power advantage is realized during data access of the data cache, such as the data cache 102 of FIG. 1 For example.[0072] At 422, during the second iteration of the loop code 430, the contents of the register location R 10 may point to the second (sequential) segment 402 b of the first cache line A 402. If the content of register location R 10 is incremented by an increment value of 1 at 424, the control logic determines that during the third iteration of loop code 430, the content of register location R 10 is the first cache line A 402's The next way associated with the third instruction 446 will stay in the same cache line 402 by calculating that it will point to a (sequential) segment 402 c of 3. Since the predicted way of the third instruction 446 during the third iteration of the loop code 430 stays in the first cache line A 402, the value of the WAY field of the entry associated with the third instruction 446 is the same (Eg, the way corresponding to the first cache line A 402) and the entry associated with the third instruction 446 may be valid. The processing (eg, execution) of the end loop indicator 448 ends the second iteration of the loop code 430.[0073] The loop code 430 may continue to be processed by additional iterations as described above. For example, loop code 430 may go through a third iteration and a fourth iteration. At 424, during execution of the third instruction 446 in the third iteration of the loop code 430, the register location R 10 may point to the third (sequential) segment 402 c of the first cache line A 402. 
At 426, during execution of the third instruction 446 in the fourth iteration of the loop code 430, the register location R 10 may point to the fourth (sequential) segment 402 d of the first cache line A 402. During execution of the third instruction 446 in the fourth iteration of the loop code 430, the control logic, at 428, as the content of the register location R 10 is incremented by an increment value of 1, 2 cache line B 404, then the next way associated with the third instruction 446 (eg during the fifth iteration of the loop code 430) will be within the same cache line A 402 (Eg, traversing across the boundary 410). Since the predicted address of the third instruction 446 (associated with the next execution of the third instruction 446) is outside of the first cache line A 402, the control logic invalidates the way's prediction of the entry Or delete the entry associated with the third instruction 446 from the way prediction table. If the control logic invalidates the entry, during the fifth iteration of the loop code 430 the entry may be updated with a new (valid) way prediction. Alternatively, when the control logic deletes the entry, a new entry may be generated during the fifth iteration.[0074] By generating (eg, populating) and maintaining entries for one or more instructions in the way prediction table, the processor system implements way prediction (eg, way prediction techniques) on the data cache (eg, , To do). Performing a way prediction for the data cache allows the processor system to realize the power advantage during some data access of the data cache. For example, a way prediction technique may be utilized when one or more instructions have a predictable access pattern to be performed as part of a loop (eg, executed several times). Such instructions may include instructions using automatic incremental addressing mode or base plus offset addressing mode.[0075] Referring to FIG. 5, a flow diagram of a third exemplary embodiment of a method 500 for performing a way prediction associated with a data cache is shown. For example, the data cache may include the data cache 102 of FIG. In certain embodiments, the method 500 may be performed by the control logic 150 of FIG. 1.[0076] At 502, one or more way prediction characteristics of the instruction are identified. One or more way prediction characteristics may include an addressing mode of the instruction, an instruction type of the instruction, an indication of whether the instruction is included in the loop, or a combination thereof. For example, one or more way prediction characteristics may be identified by control logic, such as control logic 150 of FIG. 1, or by decoding logic, such as decoding logic 190 of FIG. 1. In a particular embodiment, a determination is made whether the addressing mode of the instruction is an auto incremental addressing mode or a base plus offset addressing mode. In another particular embodiment, a determination is made as to whether the instruction type of the instruction is a load type or a store type. In another particular embodiment, a determination is made whether an instruction is included in a loop of one or more instructions. 
Decoding logic may give the control logic an instruction type, an addressing mode, or an indication of whether the instruction is included in the loop.[0077] At 504, the table is selectively read based on identification information of one or more way prediction characteristics to identify an entry in the table associated with the instruction identifying the way of the data cache. The control logic may read the table to determine if the table contains an entry corresponding to the instruction. For example, the control logic 150 of FIG. 1 may selectively read the way prediction table 152. The corresponding entry in the table may indicate a way (eg, a predicted way) based on the value of one or more bits contained in the entry (eg, the value of the WAY field of the entry). One or more bits may be applied to multiple drivers as masks to selectively enable or disable each of the drivers of the plurality of drivers. The control logic may also determine whether the entry is valid. In a particular embodiment, the predicted way is the same way as the previously accessed way, based on the previous execution of the instruction. For example, the control logic may identify the predicted way from the table and selectively enable and / or invalidate one or more drivers when the entry is valid. One or more drivers such as drivers 140a-d may be included in a data cache, such as the data cache 102 of FIG.[0078] At 506, an estimate is made as to whether the next access of the data cache based on the instruction will access the way. For example, the control logic may perform an arithmetic operation to predict (eg, confirm) whether the next execution of the instruction accesses the same cache line as execution and thus the same way. When a decision (eg, prediction) is made that the incremented address is not on the same cache line accessed during execution of the instruction, the V / I field (eg, validity bit) of the entry indicates that the value of the WAY field is It can be set to indicate that it is invalid and can not be trusted to indicate the expected way to be used during subsequent execution of the instruction. If a determination is made that the incremented address is on the same cache line, the V / I field of the entry can be set to indicate that the value of the WAY field is valid and can be trusted.[0079] By accessing the table, the previous way of the data cache accessed based on the instruction can be used as a way prediction for instruction execution. Previously stored ways can be used as way predictions and one or more drivers can be selectively invalidated (eg, turned off) based on way prediction. By selectively invalidating one or more drivers fewer drivers are activated (eg turned on) than all the drivers, and a power advantage can be realized during the data cache data access .[0080] The method 200 of FIG. 2, the method 300 of FIG. 3, the method 500 of FIG. 5, or any combination thereof may be implemented in a field programmable gate array (FPGA) device, an application specific integrated circuit (ASIC), a central processing unit (CPU) , A digital signal processor (DSP), a controller, another hardware device, a firmware device, or any combination thereof, or the like. By way of example, at least a portion of any of method 200 of FIG. 2, method 300 of FIG. 3, method 500 of FIG. 5, or any combination thereof may be stored in memory 632 Instructions executing by the processor 610.[0081] FIG. 
6 is a block diagram of a particular embodiment of a device 600 (eg, a communication device) that includes a cache memory system that utilizes a multi-bit way prediction mask. Device 600 may be a wireless electronic device and may include a processor 610, such as a digital signal processor (DSP) coupled to memory 632.[0082] Processor 610 may be configured to execute software 660 (eg, program of one or more instructions) stored in memory 632. The processor 610 may include a data cache 680 and control logic 686. For example, the data cache 680 may include or correspond to the data cache 102 of FIG. 1 and the control logic 686 may include or correspond to the control logic 150 of FIG. 1. The data cache 680 may include a data array 682 and a tag array 684. Data array 682 and tag array 684 may correspond to data array 110 and tag array 180 of FIG. 1, respectively. Data array 682 may include multiple line drivers, such as line drivers 140a-d of FIG. The control logic 686 may include a way prediction table 688. The way prediction table 688 may include or correspond to the way prediction table 152 of FIG. In the illustrative example, processor 610 includes or corresponds to processor system 100 of FIG. 1, or components thereof, and operates according to any of the embodiments of FIGS. 1-5 or any combination thereof.[0083] In certain embodiments, the processor 610 may cause the computer, such as the processor 610, to access at least a portion of any of the method 200 of FIG. 2, the method 300 of FIG. 3, the method 500 of FIG. 5, Executable instructions 660 stored on a non-transitory computer readable medium, such as memory 632, that is executable to cause the computer to perform one or more functions. For example, the computer executable instructions 660 may be executable to cause the processor 610 to identify one or more way prediction characteristics of the instructions. The computer executable instructions 660 may cause the processor 610 to select a table based on the identification information of one or more way prediction characteristics to identify an entry in the table associated with the instruction that identifies the way of the data cache To make it readable. Computer-executable instructions 660 may further be executable to cause processor 610 to predict whether the next access of the data cache based on the instruction will access the way.[0084] Camera interface 668 is coupled to processor 610 and is coupled to a camera, such as video camera 670. The display controller 626 is coupled to the processor 610 and the display device 628. A coder / decoder (codec) 634 may also be coupled to the processor 610. Speaker 636 and microphone 638 may be coupled to codec 634. Wireless interface 640 may be coupled to processor 610 and antenna 642 such that wireless data received via antenna 642 and wireless interface 640 may be provided to processor 610. In a particular embodiment, the processor 610, the display controller 626, the memory 632, the codec 634, the wireless interface 640, and the camera interface 668 are included in the system in package or the system on chip device 622. In certain embodiments, input device 630 and power supply 644 are coupled to system-on-chip device 622. 6, the display device 628, the input device 630, the speaker 636, the microphone 638, the wireless antenna 642, the video camera 670, and the power supply 644 are arranged outside the system-on-chip device 622 is there. 
However, each of the display device 628, the input device 630, the speaker 636, the microphone 638, the wireless antenna 642, the video camera 670, and the power supply 644 can be coupled to components of a system-on-chip device 622, such as an interface or controller.[0085] An apparatus is disclosed that includes means for identifying one or more way prediction characteristics of an instruction, with one or more of the described embodiments. The means for identifying may include the control logic 150 of FIG. 1, the decoding logic 190, the processor 610 of FIG. 6, the control logic 686, one or more others configured to identify one or more way prediction characteristics Devices or circuits, or any combination thereof.[0086] The apparatus also includes means for selectively reading the table based on the identification information of one or more way prediction characteristics in order to identify an entry in the table associated with the instruction identifying the way of the data cache obtain. The means for selectively reading the table may be selected from the group consisting of control logic 150 of FIG. 1, control logic 686 of FIG. 6, one or more other devices or circuits configured to selectively read a table, And may include any combination.[0087] The device may also include means for predicting whether the next access of the data cache based on the instruction accesses the way. The means for making predictions may include control logic 150 of FIG. 1, control logic 686 of FIG. 6, one or more other devices or circuits configured to make predictions, or any combination thereof .[0088] The apparatus may also include means for decoding the instructions, the instructions including a register identifier and having a predictable next address. The means for decrypting may be decryption logic 190 of FIG. 1, processor 610 of FIG. 6, one or more other devices or circuits configured to decode instructions to be executed, or any combination thereof For example.[0089] The device may also include means for selectively driving the data cache line based on the way. The means for selectively driving the data cache lines may include the line drivers 140a-c of FIG. 1, the data array 682 of FIG. 6, one or more others configured to selectively drive the data cache lines Devices or circuits, or any combination thereof.[0090] One or more of the disclosed embodiments may be implemented in a mobile phone, cellular phone, satellite phone, computer, set top box, entertainment unit, navigation device, communication device, personal digital assistant (PDA), fixed location data unit, Mobile location data unit, tablet, portable computer, desktop computer, monitor, computer monitor, television, tuner, radio, satellite radio, music player, digital music player, portable music player, video player, digital video player, digital video disk Implemented in a system or device, such as device 600, which may include a DVD player, a portable digital video player, or any combination thereof It can be. As another illustrative, non-limiting example, a system or device may be a portable data unit such as a mobile phone, a handheld personal communication system (PCS) unit, a personal digital assistant, a global positioning system (GPS) compatible device, a navigation A fixed location data unit such as a device, a meter reading device, or any other device that stores or retrieves data or computer instructions, or any combination thereof.[0091] Although one or more of FIGS. 
1-6 may depict a system, apparatus, and / or method according to the teachings of the present disclosure, it is to be understood that this disclosure is not limited to these illustrated systems, apparatuses, and / It is not limited. Embodiments of the present disclosure may be suitably used in any device including an integrated circuit including a processor and a memory.[0092] The various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software executed by a processor, or a combination thereof Those skilled in the art will further appreciate that. The various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their function. Whether such functionality is implemented as hardware or as processor executable instructions depends on the particular application and design constraints imposed on the overall system. Those skilled in the art may implement the described functionality in varying ways for a particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0093] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be a random access memory (RAM), a flash memory, a read only memory (ROM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM Trademark)), a register, a hard disk, a removable disk, a compact disk read only memory (CD-ROM), or any other form of non-transitory storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from and write information to the storage medium. Alternatively, the storage medium may be integrated with the processor. The processor and the storage medium may reside in an application specific integrated circuit (ASIC). The ASIC may reside in a computing device or user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a computing device or user terminal.[0094] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art and the principles defined herein may be applied to other embodiments without departing from the scope of the present disclosure. Accordingly, the present disclosure is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features defined by the following claims It should be. |
A processor (100) predicts a number of loop iterations (115) associated with a set of loop instructions. In response to the predicted number of loop iterations exceeding a first loop iteration threshold, the set of loop instructions are executed in a loop mode that includes placing at least one component (105) of an instruction pipeline (114) of the processor in a low-power mode or state and executing the set of loop instructions from a loop buffer (109). In response to the predicted number of loop iterations being less than or equal to a second loop iteration threshold, the set of loop instructions are executed in a non-loop mode that includes maintaining at least one component of the instruction pipeline in a powered up state and executing the set of loop instructions from an instruction fetch unit (103) of the instruction pipeline. |
1.A method, the method includes:In the processor, predict the number of loop iterations associated with a set of loop instructions; andIn response to the predicted number of loop iterations exceeding a first loop iteration threshold, executing the set of loop instructions in a loop mode, the loop mode including:Placing at least one component of the instruction pipeline of the processor in a low power mode; andThe set of loop instructions from the loop buffer is executed.2.The method of claim 1, further comprising:In response to the predicted number of loop iterations being less than the second loop iteration threshold, delay entering the loop mode until the threshold number of loop iterations has been performed.3.The method according to claim 2, further comprising:In response to the predicted number of loop iterations being greater than the second loop iteration threshold, the loop mode is entered before the threshold number of loop iterations has been performed.4.The method of claim 1, wherein placing the at least one component of the instruction pipeline in the low power mode includes placing a loop exit predictor of the processor in the low power mode.5.The method of claim 1, further comprising:After placing the at least one component of the instruction pipeline in the low power mode, update the number of loop iterations associated with the loop instruction through a loop exit predictor; andBased on the updated number of loop iterations, a time to restore power to the at least one component of the instruction pipeline in the low power mode is determined.6.The method according to any one of claims 1 to 5, the method further comprising:Before predicting the number of loop iterations, the instruction is identified as the set of loop instructions by matching the characteristics of the loop instruction with an identifier in a set of stored loop identifiers.7.The method of claim 1, wherein the placing the at least one component of the instruction pipeline in the low power mode is performed before executing the instructions in the stored set of instructions.8.The method according to any one of claims 1 to 7, the method further comprising:Predict the exit of the set of loop instructions during the loop mode.9.A method, the method includes:In the processor, in response to predicting the number of loop iterations:Store a set of loop instructions in the loop buffer;Placing the components of the instruction pipeline of the processor in a low power mode;Executing the set of loop instructions from the loop buffer;Predicting the loop exit through the loop exit predictor of the processor; andRestore power to the component placed in the low power mode based on the predicted loop exit.10.The method according to claim 9, further comprising:Before powering off the components of the instruction pipeline, the predicted number of loop iterations is compared with a first loop iteration threshold.11.9. The method of claim 9, wherein the components of the instruction pipeline are placed in the low power mode before executing the set of loop instructions from the loop buffer.12.11. 
The method of any one of claims 9 to 11, wherein powering down the component of the instruction pipeline includes powering down the loop exit predictor of the processor.13.A processor, the processor includes:An instruction cache, the instruction cache having a set of cyclic instructions;A circular buffer configured to store the set of circular instructions;A loop exit predictor configured to predict the number of loop iterations; andWherein the processor is configured to:In response to the predicted number of loop iterations exceeding a first loop iteration threshold, executing the set of loop instructions in a loop mode, the loop mode including:Placing at least one component of the instruction pipeline of the processor in a low power mode; andExecute the set of loop instructions from the loop buffer; andIn response to the predicted number of loop iterations being less than or equal to the first loop iteration threshold, executing the set of loop instructions in an acyclic mode, the acyclic mode including:Keeping the at least one component of the instruction pipeline in an active state; andThe set of loop instructions acquired from the instruction cache through the instruction fetching unit of the instruction pipeline is executed.14.The processor of claim 13, the processor further comprising:A decoder for decoding the set of cyclic instructions into micro-operations to be executed by the functional unit of the processor; andThe instruction fetching unit is configured to provide the loop instruction from the instruction cache to the decoder.15.The processor of claim 14, wherein the instruction fetch unit is configured to provide instructions to the loop exit predictor.16.The processor of claim 13, wherein the at least one component of the instruction pipeline placed in the low power mode is an instruction fetching component of the processor.17.The processor of claim 13, wherein the at least one component of the instruction pipeline placed in the low power mode is the loop exit predictor.18.The processor according to any one of claims 13 to 18, wherein the loop exit predictor is further configured to:After placing the at least one component of the instruction pipeline in the low power mode, update the number of loop iterations associated with the loop instruction; andThe timing of restoring power to the at least one component placed in the low power mode of the instruction pipeline is based on the updated number of loop iterations.19.The processor according to any one of claims 13 to 17, the processor further comprising:A buffer of stored circular identifiers; andThe loop exit predictor is further configured to match the characteristics of the loop instruction with the identifier in the buffer of the stored loop identifier.20.The processor of any one of claims 13 to 19, wherein placing the at least one component of the instruction pipeline in the low power mode is before executing any instructions associated with the loop instruction And it is executed after predicting the number of loop iterations associated with the loop instruction. |
Use loop exit prediction to speed up or suppress the processor's loop modeBackground techniqueIn order to improve processing efficiency, modern processors sometimes use a loop mode to execute program loops. In the loop mode, the processor retrieves the loop instruction from the loop instruction buffer and executes the instruction instead of repeatedly retrieving the loop instruction through the instruction fetch unit. The loop mode allows the processor to save resources by, for example, placing the instruction fetch unit or other parts of the processor in a low power state when in the loop mode. However, conventional cyclic mode operation is inefficient under certain conditions. For example, the processor usually exits the loop mode because it encounters a branch misprediction for the loop exit instruction. Branch prediction errors cause the processor's instruction pipeline to be flushed, which consumes additional processor resources and generates power overhead. For relatively short instruction cycles, the resources consumed by pipeline flushing may exceed the resources saved by running in the loop mode.Description of the drawingsThe present disclosure can be better understood by referring to the accompanying drawings, and numerous features and advantages of the present disclosure will be apparent to those skilled in the art. The same reference numerals are used in different drawings to indicate similar or identical items.Figure 1 is a block diagram of an instruction pipeline in a processor that implements loop exit prediction in a low power state in a loop mode according to some embodiments.Figure 2 is a block diagram of the instruction pipeline of Figure 1 in which the loop exit predictor is in a power-on state during loop mode, according to some embodiments.Figure 3 is a block diagram of the instruction pipeline of Figure 1 according to some embodiments, illustrating additional aspects of the loop exit predictor and loop mode.4 is a flowchart illustrating a method of using loop exit prediction to identify relatively large loop iterations of loop patterns according to some embodiments.5 is a flowchart illustrating a method of using loop exit prediction to identify small loop iterations of loop patterns according to some embodiments.Detailed waysFigures 1 to 5 illustrate techniques for adopting loop exit prediction (LEP) at the processor to save processor resources associated with adopting the loop mode. The processor includes a LEP unit that predicts the exit of each loop in execution. Based on the prediction made by the LEP unit, the processor implements one or more loop management techniques, including: refusing to enter loop mode for relatively short loops; exiting loop mode before indicating a branch prediction error or encountering a branch prediction error; and For relatively large loops, accelerate into loop mode. Each of these technologies reduces the amount of resources consumed by the processor in the loop mode, thereby improving processing efficiency.To illustrate, in some embodiments, the processor uses LEP units to predict the number of iterations of each execution loop in the program flow. In response to the LEP unit indicating that the number of iterations of the loop is lower than the specified threshold number of iterations, the processor inhibits entering the loop mode with respect to the loop. 
When the resource cost of entering the loop mode exceeds the resource savings obtained by executing the loop in the loop mode, the processor thus avoids entering the loop mode.In some embodiments, the processor employs the LEP unit when executing the loop in the loop mode to predict when it is desired to exit the loop. In response to the LEP unit indicating the predicted loop exit, the processor initiates the exit of the loop mode, for example, fetching the instruction pipeline when exiting the loop and filling the instruction pipeline with one or more next instructions to be executed. Therefore, the processor does not wait for the branch misprediction to indicate or trigger the loop exit, so this process avoids pipeline refresh, which consumes processor resources and delays further instruction execution. Even during loop mode, LEP is used to predict loop exit branches. In some embodiments, a dedicated LEP unit within the processor performs LEP. Since the LEP will make special adjustments for loop exit branches, the LEP accuracy is higher than the accuracy of general branch prediction applied by one or more branch predictors during the execution of the loop.The processor also uses the predicted number of iterations provided by the LEP unit to identify relatively large loops, and the processor accelerates into loop mode before executing large loops. In particular, the processor nominally enters the loop mode in response to the first threshold number of iterations of the loop that has been executed or may be executed before entering the loop mode to ensure that the loop is actually encountered and the loop of the loop instruction set is successfully completed. However, in some embodiments, in response to the predicted number of iterations exceeding the specified second threshold, the processor initiates the loop mode without waiting for the first threshold number of loop iterations to be executed, thereby achieving better implementation than using loop mode. The solution enters the loop mode faster to save processor resources.FIG. 1 is a block diagram of an instruction pipeline architecture in a processor 100 that implements LEP according to some embodiments. To simplify the description, only some components of the processor 100 are illustrated. In addition, some components of the processor 100 can be regarded as a part of the front side or the rear side of the processor 100 for retrieving and executing instructions, respectively, as traditionally understood, but since the technology described here is applicable to various types of There are multiple types of processors such as components, architectures, instruction sets, operating modes, etc., so there is no such designation in this article. The processor 100 generally has an instruction set (for example, a computer program) to perform tasks on behalf of an electronic device. Therefore, in some embodiments, the processor 100 is incorporated into electronic devices such as desktop computers, laptop computers, servers, smartphones, game consoles, household appliances, and the like.In order to support the execution of instructions, the processor 100 includes an instruction pipeline 114 that includes an instruction cache 101, a data cache 102, an instruction fetch unit 103 with one or more predictors 104, a loop exit predictor 105, The decoder 106, the reordering buffer 107, the register 108, the circular instruction buffer 109, the reservation station 110, the load/store unit 111, one or more execution units 112, and the power controller 117. 
The instruction pipeline 114 operates in at least two modes: an active (non-cyclic) mode and a cyclic mode. In the active mode, power is provided to the components of the processor 100 to actively execute instructions. In the cyclic mode, the processor 100 places one or more components in a low-power state to save one or more resources, including energy that may have been consumed in the active mode, such as repeatedly when certain components remain idle. When executing a loop instruction.In the active mode, the instruction fetching unit retrieves instructions from the instruction cache 101 based on the value stored at the program counter 113. In some embodiments, the instruction acquisition unit 103 also acquires instructions based on the prediction generated by the predictor 104. The predictor 104 includes a branch predictor and a loop predictor that recognize branch instructions, generate branch target addresses, loop instructions, and perform other branch, loop, and prediction functions.The instruction fetching unit 103 provides the fetched instructions to the decoder 106, which converts each instruction into one or more micro-operations. The dispatch stage (not shown) of the decoder 106 sends each micro-operation to a corresponding unit in the load/store unit 111 and the execution unit 112 for execution. The reordering buffer 107 manages the scheduling of the execution of micro-operations at the load/store unit 111 and the execution unit 112. In addition, the reservation station 110 manages the access to the register 108 by the load/store unit 111 and the execution unit 112. After executing the corresponding micro-operation, each instruction is retired at the retirement stage (not shown) of the instruction pipeline 114.In the loop mode, the instruction pipeline 114 uses the loop instruction buffer 109 to execute iterations of the loop. As used herein, a loop is a set of instructions that are executed repeatedly until a conditional branch that terminates the loop is selected. For example, for some loops, the conditional branch instruction is a relative jump instruction that includes an offset added to the program counter 113 that points to the conditional branch instruction. In some embodiments, in order to be recognized as a loop, the instruction pipeline 114 recognizes that in the most recent execution instance of the loop, the conditional branch instruction was taken a threshold number of times (eg, 2, 3, 4, 5). The iteration of the loop refers to a single execution of the instructions of the loop.Returning to the loop mode, in response to detecting an instruction loop (eg, based on the logic of the predictor 104 indicating the instruction loop), the instruction pipeline 114 stores one or more micro-operations for the looped instruction in the loop instruction buffer 109 . In the loop mode, the loop instruction buffer 109 repeatedly provides micro-operations to the load/store unit 111 and the execution unit 112 for execution until the loop exit is reached. Therefore, in the loop mode, the instruction fetching unit 103 suspends retrieving instructions from the instruction cache 101. When in the loop mode, the power controller 117 places certain components of the processor 100 (including one or more components of the instruction pipeline 114) in a low power mode or state to save power, as shown by the dashed line 118. 
For example, the power controller 117 powers off the instruction fetch unit 103, one or more predictors 104, the loop exit predictor 105, and the decoder 106, while powering off other components (such as the loop instruction buffer 109, load/store unit) 111 and the execution unit 112) remain active. When in an active state, certain components remain powered on and perform their functions until a loop exit condition occurs, and power is restored to those components placed in low power mode (e.g., before, during, or after entering the loop mode).In order to support effective execution of the loop mode, the instruction pipeline 114 includes a loop exit predictor (LEP) 105 that predicts the number of iterations of each executed loop. To illustrate, the LEP 105 stores a loop history 116 that indicates the patterns in the loop executed at the instruction pipeline 114. In some embodiments, the LEP 105 generates and stores the loop history 116 during one or more dedicated training periods of the instruction pipeline 114. During each training period, the instruction pipeline 114 executes a specified group of instructions, counts the number of iterations of each executed loop, and stores the number of iterations in a storage structure designated for predicting the number of loops 115. In some embodiments, during normal operation of the processor 100, the instruction pipeline 114 continues to count iterations of each executed loop, and adjusts the predicted loop number 115 based on the iterations.In some embodiments, LEP 105 supports the effective use of cyclic mode in a variety of ways. For example, because some loops have relatively few iterations, the resource cost of entering and exiting loop mode exceeds the resource savings of using loop mode. Therefore, in some embodiments, the instruction pipeline 114 uses the prediction of the LEP 105 to identify loops that are predicted to have relatively few iterations, and avoid entering loop mode for those loops. Therefore, in response to the predicted loop number 115 of the loop being less than the threshold, the instruction pipeline 114 prevents entering the loop mode.In addition, for loops with a relatively high number of iterations, resource saving is enhanced by entering the loop mode faster, so that more iterations of the loop are executed in the loop mode. Therefore, in some embodiments, the instruction pipeline 114 uses LEP prediction to identify loops that are predicted to have a relatively high number of iterations, and accelerates into loop mode for those loops. Therefore, in response to the predicted loop number 115 for the loop being higher than the threshold (eg, the first threshold), the instruction pipeline 114 enters the loop mode to perform the first iteration of the loop.In other embodiments, the instruction pipeline 114 uses the LEP 105 during the loop mode itself. Refer to Figure 2 for a better understanding of this use. 2 is a block diagram of an alternative configuration of the processor 100 according to some embodiments, whereby the instruction pipeline 114 keeps the loop exit predictor 105 in an active state during the loop mode (as illustrated by the placement of the LEP 105 relative to the dashed line 218). When active during the loop mode, the loop exit predictor 105 continues to predict the number of loop iterations. 
For example, the loop exit predictor 105 updates the predicted number of loop iterations that may be executed by the loop being executed, and the loop exit predictor 105 updates the components that restore the instruction pipeline 114 to the low power mode based on the updated predictions The timing of the power supply causes the loop mode to exit before the branch misprediction, and the poor branch misprediction leads to a pipeline refresh that is both performance and power overhead.To illustrate, in conventional processors, the end of the loop and therefore the exit of the loop mode are indicated by a branch misprediction regarding the branch instruction that ends the loop. However, like other mispredictions, a branch misprediction that indicates the end of the loop requires refreshing the instruction pipeline and returning the pipeline to an earlier state. Therefore, the loop is executed until it encounters a false prediction that causes power loss through the pipeline bubble, thereby one or more downstream (such as decoder 106, reordering buffer 107, register 108, reservation station 110, load/store unit 111, and execution Unit 112) Hungry for instructions. On the contrary, the loop exit predictor 105 is kept in an active state and predicts to exit the loop. In response to the predicted exit, the instruction pipeline exits the loop mode by returning the instruction fetching unit 103 and other modules to an active state. The instruction pipeline 114 thus avoids mispredictions of branch exits for loops, and therefore avoids mispredictive performance compensation.FIG. 3 is a block diagram of the processor 100 of FIG. 1 according to some embodiments, illustrating additional aspects of the LEP 105. In addition to the predicted loop number 115 and loop history 116, the loop exit predictor 105 also includes: a loop instruction buffer 302, a loop prediction logic 303, one or more loop counters 304, a loop identifier 305, a first loop threshold 306, a first loop A two-loop threshold 307, a loop prediction 308, one or more comparison results 309, and one or more confidence values 310. The loop prediction logic 303 provides loop exit prediction based on a set of instructions that are identified as being repeatedly executed. Loop prediction 308 includes identifying and storing the predicted loop number for a specific loop or a set of one or more loop instructions. The loop counter 304 and the loop identifier 305 are used by the instructions of the loop exit predictor 105 and the loop instruction buffer 302. For example, the loop counter 304 is used in training to identify when to execute a set of instructions as a loop, and is used during loop execution to track how many iterations of the loop instruction have been completed. When preparing to exit the loop with the predicted loop exit count, each loop counter 304 is compared with the predicted loop exit. One or more loops may be encountered while executing processor instructions, and the processor 100 maintains a history of multiple executing loops in the loop history 116, for example when the second loop is executed within the first loop. The loop counter 304 includes at least a loop confidence value, and current, past, and predicted loop iteration values.During the training phase, the loop exit predictor 105 detects loops and loop exit branches in the set of processor instructions. 
The training includes the loop exit predictor 105 recording the number of loop iterations repeatedly executed for a specific set of loop instructions, for example, recording in one of the loop counters 304. Whenever the number of iterations of a particular loop is the same as the number of iterations in the previous run or execution instance of the loop, the confidence value 310 is incremented, and the confidence value 310 is used by the loop exit predictor 105 when providing its estimate of loop exit .When identifying or predicting, the loop exit predictor 105 searches the current set of loop identifiers 305 for a matching loop identifier. The hit of the LEP entry in the loop identifier 305 indicates that the predicted branch instruction is an exit branch instruction. Finding a hit in the loop identifier 305 includes matching the characteristics of the loop instruction with at least one of the loop identifiers. If the current iteration of the specific loop tracked by the loop exit predictor 105 is equal to the total number of iterations predicted by the loop exit predictor 105, then the specific loop is predicted to exit during this iteration. That is, the specific loop iteration of the loop exit branch is predicted as not-taken. Otherwise, the loop exit branch is predicted as taken.According to some embodiments, when the confidence value 310 associated with a particular branch is sufficiently high, only the LEP performed by the loop exit predictor 105 is executed. If the confidence value 310 is too low (ie, cannot exceed the confidence threshold), or if there is no LEP entry hit in the loop identifier 305, the branch is predicted or subjected to prediction by other branch predictors (eg, the instruction fetch unit 103). One of the devices 104). Since the loop prediction logic 303 will specifically adjust the loop exit branch, when the processor 100 executes the instruction to exit the branch, the prediction accuracy of the loop prediction logic is generally higher than that of other branch predictors or general types of predictors. . The loop prediction logic 303 provides loop predictions 308 about each loop. The loop prediction 308 indicates whether a set of executing instructions is indeed a set of loop instructions. The loop exit predictor 105 provides the predicted number of loops: the number of iterations that the loop instruction set is likely to complete before exiting.According to some embodiments, the entry into the loop mode is triggered by saturating a specified number of bits of the direction history of the conditional branch (not shown) to ensure that the loop (eg, a set of one or more instructions) is actually being used by the processor 100 execution. For example, the cycle is identified by looking for a repeating pattern in one direction in the history register. For a 100-bit direction history register, if the group of five bits in the 100-bit is repeated, this indicates that there is a loop with five conditional branches. In operation, the loop mode is entered only after a certain number of bits of the direction history register is saturated or the direction threshold (value) is exceeded. For a saturation of 80 bits to reach saturation, and a loop with only two conditional branches, the system will have to wait for 40 iterations of the loop, because only at this point will the directional history variable (for example, dirHist) become Saturation (up to 80 count bits), which triggers the entry of cyclic mode. 
On the other hand, if there are too few bits to be saturated (for example, 10), the system will enter a loop mode after the fifth iteration to reach a value of 10 by increasing the saturation by two bits for each loop. If only assuming that a specific loop in this case runs (or is expected to run) for six iterations, the processor will enter loop mode and then immediately exit loop mode, thus wasting the benefits provided by loop mode. Generally, if the number of bits of the direction history is greater than the direction threshold, the processor 100 is identified as executing a loop. The larger the direction threshold, the longer it takes for the processor 100 to be triggered to enter the loop mode, and the smaller the chance of identifying opportunities to save power by entering the loop mode when the instruction is actually a loop instruction. If the direction threshold is too low, the processor 100 may enter the loop mode when there is actually no loop being executed or a too short loop is being executed. Therefore, given the length of the loop in execution, there is a balance as to when to enter the loop mode. In at least some embodiments, branch prediction includes branch direction, direction threshold, and target address. The same is true for LEP, so that LEP includes loop direction, loop threshold, and loop exit target address.The processor 100 also uses loop prediction 308 before and during the execution of micro-operations to determine when to enter and exit loop mode. In particular, once the processor 100 has determined that the micro-operation may be executing a loop, it compares the loop prediction 308 with the first loop threshold 306 and the second loop threshold 307. The comparison produces corresponding comparison results 309, one result at a time. Based on at least one of the comparison results 309, the processor 100 enters a loop mode.When an application (for example, a software application that is the source of micro-operations of the processor 100) undergoes a repeated loop, the micro-operations of the instruction (or instructions) related to that loop are cached in the loop instruction before or during the loop mode Buffer 302. During the loop mode, micro-operations are performed by one or more cores (for example, the first processor core 301) outside the loop instruction buffer 302, and certain other components of the processor 100 are placed in low power mode, thus saving This is the power that the components will consume when running at full power. For a group of loop instructions that are too large to fit into the loop instruction buffer 302, the loop exit predictor 105 remains powered on, and the loop instruction buffer 302 is powered off to a low-power or low-power state, and the energy of the processor 100 The consumption is still the result of cyclical mode. In this case, the loop exit predictor 105 remains powered on, and continues to predict the exit of the loop and the direction of the loop instruction when the instruction is fetched from the instruction cache 101 and provided to the first processor core 301. 
According to at least some embodiments, the loop mode occurs when one or more components are powered off or placed in a low power mode and when a loop instruction from the loop instruction buffer 302 is executed, for example.When the predictor 104 is powered off or placed in the low power mode in the loop mode, one way to exit the loop mode is to have the instruction execution component send an instruction to one or more components of the processor 100 indicating that the exit branch was predicted incorrectly The redirect message is the exit signal. The exit signal causes the instruction pipeline 114 to fetch and execute instructions that occur after the loop. Since branch misprediction is expensive in terms of wasted power and wasted execution cycles, improper selection or specification of direction thresholds will bring power performance overhead. Therefore, there is a trade-off between the power savings obtained by entering the loop mode and the power performance overhead for mispredicting the exit branch instruction. For short loops (for example, loops less than 5 iterations, loops less than 10 iterations), in some cases, for the specific configuration of the processor 100, the power performance overhead of the mispredicted exit branch will exceed the power performance overhead in the loop mode. Power saving. Another method of exiting the loop mode includes: the loop exit predictor 105 remains powered on, and after a successful loop exit prediction, the loop exit predictor 105 provides a loop exit signal. In this way, mispredictions are avoided by passing the instruction pipeline 114 in time for execution of instructions that occur after the loop.4 is a flowchart illustrating a method 400 for implementing loop exit prediction for relatively large loop iteration prediction according to some embodiments. The method 400 is executed by a component of a processor (for example, a component of the processor 100). At block 401, the method 400 includes identifying whether the branch instruction is a loop instruction-a loop that is potentially executed in a loop mode. If it is a loop instruction, at block 402, the processor determines the loop identifier and the number of loop iterations for the loop. This identification includes looking up the cycle identifier in a set of stored cycle identifiers (e.g., cycle identifier 305). At block 403, the processor determines whether the determined number of loop iterations exceeds a first loop threshold, for example, the first loop threshold 306. For example, the first loop threshold is a relatively large number (e.g., 500; 1,000; 10,000) for identifying a loop as a large loop with a relatively large number of predicted loop iterations to be executed by the processor. If the determined number of loop iterations exceeds the first loop threshold, the loop mode is directly entered. In addition, according to some embodiments, if the first cycle threshold is exceeded, it is not checked whether a certain direction history threshold or direction history variable is exceeded: directly enter the cycle mode without performing this check.If the determined number of loop iterations does not exceed the first loop threshold, then in block 404, the processor determines whether the determined number of loop iterations exceeds a second loop threshold, for example, the second loop threshold 307. 
For example, the second loop threshold is a relatively small number (eg, 15, 10, 5, 3) used to identify loops as small loops with a relatively small number of predicted loop iterations to be executed by the processor. If the predicted number of loop iterations does not exceed the second threshold, then at block 405, the processor waits for the next loop by keeping one or more components of the instruction pipeline in the active mode (including keeping the components in a powered-on state). , And execution returns to block 401. In this case, the processor and the loop exit predictor have encountered a loop that may be too small to benefit from the energy saving of loop mode, and the processor avoids entering based on a determination relative to the first loop threshold and the second loop threshold. Cycle mode. Alternatively, the processor avoids entering the loop mode based on a determination relative to the second loop threshold.If the determined number of loop iterations does not exceed the first loop threshold and does not exceed the second threshold, then at block 406, the processor waits for a certain number of actual loop iterations before confirming that the instruction is executing within the loop. If in block 403, the determined number of loop iterations exceeds the first threshold, or after waiting a certain number of successful loop executions in block 406, the method 400 continues to block 407, where a set of loop instructions are stored in, for example, loop Buffer 109 in the circular buffer. Subsequently, the loop instruction is repeatedly executed from the loop buffer. At block 408, one or more components of the processor are placed in a low power mode. In block 409, the loop instruction is executed until a branch misprediction or the number of loop iterations predicted by the execution of the loop instruction occurs and the loop exit predictor accurately predicts the loop exit and provides a loop exit signal to exit. In this case, the processor will not encounter pipeline bubbles. After exiting the loop, at block 410, power is restored to the processor components placed in the low power mode during the loop mode at block 408. Once the power supply is restored, at block 405, the processor waits for the next cycle.5 is a flowchart illustrating a method 500 for implementing loop exit prediction for relatively small loop iteration prediction according to some embodiments. The method 500 is performed by a component of a processor, such as the processor 100. At block 501, the method 500 includes predicting the number of loop iterations associated with a set of loop instructions. In response to the number of loop iterations predicted at block 502 exceeding the first loop iteration threshold, the set of loop instructions are executed in loop mode, and in response to the predicted number of loop iterations not exceeding the first loop iteration threshold (eg, the predicted loop The number of iterations is less than or equal to the loop iteration threshold), and the set of instructions is executed in the active mode. In particular, for the positive result at block 502, at block 503, the loop mode includes placing at least one component of the processor's instruction pipeline in a low power mode or state. 
In addition, according to some embodiments, in block 503, it is not checked whether a certain direction history threshold or direction history variable is exceeded: when it is determined that the predicted number of loop iterations exceeds the first loop iteration threshold, the loop mode is directly entered. At block 504, the loop mode further includes executing the set of loop instructions from the loop buffer.At blocks 505 to 507, according to some embodiments of the method 500, the cyclic mode includes certain additional steps. For example, at block 505, the loop mode updates the predicted number of loop iterations associated with the set of loop instructions. The prediction and update of the number of loop iterations are performed by the loop exit predictor, such as the loop exit predictor 105. At block 506, the loop mode determines the time to restore power to the components of the instruction pipeline placed in the low power mode of the processor. The time to restore power to low-power components can come before the end of the loop instruction execution, because it usually needs to advance time (for example, a certain number of clock cycles) to fill the instruction pipeline with instructions that arrive in sequence after exiting the loop to avoid pipeline bubbles . At block 507, the loop mode predicts the exit of the set of loop instructions. The processor determines the time to restore power to the components placed in the low power mode, and determines the next instruction address based on the predicted exit.At block 508, the active mode of method 500 includes maintaining at least one component of the instruction pipeline in a powered-on state. For example, keep the loop exit predictor such as the loop exit predictor 105 powered on. At block 509, the active mode also executes the set of loop instructions from the instruction fetch stage unit of the instruction pipeline. For the method 500, for each cycle, the processor runs in a cycle mode or in an active mode.In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. Software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer-readable storage medium. The software may include instructions and certain data that, when executed by one or more processors, manipulate one or more processors to perform one or more aspects of the aforementioned techniques. Non-transitory computer-readable storage media include, for example, magnetic or optical disk storage devices, solid-state storage devices (such as flash memory), caches, random access memory (RAM), or one or more other non-volatile memory devices. The executable instructions stored on the non-transitory computer-readable storage medium can be implemented in source code, assembly language code, object code, or other instruction formats that are interpreted by one or more processors or executable in other ways.It should be noted that not all the activities, components or elements described in the general description above are required, a specific activity or part of a device may not be required, and one or more other activities may be performed, or may include those other than those described Components other than components. In addition, the order in which the activities are listed is not necessarily the order in which the activities are performed. 
In addition, the concept has been described with reference to specific embodiments. However, those of ordinary skill in the art will understand that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the appended claims. Therefore, this specification and the drawings are to be regarded as illustrative rather than restrictive, and all such modifications are intended to be included in the scope of the present disclosure.The benefits, other advantages, and solutions to problems have been described above with respect to specific implementations. However, none of the benefits, advantages, solutions to problems, and any features that can make any benefits, advantages, or solutions to problems appear or become more prominent, should not be construed as being critical, necessary, or essential to any or all claims. Essential features. In addition, the specific embodiments disclosed above are only illustrative, because the disclosed subject matter can be modified and practiced in different but equivalent ways that are obvious to those skilled in the art who benefit from the teachings herein. Furthermore, it is not intended to limit the details of construction or design shown herein, except as described in the appended claims. Therefore, it is obvious that the specific embodiments disclosed above can be changed or modified, and all such variations are considered to be within the scope of the disclosed subject matter. Therefore, the protection sought herein is as stated in the appended claims. |
Technologies for software attack detection include a computing device with a processor and a memory external to the processor. The processor originates a memory transaction with an associated secure enclave status bit that indicates whether the memory transaction originated in a secure execution mode, such as from a secure enclave. The processor computes an error-correcting code (ECC) based as a function of memory transaction data and the secure enclave status bit, and performs the memory transaction based on the ECC and the memory transaction data using the memory of the computing device. The processor may store the ECC and the memory transaction data to memory. The processor may load a stored ECC and data from the memory and compare the computed ECC to the stored ECC to detect memory transactions with an invalid secure enclave status bit. Other embodiments are described and claimed. |
WHAT IS CLAIMED IS:1. A computing device for secure memory access, the computing device comprising:a processor; anda memory external to the processor;wherein the processor comprises:a secure execution module to originate, by the processor, a memory transaction and an associated secure enclave status bit, wherein the secure enclave status bit is indicative of whether the memory transaction is originated by the processor in a secure execution mode;an error-correcting code module to compute a first error-correcting code as a function of memory transaction data and the secure enclave status bit, wherein the memory transaction data is associated with the memory transaction; anda memory operation module to perform the memory transaction based on the first error-correcting code and the memory transaction data with the memory of the computing device.2. The computing device of claim 1, wherein the memory transaction data comprises a first number of bits and the first number of bits is less than a maximum number of data bits supported by the error-correcting code.3. The computing device of claim 1, wherein to compute the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises to calculate a single-error correction and double-error detection (SECDED) error-correcting code.4. The computing device of claim 3, wherein the memory transaction data comprises sixty-four bits and the error-correcting code comprises seven bits of Hamming code and one bit of parity.5. The computing device of claim 1, wherein the secure execution mode comprises a secure enclave execution mode.6. The computing device of claim 1, wherein to originate the memory transaction and the associated secure enclave status bit comprises to: determine, by the processor, whether the memory transaction is originated by the processor from a secure enclave;set, by the processor, the secure enclave status bit in response to a determination that the memory transaction is originated by the processor from the secure enclave; andclear, by the processor, the secure enclave status bit in response to a determination that the memory transaction is not originated by the processor from the secure enclave.7. The computing device of claim 6, wherein to originate the memory transaction and the associated secure enclave status bit further comprises to perform, by the processor, an encryption operation with the memory transaction data in response to the determination that the memory transaction is originated by the processor from the secure enclave.8. The computing device of any of claims 1-7, wherein:to perform the memory transaction comprises to (i) determine whether the memory transaction is a write transaction, and (ii) write, in response to a determination that the memory transaction is a write transaction, the memory transaction data and the error-correcting code to the memory of the computing device; andto compute the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises to compute, in response to the determination that the memory transaction is a write transaction, the first error-correcting code as a function of the memory transaction data included in the memory transaction and the secure enclave status bit.9. 
The computing device of any of claims 1-7, wherein:to perform the memory transaction comprises to (i) determine whether the memory transaction is a read transaction, (ii) read, in response to a determination that the memory transaction is a read transaction, the memory transaction data and a second error- correcting code that correspond to the memory transaction from the memory of the computing device, and (iii) determine whether the first error-correcting code matches the second error- correcting code; andto compute the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises to compute, in response to the determination that the memory transaction is a read transaction, the first error-correcting code as a function of the memory transaction data that corresponds to the memory transaction and the secure enclave status bit.10. The computing device of claim 9, wherein to perform the memory transaction further comprises to return the memory transaction data and the second error-correcting code in response to a determination that the first error-correcting code matches the second error- correcting code.11. The computing device of claim 9, wherein to perform the memory transaction further comprises to:determine whether a bit error has occurred in a bit position that corresponds to the secure enclave status bit in response to a determination that that the first error-correcting code does not match the second error-correcting code; andgenerate a error condition in response to a determination that the bit error has occurred in the bit position that corresponds to the secure enclave status bit.12. The computing device of claim 11, wherein the error condition comprises a machine check exception.13. The computing device of claim 11, wherein to perform the memory transaction further comprises to:determine whether an odd-numbered bit error has occurred based on the first error-correcting code and the second error-correcting code in response to the determination that the first error-correcting code does not match the second error-correcting code; andgenerate an error condition in response to a determination that an odd-numbered bit error has not occurred;wherein to determine whether the bit error has occurred in the bit position that corresponds to the secure enclave status bit comprises to determine whether the bit error has occurred in the bit position that corresponds to the secure enclave status bit in response to a determination that that an odd-numbered bit error has occurred.14. The computing device of claim 11, wherein to perform the memory transaction further comprises to:attempt to correct the bit error in the memory transaction data and the second error-correcting code to generate a corrected memory transaction data and a corrected second error-correcting code in response to a determination that the bit error has not occurred in the bit position that corresponds to the secure enclave status bit;determine whether the bit error was corrected in response to an attempt to correct the bit error;generate an error condition in response to a determination that the bit error was not corrected; andreturn the corrected memory transaction data and the corrected second error- correcting code in response to a determination that the bit error was corrected.15. 
A method for secure memory access, the method comprising:originating, by a processor of a computing device, a memory transaction and an associated secure enclave status bit, wherein the secure enclave status bit is indicative of whether the memory transaction is originated by the processor in a secure execution mode;computing a first error-correcting code as a function of memory transaction data and the secure enclave status bit, wherein the memory transaction data is associated with the memory transaction; andperforming the memory transaction based on the first error-correcting code and the memory transaction data using a memory of the computing device, wherein the memory is external to the processor.16. The method of claim 15, wherein the memory transaction data comprises a first number of bits and the first number of bits is less than a maximum number of data bits supported by the error-correcting code.17. The method of claim 15, wherein computing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises calculating a single-error correction and double-error detection (SECDED) error-correcting code.18. The method of claim 15, wherein the secure execution mode comprises a secure enclave execution mode.19. The method of claim 15, wherein:performing the memory transaction comprises (i) determining whether the memory transaction is a write transaction, and (ii) writing, in response to determining that the memory transaction is a write transaction, the memory transaction data and the error-correcting code to the memory of the computing device; andcomputing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises computing, in response to determining that the memory transaction is a write transaction, the first error-correcting code as a function of the memory transaction data included in the memory transaction and the secure enclave status bit.20. The method of claim 15, wherein:performing the memory transaction comprises (i) determining whether the memory transaction is a read transaction, (ii) reading, in response to determining that the memory transaction is a read transaction, the memory transaction data and a second error- correcting code corresponding to the memory transaction from the memory of the computing device, and (iii) determining whether the first error-correcting code matches the second error- correcting code; andcomputing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises computing, in response to determining that the memory transaction is a read transaction, the first error-correcting code as a function of the memory transaction data corresponding to the memory transaction and the secure enclave status bit.21. The method of claim 20, wherein performing the memory transaction further comprises:determining whether a bit error has occurred in a bit position corresponding to the secure enclave status bit in response to determining that that the first error-correcting code does not match the second error-correcting code; andgenerating an error condition in response to determining that the bit error has occurred in the bit position corresponding to the secure enclave status bit.22. 
The method of claim 21, wherein performing the memory transaction further comprises:determining whether an odd-numbered bit error has occurred based on the first error-correcting code and the second error-correcting code in response to determining that the first error-correcting code does not match the second error-correcting code; and generating an error condition in response to determining that an odd-numbered bit error has not occurred;wherein determining whether the bit error has occurred in the bit position corresponding to the secure enclave status bit comprises determining whether the bit error has occurred in the bit position corresponding to the secure enclave status bit in response to determining that that an odd-numbered bit error has occurred.23. A computing device comprising:a processor; anda memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of claims 15-22.24. One or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of claims 15-22.25. A computing device comprising means for performing the method of any of claims 15-22. |
TECHNOLOGIES FOR SOFTWARE ATTACK DETECTION USING ENCODED ACCESSINTENTCROSS-REFERENCE TO RELATED U.S. PATENT APPLICATION[0001] The present application claims priority to U.S. Utility Patent Application SerialNo. 14/866,856, entitled "TECHNOLOGIES FOR SOFTWARE ATTACK DETECTION USING ENCODED ACCESS INTENT," which was filed on September 26, 2015.BACKGROUND[0002] Current processors may provide support for a trusted execution environment such as a secure enclave. Secure enclaves include segments of memory (including code and/or data) protected by the processor from unauthorized access including unauthorized reads and writes. Additionally, the processor can crypto graphically prove that a particular secure enclave is authentic and unaltered.[0003] Certain secure enclave implementations provide full cryptographic protection of enclave memory, including confidentiality, integrity, and replay protection. Full cryptographic protection may require the processor to store additional data such as counters and authentication tags, which may impose a storage overhead for enclave memory. Additionally, certain secure enclave implementations use a range register to identify physical memory reserved to be used by secure enclaves, which is typically referred to as an enclave page cache (EPC). The range register typically must be set in a pre-boot firmware environment and thus the size of the EPC may not be changed at runtime.BRIEF DESCRIPTION OF THE DRAWINGS[0004] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.[0005] FIG. 1 is a simplified block diagram of at least one embodiment of a computing device for software attack detection;[0006] FIG. 2 is a simplified block diagram of at least one embodiment of a processor and memory of the computing device of FIG. 1;[0007] FIG. 3 is a simplified block diagram of at least one embodiment of an environment that may be established by the computing device of FIGS. 1-2; [0008] FIG. 4 is a simplified flow diagram of at least one embodiment of a method for software attack detection that may be executed by the computing device of FIGS. 1-3;[0009] FIGS. 5 A and 5B are a simplified flow diagram of at least one embodiment of a method for memory transaction processing that may be executed by the computing device of FIGS. 1-3;[0010] FIG. 6 is a schematic diagram illustrating a memory transaction that may be processed by the methods of FIGS. 4, 5A, and 5B; and[0011] FIG. 7 is a schematic diagram illustrating another memory transaction that may be processed by the methods of FIGS. 4, 5 A, and 5B.DETAILED DESCRIPTION OF THE DRAWINGS[0012] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. 
It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.[0013] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).[0014] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).[0015] Referring now to FIG. 1, in an illustrative embodiment, a computing device 100 for software attack detection includes a processor 120 with secure enclave support 122. In use, as described in more detail below, the processor 120 of the computing device 100 generates memory transactions with associated secure enclave status bits. When the memory transaction is generated from a secure enclave, the secure enclave status bit is set. Therefore, the secure enclave status bit indicates the access intent of the memory transaction. The processor 120 computes an error-correcting code as a function of the memory transaction data combined with the secure enclave status bit. For write transactions, the error correcting code and the data may be stored in main memory, without storing the secure enclave status bit. For read transactions, the computed error-correcting code may be compared to the error-correcting code stored in the memory to detect memory transactions with an invalid access intent. Thus, the computing device 100 may detect invalid access intents for any location in the memory 126, without relying on range registers to identify a pre-allocated secure memory partition. Additionally, the computing device 100 may detect invalid access intents without the storage overhead associated with integrity- and replay-protection mechanisms such as counters and authentication tags. 
Further, the computing device 100 may use ordinary ECC memory commonly used in server devices.[0016] The computing device 100 may be embodied as any type of computation or computer device capable of performing the functions described herein, including, without limitation, a server, a workstation, a computer, a desktop computer, a laptop computer, a notebook computer, a tablet computer, a mobile computing device, a wearable computing device, a network appliance, a web appliance, a distributed computing system, a processor- based system, and/or a consumer electronic device. As shown in FIG. 1, the computing device 100 illustratively includes a processor 120, an input/output subsystem 124, a memory 126, a data storage device 128, and communication circuitry 130. Of course, the computing device 100 may include other or additional components, such as those commonly found in a server computer (e.g., various input/output devices), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 126, or portions thereof, may be incorporated in the processor 120 in some embodiments. [0017] The processor 120 may be embodied as any type of processor capable of performing the functions described herein. The processor 120 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. As described above, the processor 120 includes secure enclave support 122. The secure enclave support 122 allows the processor 120 to establish a trusted execution environment often referred to as a secure enclave, in which executing code may be measured, verified, and/or otherwise determined to be authentic. Additionally, code and data included in the secure enclave may be encrypted or otherwise protected from being accessed by code executing outside of the secure enclave. For example, code and data included in the secure enclave may be protected by hardware protection mechanisms of the processor 120 while being executed or while being stored in certain protected cache memory of the processor 120. The code and data included in the secure enclave may be encrypted when stored in a shared cache or in the main memory 126. The secure enclave support 122 may be embodied as a set of processor instruction extensions that allows the processor 120 to establish one or more secure enclaves in the memory 126. For example, the secure enclave support 122 may be embodied as Intel® Software Guard Extensions (SGX) technology.[0018] The memory 126 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 126 may store various data and software used during operation of the computing device 100 such as operating systems, applications, programs, libraries, and drivers. As described above, the memory 126 may store encrypted code and data associated with one or more secure enclaves. For example, the memory 126 may be used as a backing store for an enclave page cache (EPC) or other protected memory of the processor 120. The memory 126 is communicatively coupled to the processor 120 via the I/O subsystem 124, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 126, and other components of the computing device 100. 
For example, the I O subsystem 124 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, platform controller hubs, integrated control circuitry, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 124 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 120, the memory 126, and other components of the computing device 100, on a single integrated circuit chip. [0019] The data storage device 128 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. In some embodiments, the data storage device 128 may be used to store the contents of one or more secure enclaves. When stored by the data storage device 128, the contents of the secure enclave may be encrypted to prevent unauthorized access.[0020] The communication circuitry 130 of the computing device 100 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device 100 and other remote devices over a network. The communication circuitry 130 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.[0021] In some embodiments, the computing device 100 may also include one or more peripheral devices 132. The peripheral devices 132 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. For example, in some embodiments, the peripheral devices 132 may include a display, touch screen, graphics circuitry, keyboard, mouse, speaker system, microphone, network interface, and/or other input/output devices, interface devices, and/or peripheral devices.[0022] Referring now to FIG. 2, a schematic diagram 200 illustrates one potential embodiment of the processor 120 and the memory 126 of the computing device 100. The illustrative processor 120 includes two processor cores 202, each of which is an independent processing unit capable of executing programmed instructions. Although the illustrative processor 120 includes two processor cores 202, in other embodiments the processor 120 may include a different number of processor cores 202. Each processor core 202 may originate memory transactions (e.g., read transaction or write transactions) in response to executing certain programmed instructions. Each core 202 also sets and/or clears a secure enclave status bit signal based on the access intent of the instruction that originates the memory transaction. The access intent indicates the intention of the memory transaction to access secure memory. For example, the core 202 may set the secure enclave status bit signal when the transaction originates from a secure enclave and clear the secure enclave status bit signal when the transaction originates from outside of the secure enclave. A coherent cache fabric 204 coupled to the cores 202 forwards transactions to a last- level cache 206 and a system agent 208. 
The last-level cache 206 may store data associated with memory transactions, including the secure enclave status bit. The system agent 208 forwards transactions with the secure enclave status bit to a memory encryption engine 210 or a memory controller 212 based on the access intent of the transaction. For example, the system agent 208 may forward a transaction to the memory encryption engine 210 if the secure enclave status bit is set or to the memory controller 212 if the secure enclave status bit is cleared. The memory encryption engine 210 is configured to perform one or more cryptographic operation based on the memory transactions, including encrypting data, decrypting data, and/or generating integrity- and replay-protection data. The memory controller 212 performs memory transactions, including reading data from the memory 126, writing data to the memory 126, and/or calculating and verifying error correcting codes. For example, the memory controller 212 may execute a method for performing memory transactions as described further below in connection with FIGS. 5 A and 5B.[0023] Referring now to FIG. 3, in an illustrative embodiment, the computing device100 establishes an environment 300 during operation. The illustrative environment 300 includes a secure execution module 302, an error correcting code module 310, and a memory operation module 314. The various modules of the environment 300 may be embodied as hardware, firmware, microcode, software, or a combination thereof. As such, in some embodiments, one or more of the modules of the environment 300 may be embodied as circuitry or collection of electrical devices (e.g., secure execution circuitry 302, error correcting code circuitry 310, and/or memory operation circuitry 314). It should be appreciated that, in such embodiments, one or more of the secure execution circuitry 302, the error correcting code circuitry 310, and/or the memory operation circuitry 314 may form a portion of one or more of the processor 120 (e.g., the processor cores 202 and/or the memory controller 212), the I/O subsystem 124, and/or other components of the computing device 100. Additionally, in some embodiments, one or more of the illustrative modules may form a portion of another module and/or one or more of the illustrative modules may be independent of one another.[0024] The secure execution module 302 is configured to originate, by the processor120, a memory transaction 304 and an associated secure enclave status bit 306. The enclave bit 306 may be embodied as any processor signal, processor flag, status bit, or other signal that indicates whether the memory transaction 304 was originated by the processor 120 in a secure execution mode, such as from a secure enclave established using Intel® SGX technology. Thus, the enclave bit 306 indicates the access intent of the memory transaction 304. In the illustrative embodiment, the memory transaction 304 may be embodied as a write transaction or a read transaction. Write transactions 304 may also include or otherwise be associated with data 308. The data 308 may be plaintext data or encrypted data, for example data encrypted by the memory encryption engine 210 when the memory transaction 304 originates in the secure execution mode.[0025] The error correcting code module 310 is configured to compute an error- correcting code (ECC) 312 as a function of the memory transaction data 308 and the secure enclave status bit 306. 
For example, for a write transaction, the error correcting code module 310 may be configured to compute the ECC 312 based on the data 308 included in the memory transaction 304, and for a read transaction, the error correcting code module 310 may be configured to compute the ECC 312 based on data 320 read from the memory 126. In the illustrative embodiment, the ECC 312 is computed using a single-error correction and double- error detection (SECDED) error-correcting code scheme.[0026] The memory operation module 314 is configured to perform the memory transaction 304 based on the error-correcting code (ECC) 312 and the memory transaction data 308 using the memory 126 of the computing device 100. For example, for a write transaction 304, the memory operation module 314 may be configured to write the data 308 and the ECC 312 to the memory 126. As another example, for a read transaction 304, the memory operation module 314 may be configured to read data 320 and an error-correcting code (ECC) 322 from the memory 126 and determine whether the ECC 312 computed by the error correcting code module 310 matches the ECC 322 stored in the memory 126. As described further below, if the ECCs 312, 322 do not match, the memory operation module 314 may be configured to generate an error condition such as a machine check exception. In some embodiments, those functions may be performed by one or more sub-modules, such as a read module 316 and/or a write module 318.[0027] Referring now to FIG. 4, in use, the computing device 100 may execute a method 400 for software attack detection. The method 400 begins with block 402, in which a processor core 202 of the computing device 100 originates a memory transaction 304. The memory transaction 304 may include a read transaction or a write transaction. The memory transaction 304 may be originated, for example, in response to the processor core 202 executing one or more programmed instructions as part of a computer program.[0028] In block 404, the processor core 202 of the computing device 100 determines whether the memory transaction 304 originates from a secure enclave or other appropriate secure execution environment and/or secure execution mode of the processor 120. For example, the processor core 202 may determine whether the memory transaction 304 originates from a secure enclave established using Intel® SGX technology. If the memory transaction 304 does not originate from a secure enclave, the method 400 branches to block 406, in which the processor core 202 clears the enclave bit 306 associated with the memory transaction 304. If the memory transaction 304 originates from a secure enclave, the method 400 branches to block 408, in which the processor core 202 sets the enclave bit 306 associated with the memory transaction 304. The processor core 202 sets and/or clears the enclave bit 306 using hardware, firmware, microcode, or other resources of the processor 120. User and system software (e.g., executable programmed instructions) executed by the processor 120 may not be capable of modifying the enclave bit 306. After setting and/or clearing the enclave bit 306 in blocks 406, 408, the method proceeds to block 410.[0029] In block 410, the processor 120 includes the enclave bit 306 in any cached data associated with the memory transaction 304. For example, the processor 120 may include the enclave bit 306 in the last-level cache 206 or in any local caches of the processor cores 202. 
The processor 120 may use any technique to include the enclave bit 306 in the cached data. For example, the processor 120 may include a hardware enclave bit 306 in each cache line of the last- level cache 206. As another example, the processor 120 may store one or more representations of the enclave bit 306 in a specialized memory or in the cache memory itself.[0030] In block 412, the coherent cache fabric 204 of the computing device 100 forwards the memory transaction 304 with the enclave bit 306 to the system agent 208. The coherent cache fabric 204 may forward the memory transaction 304 using any bus, interconnect, or other communication technique. In block 414, the system agent 208 of the computing device 100 determines whether the enclave bit 306 associated with the memory transaction 304 is set. If the enclave bit 306 is set, then the memory transaction 304 originated from a secure enclave or other secure execution environment and/or secure execution mode of the processor 120. Thus, by examining the enclave bit 306, the system agent 208 determines the access intent of the memory transaction 304. that is, whether the memory transaction 304 is intended to access secure memory. In block 416, the computing device 100 checks whether the enclave bit 306 is set. If not, the method 400 branches ahead to block 422, described below. If the enclave bit 306 is set, the method 400 advances to block 418.[0031] In block 418, the system agent 208 forwards the memory transaction 304 to the memory encryption engine 210. After being forwarded to the memory encryption engine 210, the memory encryption engine 210 may perform further processing of the memory transaction 304. In block 420, the memory encryption engine 210 of the computing device 100 performs an encryption operation for the memory transaction 304. For example, for a write memory transaction 304, the memory encryption engine 210 may encrypt the data 308 included in the memory transaction 304 to generate encrypted data. As another example, for a read memory transaction 304 the memory encryption engine 210 may decrypt encrypted data 320 read from the memory 126 to generate the data 308 associated with the memory transaction 304. The memory encryption engine 210 may perform the encryption operation using encryption keys, certificates, or other cryptographic information associated with the secure enclave established by the processor 120. For example, the memory encryption engine 210 may encrypt or decrypt the data using a 128-bit encryption key. In some embodiments, the memory encryption engine 210 may perform additional cryptographic operations, including generating one or more counters and/or authentication tags to provide integrity and replay protection.[0032] In block 422, the computing device 100 forwards the memory transaction 304 to the memory controller 212. For example, as described above in connection with block 416, if the enclave bit 306 is not set, the system agent 208 may forward the memory transaction 304 directly to the memory controller 212 without encryption. As another example, as described above in connection with blocks 416 through 420, the memory encryption engine 210 may forward the memory transaction 304 to the memory controller 212.[0033] In block 424, the memory controller 212 of the computing device 100 processes the memory transaction 304 with the enclave bit 306. 
For a write memory transaction 304, the memory controller 212 may generate an error-correcting code (ECC) 312 as a function of the data 308 and the enclave bit 306 associated with the memory transaction 304. The memory controller 212 may in turn write the data 308 and the ECC 312 to the memory 126 as the data 320 and the ECC 322, respectively. Additionally or alternatively, for a read transaction 304, the memory controller 212 may read the data 320 and the ECC 322 from the memory 126, and then generate an ECC 312 as a function of the data 320 and the enclave bit 306. The memory controller 212 may compare the calculated ECC 312 to the ECC 322 read from the memory 126 to detect and/or prevent attempted software attacks. For example, potential software attacks include attempts to access secure enclave data from outside of a secure enclave (with an invalid access intent). If the data 320 and associated ECC 322 were stored by a memory transaction 304 originating from a secure enclave, then the ECC 312 calculated for a memory transaction 304 that does not originate from a secure enclave would not match the ECC 322, and the potential software attack may be detected. One potential embodiment of a method for processing the memory transaction 304 with the enclave bit 306 is described below, in connection with FIGS. 5A and 5B. After processing the memory transaction 304, the method 400 loops back to block 402 to process another memory transaction 304.[0034] Referring now to FIGS. 5A and 5B, in use, the computing device 100 may execute a method 500 for performing a memory transaction 304. The method 500 may be executed, for example, by the memory controller 212 of the processor 120 and/or by other hardware, firmware, microcode, or other resources of the processor 120. The method 500 begins in block 502, in which the computing device 100 receives a memory transaction 304 that includes or is otherwise associated with an enclave bit 306. The memory transaction 304 may also include or be associated with data 308. For example, a write transaction 304 may include data 308 to write to the memory 126. As described above in connection with FIG. 4, the memory transaction 304 may be forwarded to the memory controller 212 from the system agent 208 and/or the memory encryption engine 210, and the memory transaction 304 may read and/or write encrypted data.[0035] In block 504, the computing device 100 determines whether the memory transaction 304 is a write transaction. If not (i.e., if the memory transaction 304 is a read transaction), then the method 500 branches ahead to block 510, described below. If the memory transaction 304 is a write transaction, the method 500 advances to block 506.[0036] In block 506, the computing device 100 computes an error-correcting code(ECC) 312 as a function of the data 308 of the memory transaction 304 and the enclave bit 306. For example, the computing device 100 may append the enclave bit 306 to the data 308 and calculate the ECC 312 based on the combined bit values. In the illustrative embodiment, the computing device 100 calculates the ECC 312 using a single-error correction and double-error detection (SECDED) scheme. In particular, for every 64 bits of data 308 and one bit of the enclave bit 306 (i.e., 65 total bits), the computing device 100 calculates an eight-bit ECC 312 that includes seven bits of Hamming code and one bit of parity. 
Note that seven bits of Hamming code is capable of error-correcting up to 127 total bits (that is, the capacity of a seven-bit Hamming code is 127 bits). The illustrative embodiment includes 72 bits to be corrected, including the 64 data bits, the enclave bit 306, and the seven Hamming bits, which is well below the capacity of the seven-bit Hamming code. In other embodiments, the computing device 100 may use any appropriate number of data bits and/or ECC bits such that the number of bits to be corrected (the data bits, the Hamming bits, and the enclave bit) is less than the maximum capacity supported by the ECC 312.[0037] In block 508, the computing device 100 writes the data 308 of the memory transaction 304 and the calculated ECC 312 to the memory 126. As shown in FIG. 3, the data 308 and the ECC 312 may be stored in the memory 126 as the data 320 and the ECC 322, respectively. As described above, the data 320 stored in the memory 126 may include encrypted data that is protected from accesses outside of a secure enclave. After writing the data 320 and the ECC 322 to the memory 126, the method 500 is completed. Note that the computing device 100 does not write the value of the enclave bit 306 to the memory 126. As described above in connection with FIG. 4, after processing the memory transaction 304, the computing device 100 may continue to process additional memory transactions 304. For example, the computing device 100 may perform eight write transactions 304 of sixty-four data bits each to write an entire cache line of 64 bytes.[0038] Referring now to FIG. 6, a schematic diagram 600 illustrates one potential embodiment of a write memory transaction 304. The write transaction 304 includes data 308 and the secure enclave bit 306. In the illustrative embodiment, the data 308 includes eight data bits d\through <¾ and the enclave bit 306 includes a single bit E. Thus, in the illustrative embodiment the data 308 represents the binary value "01101011" and the enclave bit 306 is set and therefore indicates that the memory transaction 304 was originated by the processor 120 from a secure enclave. It should be understood that some embodiments, the computing device 100 may process a different number of data bits; for example, in some embodiments the data 308 may include 64 data bits d.[0039] As described above in connection with block 506 of FIG. 5A, the computing device 100 may append the bit E to the data bits d\through <¾ and then generate an error correcting code 312 based on the combined data bits d and enclave bit E. The illustrative diagram 600 includes a resulting value 602 that includes the ECC 312, the data 308, and the enclave bit 306. As shown, the value 602 includes a parity bit po, four Hamming bits p\, /¾, P4, and /?8, the data bits d\through <¾, and the enclave bit E. The computing device 100 may use any appropriate technique to compute the Hamming bits. In some embodiments, for each Hamming bit the computing device 100 may set the Hamming bit if an odd number of a particular group of data bits are set, and clear the Hamming bit if an even number of those data bits are set. Table 1 illustrates the data bits that are used to determine each Hamming bit. For example, for bit p\the computing device 100 determines whether bits d\, <¾, d\, <¾, di, and E are set; for bit /¾ the computing device 100 determines whether bits d\, <¾, <i4, d , and άη are set; and so on. 
The computing device 100 determines the parity bit pO last, and may set the bit pO if an odd number of the other bits are set or clear the bit pO if an even number of the other bits are set. As shown, the value 602 generated for the data 308 and the enclave bit 306 is a thirteen-bit value "00000110010111."Table 1. Illustrative calculation of Hamming code.[0040] After generating the value 602 including the data 308, the ECC 312, and the enclave bit 306, as described above in connection with block 508 of FIG. 5A, the computing device 100 stores the data 308 and the ECC 312 to the memory 126. As shown in the diagram 600, the computing device 100 may remove the enclave bit 306 from the value 602 to generate the value 604 that includes the data 308 and the ECC 312. The computing device 100 stores the value 604 in the memory 126, without transmitting the enclave bit 306 to the memory 126.[0041] Referring back to FIGS. 5 A and 5B, as described above in connection with block504, if the memory transaction 304 is not a write transaction (i.e., if it is a read transaction), the method 500 branches to block 510. In block 510, the computing device 100 reads the data 320 and the ECC 322 specified by the read memory transaction 304 from the memory 126. As described above, the data 320 and the ECC 322 may have been stored in the memory 126 by the computing device 100 in response to a previous memory transaction 304.[0042] In block 512, the computing device 100 computes an error correcting code(ECC) 312 as a function of the data 320 read from the memory 126 and the enclave bit 306. For example, the computing device 100 may append the enclave bit 306 to the data 320 and calculate the ECC 312 based on the combined bit values. The computing device 100 uses the same technique to calculate the ECC 312 that is used to calculate the ECC 312 for write transactions 304, as described above in connection with block 506. Thus, in the illustrative embodiment the computing device 100 calculates the ECC 312 using a SECDED scheme. In particular, for every 64 bits of data 320 and one bit of the enclave bit 306 (i.e., 65 total bits), the computing device 100 calculates an eight-bit ECC 312 that includes seven bits of Hamming code and one bit of parity.[0043] In block 514, the computing device 100 determines whether the calculated ECC312 equals the ECC 322 read from the memory 126. If so, the method 500 advances to block 516, in which the computing device 100 returns the data 320 and the ECC 322 read from the memory 126. Because the ECC 312 matches the ECC 322, that means that the current memory transaction 304 originated with the same access intent as the previous memory transaction 304 that stored the data 320 and the ECC 322. In other words, both the current memory transaction 304 and the previous memory transaction 304 originated from a secure enclave or other secure execution mode of the processor 120, or both the current memory transaction 304 and the previous memory transaction 304 originated from a non-secure execution mode of the processor 120. In either of those circumstances, the current memory transaction 304 is allowed. After returning the data 320 and the ECC 322, the method 500 is completed. As described above in connection with FIG. 4, after processing the memory transaction 304, the computing device 100 may continue to process additional memory transactions 304. 
For example, the computing device 100 may perform eight read transactions 304 of sixty-four data bits each to read an entire cache line of 64 bytes.[0044] Referring back to block 514, if the calculated ECC 312 does not equal the ECC322 read from the memory 126, then the method 500 branches ahead to block 518. If the ECC 312 does not equal the ECC 322, then the current memory transaction 304 may have the incorrect access intent, or one or more bit errors may have occurred in the memory 126 (e.g., due to cosmic ray strikes or other errors). The computing device 100 may respond to this circumstance using any appropriate technique, such as generating a machine check exception or other error condition. In the illustrative embodiment, in block 518, the computing device 100 determines whether a bit error having an odd number of bits has occurred. The computing device 100 may determine whether an odd-bit error occurred, for example, by appending the enclave bit 306 to the data 320 and ECC 322 read from the memory 126 and determining whether the parity bit of the ECC 322 is correct for that combined value.[0045] In block 520, the computing device 100 checks whether an odd-bit error has occurred. If so, the method 500 branches ahead to block 524, shown in FIG. 5B, to process the odd-bit error. If an odd-bit error has not occurred (i.e., if an even number of bit errors have occurred), the method 500 advances to block 522, in which the computing device 100 generates a machine check exception or other error condition. As described above, the ECCs 312, 322 used by the computing device 100 are calculated using a single-error correcting, double-error detecting scheme. Thus, even-bit errors having two or more bit errors are not correctable by the computing device 100. Two-bit errors that cause a machine check condition may include reads including two bit errors that occurred in the memory 126 or reads that include a single bit error that occurred in the memory 126 combined with an incorrect enclave bit 306. (Higher numbers of bit errors occurring in the memory 126 are possible but highly unlikely.) In other words, a detected two-bit error may indicate an attempt to access protected data from outside of a secure enclave combined with a bit error in the memory 126. After generating the machine check exception, the method 500 is completed. The computing device 100 may hang or otherwise cease execution in response to the machine check exception or other error condition. [0046] Referring back to block 520, if an odd-bit error has occurred, the method 500 branches ahead to block 524, shown in FIG. 5B. In block 524, the computing device 100 determines the location of the bit error in the combined data 320 and enclave bit 306. For example, the computing device 100 may determine the Hamming bits within the ECC 322 that do not match the calculated Hamming bits of the calculated ECC 312. The computing device 100 may add the bit positions of each erroneous Hamming bit to identify the location of the bit error.[0047] In block 526, the computing device 100 determines whether the bit error occurred in the location of the enclave bit 306. If not, the method 500 branches ahead to block 530, described below. If the bit error occurred in the location of the enclave bit 306, the method 500 branches ahead to block 528, in which the computing device 100 generates a machine check exception or other error condition. 
The bit error identified in the enclave bit 306 indicates that the current memory transaction 304 has the wrong access intent. In other words, the current memory transaction 304 may be attempting to access data 320 from outside of a secure enclave, when the data 320 had originally been written by a previous memory transaction 304 that originated from within a secure enclave. Thus, the bit error in the location of the enclave bit 306 may indicate an attempted software attack, a programming error, and/or other vulnerability. After generating the machine check exception, the method 500 is completed. The computing device 100 may hang or otherwise cease execution in response to the machine check exception or other error condition. In some embodiments, the computing device 100 may perform any other appropriate security response to the potential software attack, such as logging the attack, alerting a user, performing appropriate page abort semantics, or performing another security response.[0048] Referring back to block 526, if the bit error did not occur in the location of the enclave bit 306, then the method 500 branches ahead to block 530. In block 530, the computing device 100 attempts to correct the bit error(s) in the data 320 and the ECC 322. The computing device 100 may use any appropriate technique to correct the bit error(s). In block 532, the computing device 100 determines whether the bit error was successfully corrected. If corrected, the method 500 branches to block 536, described below. If not corrected, the method 500 branches to block 534.[0049] In block 534, the computing device 100 the computing device 100 generates a machine check exception or other error condition. As described above, the ECCs 312, 322 used by the computing device 100 are computed using a single-error correcting, double-error detecting scheme. Thus, an odd-bit error that is not correctable indicates that three (or more) bit errors were detected, which are not correctable by the computing device 100. Three-bit errors that cause a machine check condition may include reads including three bit errors that occurred in the memory 126 (which is highly unlikely) or reads that include two bit errors that occurred in the memory 126 combined with an incorrect enclave bit 306. (Higher numbers of bit errors occurring in the memory 126 are possible but highly unlikely.) In other words, a three -bit error may indicate an attempt to access protected data from outside of a secure enclave combined with multiple bit errors in the memory 126. After generating the machine check exception, the method 500 is completed. The computing device 100 may hang or otherwise cease execution in response to the machine check exception or other error condition.[0050] Referring back to block 532, if the bit error was successfully corrected, the method 500 branches to block 536, in which the computing device 100 returns the corrected data 320 and the corrected ECC 322 read from the memory 126. After returning the corrected data 320 and the corrected ECC 322, the method 500 is completed. As described above in connection with FIG. 4, after processing the memory transaction 304, the computing device 100 may continue to process additional memory transactions 304. 
For example, the computing device 100 may perform eight read transactions 304 of sixty-four data bits each to read an entire cache line of 64 bytes.[0051] It should be understood that in certain rare circumstances, returning the correcting data 320 and the corrected ECC 322 in block 536 may cause the computing device 100 to allow a memory transaction 304 with an incorrect access intent. In particular, the SECDED ECC scheme used in the illustrative embodiment may be unable to distinguish between a correctable one-bit error and an uncorrectable three-bit error. For example, when the memory transaction 304 is associated with an incorrect enclave bit 306 (e.g., a transaction 304 originating from outside a secure enclave attempts to access secure data 320) and the memory read includes two error bits (e.g., two erroneous bits from the memory 126), the computing device 100 may detect an odd-numbered bit error (i.e., three error bits) and, in certain circumstances, that error may be apparently corrected by the computing device 100. If so, then the computing device 100 may allow the transaction 304 even though the enclave bit 306 is incorrect. Of course, the bit errors would change the data 308, and if the data 308 is encrypted, then it is highly unlikely that the modified data 308 could be successfully decrypted. Additionally, the likelihood of the computing device 100 accepting an incorrect access intent is extremely low. For example, as described above, the computing device 100 may be required to perform eight consecutive read transactions 304 of 64 data bits in order to read a single 64-byte cache line. If the probability of a 2-bit error in the data 308 is p, then the probability of eight consecutive, apparently correctable 2-bit errors P is less than p , because not all 2-bit errors (combined with an incorrect enclave access bit 306) appear to be correctable. If the probability p is less than or equal to 2"16, which has been confirmed by industrial data, then the probability-128P is less than or equal to 2". In other words, the likelihood of accepting an incorrect access intent for the cache line is less than the probability of guessing a 128-bit encryption key.[0052] Referring now to FIG. 7, a schematic diagram 700 illustrates one potential embodiment of a read memory transaction 304'. As shown, the read transaction 304' is associated with the secure enclave bit 306'. The secure enclave bit 306' is cleared, indicating that the read transaction 304' was originated by the processor 120 outside of a secure enclave or otherwise outside of a secure execution mode. As described above in connection with block 510 of FIG. 5A, the computing device 100 reads the value 604 from the memory 126. The value 604 includes the data 320 and the ECC 322 stored in the memory 126. As shown, the value 604 is the same value 604 stored in the memory 126 by the write transaction 304 illustrated in FIG. 6.[0053] The computing device 100 appends the enclave bit 306' to the value 604 and checks the global parity bit /¾· As shown, the parity bit po is incorrect, indicating that an odd number of bit errors have occurred, as described above in connection with block 518 of FIG. 5A. The computing device 100 also generates the ECC 312 based on the data 320 (i.e., the data bits do through <¾ of the value 604) and the enclave bit 306', as described above in connection with block 512 of FIG. 5A. As shown, the value 702 includes the data 320, the ECC 312, and the enclave bit 306'. 
As shown, the ECC 322 (i.e., the bits po through p of the value 604) and the ECC 312 (i.e., the bits po through p of the value 702) do not match. In particular, the bits p\, /?4, and p%of the ECCs 312', 322 do not match. As described above in connection with block 524 of FIG. 5B, the sum of the bit position of the non-matching bits (i.e., 1 + 4 + 8) is 13, which is the bit position of the enclave bit 306'. Therefore, the computing device 100 has detected an incorrect access intent, that is, that the read transaction 304' is associated with the incorrect enclave bit 306'. The computing device 100 may generate a machine check exception or other error condition, as described above in connection with block 528 of FIG. 5B.[0054] It should be appreciated that, in some embodiments, any one or more of the methods 400 and/or 500 may be embodied as various instructions stored on a computer- readable media, which may be executed by the processor 120, a peripheral device 132, and/or other components of a computing device 100 to cause the computing device 100 to perform the corresponding method 400 and/or 500. The computer-readable media may be embodied as any type of media capable of being read by the computing device 100 including, but not limited to, the memory 126, the data storage 128, a local memory of the processor 120, firmware and/or microcode of the processor 120, and/or other memory or data storage devices of the computing device 100, portable media readable by a peripheral device 132 of the computing device 100, and/or other media.EXAMPLES[0055] Illustrative examples of the technologies disclosed herein are provided below.An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0056] Example 1 includes a computing device for secure memory access, the computing device comprising a processor; and a memory external to the processor; wherein the processor comprises a secure execution module to originate, by the processor, a memory transaction and an associated secure enclave status bit, wherein the secure enclave status bit is indicative of whether the memory transaction is originated by the processor in a secure execution mode; an error-correcting code module to compute a first error-correcting code as a function of memory transaction data and the secure enclave status bit, wherein the memory transaction data is associated with the memory transaction; and a memory operation module to perform the memory transaction based on the first error-correcting code and the memory transaction data with the memory of the computing device.[0057] Example 2 includes the subject matter of Example 1, and wherein the memory transaction data comprises a first number of bits and the first number of bits is less than a maximum number of data bits supported by the error-correcting code.[0058] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to compute the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises to calculate a single-error correction and double-error detection (SECDED) error-correcting code.[0059] Example 4 includes the subject matter of any of Examples 1-3, and wherein the memory transaction data comprises sixty-four bits and the error-correcting code comprises seven bits of Hamming code and one bit of parity.[0060] Example 5 includes the subject matter of any of Examples 1-4, and wherein the secure execution mode comprises a secure enclave execution 
mode.[0061] Example 6 includes the subject matter of any of Examples 1-5, and wherein to originate the memory transaction and the associated secure enclave status bit comprises to determine, by the processor, whether the memory transaction is originated by the processor from a secure enclave; set, by the processor, the secure enclave status bit in response to a determination that the memory transaction is originated by the processor from the secure enclave; and clear, by the processor, the secure enclave status bit in response to a determination that the memory transaction is not originated by the processor from the secure enclave.[0062] Example 7 includes the subject matter of any of Examples 1-6, and wherein to originate the memory transaction and the associated secure enclave status bit further comprises to perform, by the processor, an encryption operation with the memory transaction data in response to the determination that the memory transaction is originated by the processor from the secure enclave.[0063] Example 8 includes the subject matter of any of Examples 1-7, and wherein to perform the memory transaction comprises to (i) determine whether the memory transaction is a write transaction, and (ii) write, in response to a determination that the memory transaction is a write transaction, the memory transaction data and the error-correcting code to the memory of the computing device; and to compute the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises to compute, in response to the determination that the memory transaction is a write transaction, the first error-correcting code as a function of the memory transaction data included in the memory transaction and the secure enclave status bit.[0064] Example 9 includes the subject matter of any of Examples 1-8, and wherein to perform the memory transaction comprises to (i) determine whether the memory transaction is a read transaction, (ii) read, in response to a determination that the memory transaction is a read transaction, the memory transaction data and a second error-correcting code that correspond to the memory transaction from the memory of the computing device, and (iii) determine whether the first error-correcting code matches the second error-correcting code; and to compute the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises to compute, in response to the determination that the memory transaction is a read transaction, the first error-correcting code as a function of the memory transaction data that corresponds to the memory transaction and the secure enclave status bit.[0065] Example 10 includes the subject matter of any of Examples 1-9, and wherein to perform the memory transaction further comprises to return the memory transaction data and the second error-correcting code in response to a determination that the first error-correcting code matches the second error-correcting code. 
[0066] Example 11 includes the subject matter of any of Examples 1-10, and wherein to perform the memory transaction further comprises to determine whether a bit error has occurred in a bit position that corresponds to the secure enclave status bit in response to a determination that that the first error-correcting code does not match the second error-correcting code; and generate a error condition in response to a determination that the bit error has occurred in the bit position that corresponds to the secure enclave status bit.[0067] Example 12 includes the subject matter of any of Examples 1-11, and wherein the error condition comprises a machine check exception.[0068] Example 13 includes the subject matter of any of Examples 1-12, and wherein to perform the memory transaction further comprises to determine whether an odd-numbered bit error has occurred based on the first error-correcting code and the second error-correcting code in response to the determination that the first error-correcting code does not match the second error-correcting code; and generate an error condition in response to a determination that an odd-numbered bit error has not occurred; wherein to determine whether the bit error has occurred in the bit position that corresponds to the secure enclave status bit comprises to determine whether the bit error has occurred in the bit position that corresponds to the secure enclave status bit in response to a determination that that an odd-numbered bit error has occurred.[0069] Example 14 includes the subject matter of any of Examples 1-13, and wherein to perform the memory transaction further comprises to attempt to correct the bit error in the memory transaction data and the second error-correcting code to generate a corrected memory transaction data and a corrected second error-correcting code in response to a determination that the bit error has not occurred in the bit position that corresponds to the secure enclave status bit; determine whether the bit error was corrected in response to an attempt to correct the bit error; generate an error condition in response to a determination that the bit error was not corrected; and return the corrected memory transaction data and the corrected second error-correcting code in response to a determination that the bit error was corrected.[0070] Example 15 includes a method for secure memory access, the method comprising originating, by a processor of a computing device, a memory transaction and an associated secure enclave status bit, wherein the secure enclave status bit is indicative of whether the memory transaction is originated by the processor in a secure execution mode; computing a first error-correcting code as a function of memory transaction data and the secure enclave status bit, wherein the memory transaction data is associated with the memory transaction; and performing the memory transaction based on the first error-correcting code and the memory transaction data using a memory of the computing device, wherein the memory is external to the processor.[0071] Example 16 includes the subject matter of Example 15, and wherein the memory transaction data comprises a first number of bits and the first number of bits is less than a maximum number of data bits supported by the error-correcting code.[0072] Example 17 includes the subject matter of any of Examples 15 and 16, and wherein computing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises calculating a 
single-error correction and double- error detection (SECDED) error-correcting code.[0073] Example 18 includes the subject matter of any of Examples 15-17, and wherein the memory transaction data comprises sixty-four bits and the error-correcting code comprises seven bits of Hamming code and one bit of parity.[0074] Example 19 includes the subject matter of any of Examples 15-18, and wherein the secure execution mode comprises a secure enclave execution mode.[0075] Example 20 includes the subject matter of any of Examples 15-19, and wherein originating the memory transaction and the associated secure enclave status bit comprises determining, by the processor, whether the memory transaction is originated by the processor from a secure enclave; setting, by the processor, the secure enclave status bit in response to determining that the memory transaction is originated by the processor from the secure enclave; and clearing, by the processor, the secure enclave status bit in response to determining that the memory transaction is not originated by the processor from the secure enclave.[0076] Example 21 includes the subject matter of any of Examples 15-20, and wherein originating the memory transaction and the associated secure enclave status bit further comprises performing, by the processor, an encryption operation with the memory transaction data in response to determining that the memory transaction is originated by the processor in the secure execution mode.[0077] Example 22 includes the subject matter of any of Examples 15-21, and wherein performing the memory transaction comprises (i) determining whether the memory transaction is a write transaction, and (ii) writing, in response to determining that the memory transaction is a write transaction, the memory transaction data and the error-correcting code to the memory of the computing device; and computing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises computing, in response to determining that the memory transaction is a write transaction, the first error-correcting code as a function of the memory transaction data included in the memory transaction and the secure enclave status bit.[0078] Example 23 includes the subject matter of any of Examples 15-22, and wherein performing the memory transaction comprises (i) determining whether the memory transaction is a read transaction, (ii) reading, in response to determining that the memory transaction is a read transaction, the memory transaction data and a second error-correcting code corresponding to the memory transaction from the memory of the computing device, and (iii) determining whether the first error-correcting code matches the second error-correcting code; and computing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises computing, in response to determining that the memory transaction is a read transaction, the first error-correcting code as a function of the memory transaction data corresponding to the memory transaction and the secure enclave status bit.[0079] Example 24 includes the subject matter of any of Examples 15-23, and wherein performing the memory transaction further comprises returning the memory transaction data and the second error-correcting code in response to determining that the first error-correcting code matches the second error-correcting code.[0080] Example 25 includes the subject matter of any of Examples 15-24, and 
wherein performing the memory transaction further comprises determining whether a bit error has occurred in a bit position corresponding to the secure enclave status bit in response to determining that that the first error-correcting code does not match the second error-correcting code; and generating an error condition in response to determining that the bit error has occurred in the bit position corresponding to the secure enclave status bit.[0081] Example 26 includes the subject matter of any of Examples 15-25, and wherein generating the error condition comprises generating a machine check exception.[0082] Example 27 includes the subject matter of any of Examples 15-26, and wherein performing the memory transaction further comprises determining whether an odd-numbered bit error has occurred based on the first error-correcting code and the second error-correcting code in response to determining that the first error-correcting code does not match the second error-correcting code; and generating an error condition in response to determining that an odd- numbered bit error has not occurred; wherein determining whether the bit error has occurred in the bit position corresponding to the secure enclave status bit comprises determining whether the bit error has occurred in the bit position corresponding to the secure enclave status bit in response to determining that that an odd-numbered bit error has occurred. [0083] Example 28 includes the subject matter of any of Examples 15-27, and wherein performing the memory transaction further comprises attempting to correct the bit error in the memory transaction data and the second error-correcting code to generate a corrected memory transaction data and a corrected second error-correcting code in response to determining that the bit error has not occurred in the bit position corresponding to the secure enclave status bit; determining whether the bit error was corrected in response to attempting to correct the bit error; generating an error condition in response to determining that the bit error was not corrected; and returning the corrected memory transaction data and the corrected second error- correcting code in response to determining that the bit error was corrected.[0084] Example 29 includes a computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 15-28.[0085] Example 30 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 15-28.[0086] Example 31 includes a computing device comprising means for performing the method of any of Examples 15-28.[0087] Example 32 includes a computing device for secure memory access, the computing device comprising means for originating, by a processor of the computing device, a memory transaction and an associated secure enclave status bit, wherein the secure enclave status bit is indicative of whether the memory transaction is originated by the processor in a secure execution mode; means for computing a first error-correcting code as a function of memory transaction data and the secure enclave status bit, wherein the memory transaction data is associated with the memory transaction; and means for performing the memory transaction based on the first error-correcting code and the memory transaction data using a 
memory of the computing device, wherein the memory is external to the processor.[0088] Example 33 includes the subject matter of Example 32, and wherein the memory transaction data comprises a first number of bits and the first number of bits is less than a maximum number of data bits supported by the error-correcting code.[0089] Example 34 includes the subject matter of any of Examples 32 and 33, and wherein the means for computing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises means for calculating a single-error correction and double-error detection (SECDED) error-correcting code. [0090] Example 35 includes the subject matter of any of Examples 32-34, and wherein the memory transaction data comprises sixty-four bits and the error-correcting code comprises seven bits of Hamming code and one bit of parity.[0091] Example 36 includes the subject matter of any of Examples 32-35, and wherein the secure execution mode comprises a secure enclave execution mode.[0092] Example 37 includes the subject matter of any of Examples 32-36, and wherein the means for originating the memory transaction and the associated secure enclave status bit comprises means for determining, by the processor, whether the memory transaction is originated by the processor from a secure enclave; means for setting, by the processor, the secure enclave status bit in response to determining that the memory transaction is originated by the processor from the secure enclave; and means for clearing, by the processor, the secure enclave status bit in response to determining that the memory transaction is not originated by the processor from the secure enclave.[0093] Example 38 includes the subject matter of any of Examples 32-37, and wherein the means for originating the memory transaction and the associated secure enclave status bit further comprises means for performing, by the processor, an encryption operation with the memory transaction data in response to determining that the memory transaction is originated by the processor in the secure execution mode.[0094] Example 39 includes the subject matter of any of Examples 32-38, and wherein the means for performing the memory transaction comprises (i) means for determining whether the memory transaction is a write transaction, and (ii) means for writing, in response to determining that the memory transaction is a write transaction, the memory transaction data and the error-correcting code to the memory of the computing device; and the means for computing the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises means for computing, in response to determining that the memory transaction is a write transaction, the first error-correcting code as a function of the memory transaction data included in the memory transaction and the secure enclave status bit.[0095] Example 40 includes the subject matter of any of Examples 32-39, and wherein the means for performing the memory transaction comprises (i) means for determining whether the memory transaction is a read transaction, (ii) means for reading, in response to determining that the memory transaction is a read transaction, the memory transaction data and a second error-correcting code corresponding to the memory transaction from the memory of the computing device, and (iii) determining whether the first error-correcting code matches the second error-correcting code; and the means for computing 
the first error-correcting code as a function of the memory transaction data and the secure enclave status bit comprises means for computing, in response to determining that the memory transaction is a read transaction, the first error-correcting code as a function of the memory transaction data corresponding to the memory transaction and the secure enclave status bit.[0096] Example 41 includes the subject matter of any of Examples 32-40, and wherein the means for performing the memory transaction further comprises means for returning the memory transaction data and the second error-correcting code in response to determining that the first error-correcting code matches the second error-correcting code.[0097] Example 42 includes the subject matter of any of Examples 32-41, and wherein the means for performing the memory transaction further comprises means for determining whether a bit error has occurred in a bit position corresponding to the secure enclave status bit in response to determining that that the first error-correcting code does not match the second error-correcting code; and means for generating an error condition in response to determining that the bit error has occurred in the bit position corresponding to the secure enclave status bit.[0098] Example 43 includes the subject matter of any of Examples 32-42, and wherein the means for generating the error condition comprises means for generating a machine check exception.[0099] Example 44 includes the subject matter of any of Examples 32-43, and wherein the means for performing the memory transaction further comprises means for determining whether an odd-numbered bit error has occurred based on the first error-correcting code and the second error-correcting code in response to determining that the first error-correcting code does not match the second error-correcting code; and means for generating an error condition in response to determining that an odd-numbered bit error has not occurred; wherein the means for determining whether the bit error has occurred in the bit position corresponding to the secure enclave status bit comprises means for determining whether the bit error has occurred in the bit position corresponding to the secure enclave status bit in response to determining that that an odd-numbered bit error has occurred.[00100] Example 45 includes the subject matter of any of Examples 32-44, and wherein the means for performing the memory transaction further comprises means for attempting to correct the bit error in the memory transaction data and the second error-correcting code to generate a corrected memory transaction data and a corrected second error-correcting code in response to determining that the bit error has not occurred in the bit position corresponding to the secure enclave status bit; means for determining whether the bit error was corrected in response to attempting to correct the bit error; means for generating an error condition in response to determining that the bit error was not corrected; and means for returning the corrected memory transaction data and the corrected second error-correcting code in response to determining that the bit error was corrected. |
This application is directed to power management at a processor system having a plurality of domains. Power samples are collected from the domains and combined to generate a system temperature profile including a temporal sequence of system temperature values. When the system temperature profile satisfies a first criterion, it is determined in real time whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion. In accordance with a determination that the respective system temperature value satisfies the second criterion, a power management engine determines power budgets of the domains on a firmware level and enables operations of the domains according to the power budgets. In accordance with a determination that the respective system temperature value satisfies the third criterion, a subset of domains are selected to apply a respective power throttling action directly on a hardware level. |
What is claimed is:1. A power management method, comprising, at a processor system having a plurality of domains: collecting a plurality of power samples from the plurality of domains over a time duration, each power sample including at least one of temperature, power consumption, and current values associated with a respective domain; combining a subset of the plurality of power samples of the plurality of domains to generate a system temperature profile including a plurality of system temperature values; determining whether the system temperature profile satisfies a first criterion; and in accordance with a determination that the system temperature profile satisfies the first criterion at a first time t1, at a predefined controlling frequency: in real time, determining whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion; in accordance with a determination that the respective system temperature value satisfies the second criterion, determining power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets; and in accordance with a determination that the respective system temperature value satisfies the third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains directly on a hardware level.2. The method of claim 1, further comprising: generating a local power profile of a first domain based on a first subset of the plurality of power values collected at the first domain; identifying, on the local power profile, a first temperature value and a second temperature value corresponding to a start and an end of a time window having a predefined window size, respectively; determining a temperature difference between the first and second temperature values; determining whether the temperature difference exceeds a predefined temperature increase limit; and
in accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, applying a power throttling action to the first domain on the hardware level.3. The method of claim 1, further comprising: identifying, on the system temperature profile, a first temperature value and a second temperature value corresponding to a start and an end of a time window having a predefined window size, respectively; determining a temperature difference between the first and second temperature values; determining whether the temperature difference exceeds a predefined temperature increase limit; and in accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, selecting the subset of domains and applying the respective power throttling action to each of the subset of domains on the hardware level.4. The method of claim 1, wherein: for each of the subset of domains, the respective throttling action includes one or more of: architecture throttling, power rail scaling, and clock throttling; architecture throttling is applied to periodically block traffic to the respective domain including DRAM or suppress high current spikes in the respective domain including a processor unit; clock throttling is applied to reduce a clock frequency of the respective domain; and performance point throttling is applied to adjust the clock frequency and power supply voltages of the respective domain jointly.The method of claim 1, wherein for each of the subset of domains, the respective throttling action is associated with a throttling threshold for a subset of power values corresponding to the respective domain, the method further comprising: in accordance with a predefined power management policy: determining by a power management engine the throttling threshold associated with the respective throttling action of the respective domain; and
in accordance with a determination that the subset of power values of the respective domain exceeds the throttling threshold, implementing the respective throttling action on the respective domain.6. The method of claim 1, further comprising: determining a total power budget for the entire processor system; and dynamically assigning a respective portion of the total power budget to each of the plurality of domains.7. The method of claim 1, determining the power budgets among the plurality of domains on the firmware level further comprising: based on the respective system temperature value, selecting one of a plurality of predefined power performance states (P-states) for each of a plurality of processors, each of the P-states corresponding to a predefined set of power and performance settings of the processors; and redistributing the power budgets among the plurality of domains according to the predefined set of power and performance settings of the selected P-state for each of the plurality of processors.8. The method of claim 1, wherein: the first criterion requires that the system temperature profile increases to and beyond a first temperature threshold TSETat a corresponding time; the second criterion requires that a system temperature value at a corresponding time is between the first temperature threshold TSETand a second temperature threshold TTH; the third criterion requires that a system temperature value at a corresponding time is greater than the second temperature threshold TTHor that the system temperature value stays above the first temperature threshold TSETfor an extended time longer than a threshold duration of time; the first temperature threshold TSETis less than the second temperature threshold TTH, the second temperature threshold TTHless than a maximal temperature TMAXbelow which the processor system is controlled.9. The method of claim 1, wherein: the plurality of power samples are collected from the plurality of domains according to a local sampling rate; each system temperature value is combined from a respective subset of power samples of the plurality of domains according to a global pooling rate; and the local sampling rate is greater than the global pooling rate, and the global pooling rate is greater than the predefined controlling frequency.10. The method of claim 1, wherein each domain is driven by one or more power rails, the method further comprising for each power rail: collecting a respective set of current values; and in accordance with a determination that the respective set of current values have been greater than a first threshold current for a first duration of time or greater than a second threshold current for a second duration of time, enabling a power throttling action on the respective power rail of the respective domain; wherein the first threshold current is greater than the second threshold current, and the first duration of time is shorter than the second duration of time.11. The method of claim 1, wherein the respective system temperature value belongs to a temporally-ordered sequence of system temperature values that are monitored subsequently to the first time t1on the system temperature profile according to the predefined controlling frequency.12. The method of claim 1, wherein the processor system includes a plurality of processor units, one or more memory units, and power management integrated circuit (PMIC), and each of the plurality of domains includes a distinct subset of the processor system.13. 
An electronic system, comprising: one or more processor clusters; a plurality of power sensors distributed on the electronic system, wherein the power sensors are configured to collect a plurality of power samples from a plurality of power domains of the electronic system, each power sample including at least one of temperature, power consumption, and current values associated with a respective power domain; and
a power management engine coupled to the plurality of power sensors, wherein the power management engine is configured to: collect a plurality of power samples from the plurality of domains over a time duration, each power sample including at least one of temperature, power consumption, and current values associated with a respective domain; combine a subset of the plurality of power samples of the plurality of domains to generate a system temperature profile including a plurality of system temperature values; determine whether the system temperature profile satisfies a first criterion; and in accordance with a determination that the system temperature profile satisfies the first criterion at a first time t1, at a predefined controlling frequency: in real time, determine whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion; in accordance with a determination that the respective system temperature value satisfies the second criterion, determine power budgets of the plurality of domains on a firmware level and enable operations of the plurality of domains according to the power budgets; and in accordance with a determination that the respective system temperature value satisfies the third criterion, select a subset of domains and apply a respective power throttling action to each of the subset of domains directly on a hardware level.14. The electronic system of claim 13, wherein the power management engine is configured to: generate a local power profile of a first domain based on a first subset of the plurality of power values collected at the first domain; identify, on the local power profile, a first temperature value and a second temperature value corresponding to a start and an end of a time window having a predefined window size, respectively; determine a temperature difference between the first and second temperature values; determine whether the temperature difference exceeds a predefined temperature increase limit; and
in accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, apply a power throttling action to the first domain on the hardware level.15. The electronic system of claim 13, wherein the power management engine is configured to: identify, on the system temperature profile, a first temperature value and a second temperature value corresponding to a start and an end of a time window having a predefined window size, respectively; determine a temperature difference between the first and second temperature values; determine whether the temperature difference exceeds a predefined temperature increase limit; and in accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, selecting the subset of domains and applying the respective power throttling action to each of the subset of domains on the hardware level.16. The electronic system of claim 13, wherein: for each of the subset of domains, the respective throttling action includes one or more of: architecture throttling, power rail scaling, and clock throttling; architecture throttling is applied to periodically block traffic to the respective domain including DRAM or suppress high current spikes in the respective domain including a processor unit; clock throttling is applied to reduce a clock frequency of the respective domain; and performance point throttling is applied to adjust the clock frequency and power supply voltages of the respective domain jointly.17. The electronic system of claim 13, wherein for each of the subset of domains, the respective throttling action is associated with a throttling threshold for a subset of power values corresponding to the respective domain, and the power management engine is configured to: in accordance with a predefined power management policy: determine by a power management engine the throttling threshold associated with the respective throttling action of the respective domain; and
in accordance with a determination that the subset of power values of the respective domain exceeds the throttling threshold, implement the respective throttling action on the respective domain.18. The electronic system of claim 13, wherein the power management engine is configured to: determine a total power budget for the entire processor system; and dynamically assign a respective portion of the total power budget to each of the plurality of domains.19. A non-transitory computer-readable storage medium, having instructions stored thereon, which when executed by a processor system having a plurality of domains cause the processor system to perform: collecting a plurality of power samples from the plurality of domains over a time duration, each power sample including at least one of temperature, power consumption, and current values associated with a respective domain; combining a subset of the plurality of power samples of the plurality of domains to generate a system temperature profile including a plurality of system temperature values; determining whether the system temperature profile satisfies a first criterion; and in accordance with a determination that the system temperature profile satisfies the first criterion at a first time t1, at a predefined controlling frequency: in real time, determining whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion; in accordance with a determination that the respective system temperature value satisfies the second criterion, determining power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets; and in accordance with a determination that the respective system temperature value satisfies the third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains directly on a hardware level.
20. An apparatus for managing power at a processor system having a plurality of domains, the apparatus comprising: means for collecting a plurality of power samples from the plurality of domains over a time duration, each power sample including at least one of temperature, power consumption, and current values associated with a respective domain; means for combining a subset of the plurality of power samples of the plurality of domains to generate a system temperature profile including a plurality of system temperature values; means for determining whether the system temperature profile satisfies a first criterion; and means for in accordance with a determination that the system temperature profile satisfies the first criterion at a first time t1, at a predefined controlling frequency: in real time, determining whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion; in accordance with a determination that the respective system temperature value satisfies the second criterion, determining power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets; and in accordance with a determination that the respective system temperature value satisfies the third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains directly on a hardware level. |
Dynamic Power Management for SoC-based Electronic DevicesRELATED APPLICATIONS[0001] This application claims priority to U.S. Provisional Patent Application No. 63/215,355, titled “Dynamic Power Management for SoC-based Electronic Devices,” filed on June 25, 2021, and U.S. Provisional Patent Application No. 63/215,351, titled “Hierarchical Power Management Architecture for SoC-based Electronic Devices,” filed on June 25, 2021, each of which is hereby incorporated by reference in its entirety.[0002] This application also claims priority to U.S. Patent Application No. 17/701,534, titled “Dynamic Power Management for SoC-based Electronic Devices,” filed on March 22, 2022, which is hereby incorporated by reference in its entirety.TECHNICAL FIELD[0003] This application relates generally to power management of an electronic device (e.g., having a system on a chip (SoC)), particularly to methods, systems, and non- transitory computer-readable media for monitoring and controlling power consumption and device performance of an SoC-based electronic device.BACKGROUND[0004] An electronic device oftentimes integrates a system on a chip (SoC) with a power management integrated circuit (PMIC), communication ports, external memory or storage, and other peripheral function modules on a main logic board. The SoC includes one or more microprocessor or central processing unit (CPU) cores, memory, input/output ports, and secondary storage in a single package. The PMIC is typically disposed adjacent to the SoC on the main logic board and provides multiple direct current (DC) power supply rails to the SoC via conductive wires formed on the main logic board. The PMIC provides a plurality of power rails configured to drive operations of the SoC. Power characteristics (e.g., power consumption, current, and voltage) are monitored and controlled for each power rail and a corresponding portion of the SOC. It would be beneficial to have a more efficient and flexible power management mechanism than the current practice.
SUMMARY[0005] To address power management issues of an SoC -based electronic device, it would be highly desirable to provide a semiconductor device or system with a plurality of distributed power sensors and a power management engine in addition to a plurality of processor clusters, cluster memory or cache, PMIC, and system memory. Various implementations of systems, methods and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the attributes described herein. Without limiting the scope of the appended claims, after considering this disclosure, and particularly after considering the section entitled “Detailed Description” one will understand how the aspects of various implementations are used to provide a semiconductor device with a dynamic power management hierarchy configured to control power management of the semiconductor device at a desirable control rate from a firmware level and/or a hardware level. Specifically, the power management engine is configured to collect power samples from the distributed power sensors, generate power profiles and power throttling thresholds from the power samples, implement a global firmware-level power control operation by determining power budgets among different power domains and enabling global and local hardware-level power control operations (e.g., a local throttling action) on the different power domains.[0006] In this application, “power” may broadly refer to any power-related characteristics. For example, power samples include temperatures, power consumptions, current values, or a combination thereof, and power sensors include any of temperature, power consumption, and current sensors. Power profiles can be any of temperature, power consumption, and current profiles. Power control operations are applied to control temperature, power consumption, or current profiles.[0007] In one aspect, a power management method is implemented at a processor system having a plurality of domains. The method includes collecting a plurality of power samples from the plurality of domains over a time duration, wherein each power sample includes at least one of temperature, power consumption, and current values associated with a respective domain. The method further includes combining a subset of the plurality of power samples of the plurality of domains to generate a system temperature profile including a plurality of system temperature values and determining whether the system temperature profile satisfies a first criterion. The method further includes in accordance with a
determination that the system temperature profile satisfies the first criterion at a first time, at a predefined controlling frequency, in real time, determining whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion. The method further includes in accordance with a determination that the respective system temperature value satisfies a second criterion, determining power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets. The method further includes in accordance with a determination that the respective system temperature value satisfies a third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains directly on a hardware level.[0008] In another aspect, a power management method is implemented at a processor system having a plurality of domains. The method includes collecting a plurality of power samples from the plurality of domains over a time duration, and each power sample includes at least one or temperature, power consumption, and current values associated with a respective domain. The method further includes combining a subset of the plurality of power samples of the plurality of domains to generate a system power profile including a plurality of system power values and determining whether the system power profile satisfies a first criterion. The method further includes, in accordance with a determination that the system power profile satisfies the first criterion at a first time, at a predefined controlling frequency, in real time, determining whether a respective system power value of the system power profile satisfies a second criterion or a third criterion. The method further includes, in accordance with a determination that the respective system power value satisfies the second criterion, determining power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets. The method further includes, in accordance with a determination that the respective system power value satisfies the third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains on a hardware level.[0009] In yet another aspect, an electronic system includes one or more processor clusters, first memory (e.g., a cache 208 in Figure 2), power management integrated circuit (PMIC), and second memory (e.g., memory 104 in Figure 2). A plurality of power sensors is distributed on the electronic system and configured to collect or preprocess a plurality of power samples from a plurality of power domains. Each power sample includes at least one
of temperature, power consumption, and current values associated with a respective power domain. A power management engine is coupled to the plurality of power sensors and configured to receive the plurality of power samples from the plurality of power domains and process the power samples based on locations of the corresponding power sensors to generate one or more power profiles and a plurality of power throttling thresholds. The power management engine is configured to implement a global power control operation having a first rate based on the one or more power profiles by determining power budgets of a plurality of power domains on a firmware level and enabling operations of the plurality of power domains according to the power budgets. The power management engine is also configured to based on the one or more power profiles, enable the plurality of power domains to implement a plurality of local power control operations based on the plurality of power throttling thresholds on a hardware level. The local power control operations have second rates greater than the first rate.[0010] These illustrative embodiments and implementations are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there. Other implementations and advantages may be apparent to those skilled in the art in light of the descriptions and drawings in this specification.BRIEF DESCRIPTION OF THE DRAWINGS[0011] Figure 1 is a block diagram of an example system module in a typical electronic device, in accordance with some implementations.[0012] Figure 2 is a block diagram of a power management system of the electronic device shown in Figure 1, in accordance with some implementations.[0013] Figure 3 is a cross sectional view of an integrated semiconductor device having an SoC and a PMIC chip, in accordance with some implementations.[0014] Figure 4 is a block diagram of a processor system of an electronic device including a plurality of distributed power sensors and a power management engine, in accordance with some implementations.[0015] Figures 5A and 5B are block diagrams of power management system configured to manage power of an SoC -based electronic device on a firmware level and a hardware level, in accordance with some implementations, respectively.
[0016] Figure 5C illustrates a comprehensive power management scheme in which power of an SoC-based electronic device is managed on both a firmware level and a hardware level, in accordance with some implementations.[0017] Figure 6 is a temporal diagram of device temperatures of an electronic device including an SoC, in accordance with some implementations.[0018] Figure 7 is a flow diagram of a method of managing power consumption of anSoC-based electronic device, in accordance with some implementations.[0019] Figure 8 is a flow diagram of a method of managing power consumption of anSoC-based electronic device, in accordance with some implementations.[0020] For a better understanding of the various described implementations, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures. Like reference numerals refer to corresponding parts throughout the drawings.DESCRIPTION OF IMPLEMENTATIONS[0021] Reference will now be made in detail to specific embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous non-limiting specific details are set forth in order to assist in understanding the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that various alternatives may be used without departing from the scope of claims and the subject matter may be practiced without these specific details.[0022] Various embodiments of this application are directed to a dynamic power management hierarchy configured to control power management of a semiconductor device (e.g., an SoC) at a desirable control rate from a firmware level and/or a hardware level.Specifically, the power management engine is configured to collect power samples from the distributed power sensors, generate power profiles and power throttling thresholds from the power samples, implement a global firmware-level power control operation by determining power budgets among different power domains and enabling global and local hardware-level power control operations (e.g., a local throttling action) on the different power domains.Compared with such a dynamic power management hierarchy, existing solutions monitor and control power characteristics (e.g., power consumption, current, and voltage) for each power
rail and a corresponding portion of the SOC. The dynamic power management hierarchy offers a more efficient and flexible power management mechanism.[0023] Figure 1 is a block diagram of an example system module 100 in a typical electronic device, in accordance with some implementations. System module 100 in this electronic device includes at least a system on a chip (SoC) 102 having one or more processors, memory modules 104 for storing programs, instructions and data, an input/output (I/O) controller 106, one or more communication interfaces such as network interfeces 108, and one or more communication buses 150 for interconnecting these components. In some implementations, I/O controller 106 allows SoC 102 to communicate with an I/O device (e.g., a keyboard, a mouse or a touch screen) via a universal serial bus interface. In some implementations, network interfaces 108 include one or more interfaces for Wi-Fi, Ethernet and Bluetooth networks, each allowing the electronic device to exchange data with an external source, e.g., a server or another electronic device. In some implementations, communication buses 150 include circuitry (sometimes called a chipset) that interconnects and controls communications among various system components included in system module 100.[0024] In some implementations, memory modules 104 include high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices. In some implementations, memory modules 104 include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some implementations, memory modules 104, or alternatively the non-volatile memory device(s) within memory modules 104, include a non-transitory computer readable storage medium. In some implementations, memory slots are reserved on system module 100 for receiving memory modules 104. Once inserted into the memory slots, memory modules 104 are integrated into system module 100.[0025] In some implementations, system module 100 further includes one or more components selected from:• a memory controller 110 that controls communication between SoC 102 and memory components, including memory modules 104, in electronic device;
• solid-state drives (SSDs) 112 that apply integrated circuit assemblies to store data in the electronic device, and in many implementations, are based on NAND or NOR memory configurations;• a hard drive 114 that is a conventional data storage device used for storing and retrieving digital information based on electromechanical magnetic disks;• a power supply connector 116 that includes one or more direct current (DC) power supply interfaces each of which is configured to receive a distinct DC supply voltage;• power management integrated circuit (PMIC) 118 that modulates the distinct DC supply voltages received via the DC power supply interfaces to other desired internal supply voltages, e.g., 5V, 3.3V or 1.8V, as required by various components or circuits (e.g., processor cores in the SoC 102) within electronic device;• a graphics module 120 that generates a feed of output images to one or more display devices according to their desirable image/video formats; and• a sound module 122 that facilitates the input and output of audio signals to and from the electronic device under control of computer programs.[0026] It is noted that communication buses 150 also interconnect and control communications among various system components including components 110-122.[0027] One skilled in the art knows that other non-transitory computer readable storage media can be used, as new data storage technologies are developed for storing information in the non-transitory computer readable storage media in the memory modules 104 and in SSDs 112. These new non-transitory computer readable storage media include, but are not limited to, those manufactured from biological materials, nanowires, carbon nanotubes and individual molecules, even though the respective data storage technologies are currently under development and yet to be commercialized.[0028] In some implementations, SoC 102 is implemented in a semiconductor package including one or more integrated circuits, and each integrated circuit integrates a subset of: one or more microprocessor or CPU cores, memory, input/output ports and secondary storage on a single substrate. PMIC 118 is also implemented in a semiconductor package including one or more integrated circuits each of which is formed on a single substrate. SoC 102 is configured to receive one or more internal supply voltages (also called
rail voltages) provided by PMIC 118 via one or more power rails. In some implementations, both SoC 102 and PMIC 118 are mounted on a main logic board, e.g., on two distinct areas of the main logic board, and electrically coupled to each other via conductive wires formed in the main logic board. This arrangement introduces parasitic effects and electrical noise that could compromise performance of the SoC, e.g., cause a voltage drop at an internal supply voltage. Alternatively, in accordance with various implementations described below, semiconductor dies of SoC 102 and PMIC 118 are vertically packaged in an integrated semiconductor device 140 (e.g., in Figure 3), such that they are electrically coupled to each other via electrical connections that are not formed in the main logic board. Such vertical arrangement of the semiconductor dies of SoC 102 and PMIC 118 reduces a length of electrical connections between SoC 102 and PMIC 118 and avoids performance degradation caused by routing conductive wires on the main logic board.[0029] In some implementations, a generic PMIC 118 is configured to drive different types of SoC 102 in different types of electronic devices. Regardless of whether PMIC 118 and SoC 102 are arranged side by side or vertically, PMIC 118 occupies the same footprint with respect to the main circuit board, while SoC 102 may have a distinct footprint based on the electronic modules integrated therein. PMIC 118 includes a plurality of voltage regulator units that are arranged in a field programmable array. The plurality of voltage regulator units are identical to each other, or includes more than one type of voltage regulator units. In a specific electronic device, control signals are determined based on rail voltages and rail currents of power rails required to power SOC 102 and other electronic modules, if any. For each of these power rails, a corresponding control signal is used to select a subset of voltage regulator units in the field programmable array of PMIC 118, and the selected voltage regulator units provide a rail current at a rail voltage to the respective power rail collectively. As such, PMIC 118 is reconfigured by these control signals to provide the rail voltages and currents to the power rails of SoC 102, and each voltage regulator unit in a plurality of configurable voltage regulators in PMIC 118 is either redundant or selected to drive one of the power rails by one of the control signals.[0030] Figure 2 is a block diagram of an example electronic device 200 having one or more processing clusters 202 (e.g., first processing cluster 202-1, M-th processing cluster 202-M), in accordance with some implementations. Electronic device 200 further includes a cache 208 and a memory 104 in addition to processing clusters 202. Cache 208 is coupled to
processing clusters 202 on SOC 102, which is further coupled to memory 104 that is external to SOC 102. Each processing cluster 202 includes one or more processors (also called processing cores) 204 and a cluster cache 206. Cluster cache 206 is coupled to one or more processors 204, and maintains one or more request queues for one or more processors 204. In some implementations, each processor 204 further includes a core cache (not shown in Figure 2) that is optionally split into an instruction cache and a data cache, and core cache stores instructions and data that can be immediately executed by the respective processor 204. In an example, first processing cluster 202-1 includes first processor 204-1 N-th processor 204-N, and first cluster cache 206-1, where N is an integer greater than 1. In some implementations, SOC 102 only includes a single processing cluster 202-1. Alternatively, in some implementations, SOC 102 includes at least an additional processing cluster 202, e.g., M-th processing cluster 202-M. M-th processing cluster 202-M includes a first processor, an N’-th processor, and an M-th cluster cache, where N’ is an integer greater than 1. [0031] In some implementations, the one or more processing clusters 202 are configured to provide a central processing unit for an electronic device and are associated with a hierarchy of caches. For example, the hierarchy of caches includes three levels that are distinguished based on their distinct operational speeds and sizes. For the purposes of this application, a reference to “the speed” of a memory (including a cache memory) relates to the time required to write data to or read data from the memory (e.g., a faster memory has shorter write and/or read times than a slower memory), and a reference to “the size” of a memory relates to the storage capacity of the memory (e.g., a smaller memory provides less storage space than a larger memory). The core cache, cluster cache 206, and cache 208 correspond to a first level (LI) cache, a second level (L2) cache, and a third level (L3) cache, respectively. Each core cache holds instructions and data to be executed directly by a respective processor 204, and has the fastest operational speed and smallest size among the three levels of memory. For each processing cluster 202, the cluster cache 206 is slower operationally than the core cache and bigger in size, and holds data that is more likely to be accessed by processors 204 of respective processing cluster 202. The cache 208 is shared by the plurality of processing clusters 202, and bigger in size and slower in speed than each core cache and cluster cache 206.[0032] The processing clusters 202 issue prefetch requests to extract the instructions and data to be held by each core cache from the cluster cache 206, cache 208 or memory 104.
If the prefetch requests are satisfied by the cluster cache 206, the cluster cache 206 provides the instructions and data to the respective core cache for execution by the processors 204. Conversely, if the prefetch requests are not satisfied by the cluster cache 206, the prefetch requests are sent to the cache 208 to extract the instructions and data. If the prefetch requests are satisfied by the cache 208, the cache 208 provides the instructions and data to the cluster cache 206, which further passes the instructions and data to the respective core cache for execution by the processors 204. Conversely, if the prefetch requests are not satisfied by the cache 208, the prefetch requests are sent to the memory 104 external to the SoC 102 to extract the instructions and data. The memory 104 provides the instructions and data to the cache 208, which passes the instructions and data to the cluster cache 206 and then to the respective core cache.[0033] Additionally, the processing clusters 202 issue memory access requests to write data into and read data from the cluster cache 206, cache 208 or memory 104 during normal operation of each processing cluster. Each memory access request is passed sequentially from the cluster cache 206, cache 208, and memory 104, until the respective memory access request reaches a target cache or memory. A data to be written into the target cache or memory is also passed sequentially from the cluster cache 206, cache 208, and memory 104, until the respective data reach the target cache or memory. In contrast, a data read from the target cache or memory is provided directly to the respective core caches to be used by the processors 204.[0034] In various implementations of this application, operations of the processing clusters 202, PMIC 118, cache 208, and memory 104 consume power and create heat on the electronic device 200, and a power management engine 210 is applied to manage power consumptions of the electronic device 200 from both a firmware level and a hardware level. Specifically, the power management engine 210 is configured to receive the plurality of power samples from a plurality of power sensors distributed on an electronic device 200. The SOC 102, PMIC 118, and memory 104 are partitioned to a plurality of power domains. The power samples are processed based on locations of the corresponding power sensors to generate one or more power profiles and a plurality of power throttling thresholds for the individual power domains. Each power profile is optionally a system power profile of the entire electronic device 200 or a combination of multiple domains (e.g., a processor cluster 202, an SoC 102) or a local power profile of an individual power domain (e.g., a processor
204). Based on the one or more power profiles, the power management engine 210 implements a global power control operation having a first rate by determining power budgets among the plurality of power domains and enabling operations of the plurality of power domains according to the power budgets. Further, based on the local power profiles, the power management engine 210 enables a plurality of local power control operations having second rates on the plurality of power domains (e.g., the memory 104, PMIC 118, processing cluster 202-M) based on the plurality of power throttling thresholds. The local power control operations are more direct than the global power control, and each second rate is greater than the first rate. For example, the first rate of the global power control operation is 50 μs and a corresponding thermal response lasts for 500 μs, while the second rate of the local power control operations is 20 μs and a corresponding thermal response lasts for 100 μs. By these means, the electronic device 200 enables a hierarchical scheme to manage power consumption from both a firmware level and a hardware level.[0035] In some implementations, the one or more power profiles include a system power profile tracking an average power consumption or an average total current of a subset or all of the plurality of power domains of the electronic system. The power management engine 210 is configured to, in accordance with the system power profile, enable the global power control operation and the plurality of local power control operations based on a requirement for a power control rate, the first rate of the global power control operation, and the second rates of the local power control operations. If the requirement for the power control rate is faster than the first rate, then the local power control operations need to be implemented directly to reduce the power consumption or total current, i.e., by a “hard throttling” process implemented directly on the hardware level. If the requirement for the power control rate is less than the first rate, a global power control operation may be taken to adjust the power budgets (e.g., P-states of the power domains) and enable local power control operations based on the power budgets, i.e., by a “soft throttling” process initiated from the firmware level. The requirement for the power control rate is determined with reference to a maximal temperature TMAX, a maximal power consumption PMAX, and a maximal current value IMAXtolerated by the electronic system. By these means, the system power profile is controlled below a predefined upper limit for the subset or all of the plurality of power domains of the electronic system.
[0036] In some implementations, the one or more power profiles include a local current profile tracking a current of a first power domain. The power management engine 210 is configured to in accordance with the local current profile, enable the global power control operation and a local power control operation focused on the first power domain based on a requirement for a power control rate, the first rate of the global power control operation, and the second rates of the local power control operations. The requirement for the power control rate is determined with reference to a maximal temperature TMAX, a maximal power consumption PMAX, and a maximal current value IMAXtolerated by the first power domain. By these means, the local current profile is controlled below a predefined current limit for the first power domain.[0037] Figure 3 is a cross sectional view of an integrated semiconductor device 300, in accordance with some implementations. Semiconductor device 300 integrates at least one SoC die 202 and at least one PMIC die 118 in a semiconductor package, and includes at least a package substrate 304 having a first surface 304A and a second surface 304B that is opposite to first surface 304A. SoC die 202 is disposed on first surface 304A of package substrate 304, and PMIC die 118 is coupled to second surface 304B of package substrate 304. In some implementations, a first interposer 324 is disposed between SoC die 302 and first surface 304A of package substrate 304. In some implementations, a second interposer 328 is disposed between PMIC die 118 and second surface 304B of package substrate 304. In some implementations, the integrated semiconductor device 300 is disposed on a printed circuit board (PCB) with memory 104 and a power management engine 210. The power management engine 210 is configured to manage power consumption of an entire electronic system formed on the PCB on both a firmware level (i.e., a board level) and a hardware level (i.e., on an individual hardware level, such as on an SoC level and on a memory level). In some implementations, the integrated semiconductor device 300 includes one or more power domains, and the power management engine 210 is configured to manage power consumption of each individual power domain on the hardware level.[0038] Package substrate 304 further includes a plurality of first via interconnects 306 that pass through a body of package substrate 304 and is exposed on both first and second surfaces 304A and 304B, respectively. PMIC die 118 is electrically coupled to SoC die 202 via the plurality of first via interconnects 306 of package substrate 304. Specifically, PMIC die 118 includes a plurality of DC connections 308 configured to output a plurality of rail
voltages, provided to power rails. When PMIC die 118 is mounted on second surface 304B of package substrate 304, DC connections 308 are electrically coupled to the plurality of first via interconnects 306 of package substrate 304. In some implementations, SoC die 202 includes a plurality of power connections 312 configured to receive the plurality of rail voltages. When SoC die 202 is mounted on first surface 304A of package substrate 304, power connections 312 are electrically coupled to the plurality of first via interconnects 306 of package substrate 304. As such, PMIC die 118 is configured to provide DC power (i.e., rail voltages and rail current of power rails) to SoC die 202 via DC connections 308 of PMIC die 118, power connections 312 of SoC die 202, and first via interconnects 306 of package substrate 304. Further, by using very low impedance DC connections 308, the quality of the DC power provided PMIC die 118 to SoC die 202 is substantially improved relative to systems in which PMIC die 118 and SoC die 202 are separately packaged and positioned side by side on a main circuit board.[0039] In some implementations, a power management interface on PMIC die 118 is controlled by a master power management interface of SoC die 202, and configured to receive digital power control signals from SoC die 202. A subset of first via interconnects 306 is configured to transfer digital power control signals from SoC die 202 to PMIC die 118. [0040] SoC die 202 has a first footprint on package substrate 304, and PMIC 118 has a second footprint on package substrate 304. The first and second footprints at least partially overlap for the purposes of coupling DC connections 308 of PMIC die 118 and power connections 312 of SoC die 202 directly using the plurality of first via interconnects 306. In some situations, the first footprint of SoC die 202 is larger than and entirely encloses the second footprint of PMIC die 118. Alternatively, in some situations, the first footprint of SoC die 202 is offset from the second footprint of PMIC die 118, but at least partially overlaps the second footprint of PMIC die 118. DC connections 308 of PMIC die 118, power connections 312 of SoC die 202, and first via interconnects 306 of package substrate 304 are aligned and enclosed in an overlapped area of the first and second footprints.[0041] In some implementations, integrated semiconductor device 300 further includes a cover 314 coupled to first surface 304A of package substrate 304. Cover 314 is configured to conceal SoC die 202 and at least part of first surface 304A of package substrate 304, thereby protecting SoC die 202 and at least part of first surface 304A. Further, in some implementations, cover 314 is made of an electrically conductive material and configured to
be grounded to provide electrostatic shielding for SoC die 202 and any other circuit on first surface 304A, if completely concealed by cover 314, or the part of first surface 304A concealed by cover 314, if first surface 304A is only partially concealed by cover 314. In some situations, cover 314 is made of a thermally conductive material configured to dissipate heat generated by SoC die 202.[0042] In some implementations, semiconductor device 300 further includes a socket substrate 318. Socket substrate 318 has a third surface 318A facing second surface 304B of package substrate 304. Package substrate 304 is electrically coupled to socket substrate 318 via a plurality of electrical connectors 320. Specifically, second surface 304B of package substrate 304 includes a first area (e.g., a central area) to which PMIC die 118 is mechanically coupled and a second area (e.g., a peripheral area) where the plurality of electrical connectors 320 are located. In an example, the second area is adjacent to and surrounds the first area. It is noted that under some circumstances, semiconductor device 300 is provided with socket substrate 318. However, under some circumstances, socket substrate 318 is fixed on a circuit board of the electronic device in Figure 1, and is not part of integrated semiconductor device 300. Rather, semiconductor device 300 is a replaceable part that is provided to offer functions of a combination of PMIC die 118 and SoC die 202.[0043] In some implementations, third surface 318A of socket substrate 318 is substantially flat, and PMIC die 118 is disposed between second surface 304B of package substrate 304 and third surface 318A of socket substrate 318. Alternatively, in some implementations, socket substrate 318 includes a recessed portion 322 that is formed on third surface 318A and configured to receive PMIC die 118 when PMIC die 118 is mechanically and electrically coupled to second surface 304B of package substrate 304. In some situations, PMIC die 118 is suspended in recessed portion 322, i.e., separated from a bottom surface of recessed portion 322 by an air gap. Alternatively, in some situations, PMIC die 118 comes into contact with the bottom surface of recessed portion 322 directly or via an intermediate layer (e.g., an adhesive layer, a thermal spreader layer, or a layer that is both adhesive and a thermal spreader).[0044] In some implementations, semiconductor device 300 further includes one or more discrete electronic modules 330 (e.g., resistor, capacitor, inductor, transistors, and logic chip). Discrete electronic modules 330 may be electrically coupled in an input/output interface circuit of SoC die 202 to control input/output coupling for SoC die 202. Optionally,
a subset of discrete electronic modules 330 (e.g., components 330A) is disposed on first surface 304A of package substrate 304. Each component 330A may be contained within cover 314 or located outside cover 314. Optionally, a subset of discrete electronic modules 330 (e.g., components 330B) is mechanically coupled to second surface 304B of package substrate 304. If a respective component 330B has a low profile (e.g., thinner than a length of electrical connectors 320), component 330B may fit into a gap between second surface 304B of package substrate 304 and third surface 318A of socket substrate 318. Otherwise, if component 330B does not have a low profile (e.g., thicker than the length of electrical connectors 320), a respective component 330B can be received by recessed portion 322 of socket substrate 318 and disposed adjacent to PMIC die 118.[0045] SoC die 202 and PMIC die 118 are vertically arranged in semiconductor device 300. Power connections 312 of SoC die 202 and DC connections 308 of PMIC die 118 are aligned and positioned in proximity to each other, thereby reducing parasitic resistance and capacitance coupled to each power rail that provides a rail voltage to SoC die 202. It is noted that in some implementations, a plurality of PMIC dies 118 can be disposed in recessed portion 322 of socket substrate 318 and electrically coupled to one or more SoC dies 202 disposed on first surface 304A of package substrate 304. For example, two PMIC die 118 are disposed in recessed portion 322 of socket substrate 318 to power four SoC dies 202 collectively. One of SoC dies 202 optionally corresponds to a microprocessor or CPU core or a cluster of microprocessor or CPU cores.[0046] Additionally, in some implementations of this application, PMIC die 118 includes a field programmable array of voltage regulators that is configurable by control signals to drive different types of SoC dies 202. In some situations, the same PMIC die 118, package substrate 304, and socket substrate 318 are used to support the different types of SoC dies 202. Recessed portion 322 formed on socket substrate 318 has a fixed size to accommodate the same PMIC die 118, and first via interconnects 306 that pass through the body of package substrate 304 have fixed locations. Alternatively, in some situations, while footprint sizes of package substrate 304 and socket substrate 318 are varied for the different types of SoC dies, the same PMIC die 118 allows recessed portion 322 and first via interconnects 306 of package substrate 304 to remain unchanged, thereby avoiding custom designing PMIC die 118 and the entire package for each individual type of SoC die 202. As such, application of the field programmable array of voltage regulators in PMIC die 118
simplifies an assembly process and enhances cost efficiency of the semiconductor device 300.[0047] Figure 4 is a block diagram of a processor system 400 of an electronic device including a plurality of distributed power sensors 402 and a power management engine 210, in accordance with some implementations. The processor system 400 includes at least an SoC 102 and a power management engine 210. The SoC 102 has at least one or more processing clusters 202, system cache 208, and one or more Peripheral Component Interconnects (PCIs) and socket-to-socket controller 404. The SoC 102 is powered by one or more power rails that are powered by the PMIC 118. Power consumptions of the SoC 102 can be directly monitored by the power sensors 402 and reported to the power management engine 210. [0048] The SoC 102 is optionally coupled to one or more additional components that include, but are not limited to, memory 104 external to the processing clusters 202, PMIC 118 that is optionally integrated with the SoC 102, a system control, manageability and debug (CMD) component, a security processor, and an input/output (IO) controller 106. In some implementations, these components of the processor system 400 are mounted on a circuit board. These components in the processor system 400 are also powered by a plurality of power rails provided by the PMIC 118. Specifically, the PMIC 118 receives one or more input supply voltage and generates a plurality of power supply voltages to drive the plurality of power rails of the SoC 102, memory 104, PMIC 118, PCIs 404, and any other components in the processor system 400. As such, the power management engine 210 may monitor power consumptions of the components of the processor system 400 directly from the power rails driven by the PMIC 118.[0049] The plurality of power sensors 402 are distributed on a subset of the processor system 400, i.e., on one or more of the SoC 102, memory 104, PMIC 118, PCIs 404, system CMD component, security processor, IO controller 106, and the like. In some implementations, the power sensors 402 include a set of activity monitor units 406 (AMus, also called telemetry sources) and a set of temperature sensors 408. The AMUs 406 are configured to measure power consumptions, current values, or both associated with different power rails. In some embodiments, the AMUs 406 are configured to measure activity levels of the corresponding subset of the processor system 400, and the activity levels are used to estimate the power consumptions and/or current values of the corresponding subset of the processor system 400. The temperature sensors 408 are configured to measure temperature
values locally at the domains wherein the temperature sensors are disposed. For example, in Figure 4, the SoC 102 includes three processing clusters 202 A, 202B, and 202C, a system cache 208, and a PCI or socket-to-socket controller 404. Each of the processing clusters 202, system cache 208, and PCI or socket-to-socket controller 404 is coupled to one or more AMUs 406 configured to measure the power consumptions and/or current values of one or more power rails of the respective component and one or more temperature sensors 408 configured to measure the temperature values of the respective component.[0050] In some implementations, a subset of AMUs 406 are adjacent to each other.One of the subset of AMUs 406 is a regional AMU (R-AMU) 406, while other AMUs 406 in the subset are local AMUs 406. The regional AMU 406 collects power samples from the local AMUs 406, and optionally preprocess the collected power samples. For example, in the SoC 102, the AMU 406B coupled to a power rail of the second processing cluster 202B acts as a regional AMU of the subset of AMUs 406A-406E that are distributed on the SoC 102. The power samples collected from the subset of AMUs 406A-406E are optionally consolidated by the regional AMU 406B and sent to the power management engine 210. In some implementations, a subset of temperature sensors 408 are adjacent to each other and subject to control of one of temperature sensors 408, and the one of the subset of temperature sensors 408 is a temperature sensor hub 408. For example, in the SoC 102, the temperature sensor 408C coupled to the third processing cluster 202C acts as a temperature sensor hub of the subset of temperature sensors 408A-408E that are distributed on the SoC 102. The temperature samples collected from the subset of temperature sensors 408A-408E are optionally consolidated by the temperature sensor hub 408C and sent to the power management engine 210. In some situations, the temperature sensor hub 408C also collects and/or consolidates power samples from the AMUs 406 around the hub 408C, and the regional AMU 406B also collects and/or consolidates power samples from the temperature sensors 408 around the regional AMU 406B.[0051] In some implementations, each processing cluster 202 includes a plurality of processors 204A-204D (also called processor cores 204) and cluster cache 206. A number of temperature sensors 408 are distributed on the processors 204 and cluster cache 206. For example, each processor 204 has two temperature sensors 408, and each cluster cache 206 has a single temperature sensor 408. A temperature sensor hub 408H includes two controllers
and is configured to consolidate the temperature samples collected by the temperature sensors 408 of the entire processing cluster 202.[0052] In some implementations, power samples (e.g., power consumption, current values, and temperature values) measured by the AMUs 406 or temperature sensors 408 are applied locally on the hardware level to control power consumption or current level of a corresponding processor 204 or a processing cluster 202. For example, the power samples are compared directly with a current throttling threshold ITRTto disable operation of a processor 204 or vary a power performance state (P-state) of the processor 204 (e.g., switch among a set of different predefined P-states). The power samples may be averaged over a time window or across two or more distinct AMUs to obtain an averaged power sample. The averaged power sample is compared with the current throttle threshold ITRTto disable operation of the processor 204 or vary the P-state of the processor 204. Such a local hardware-level power control operation is implemented on individual processors 204, processor clusters 202, and SoC 102, except that the current throttle threshold ITRTmay be predetermined by the power management engine.[0053] The components coupled to the power management engine 210 are partitioned into a plurality of power domains. For example, an SoC 102, a single processing cluster 202, or a processor 204 is one of the domains. Each power domain has a respective set of power sensors 402 including one or more AMUs 406 and one or more temperature sensors 408. In some implementations, both the one or more AMUs 406 and one or more temperature sensors 408 are physically located at the respective power domain. In some implementations, the one or more temperature sensors 408 are physically located at the respective power domain, while the one or more AMUs 406 are located at a portion of the PMIC 118 configured to provide the power rails to the respective power domain, and electrically coupled to the power rails on the PMIC 118. In some implementations, the power samples collected from each power domain are pooled and sent to the power management engine 210 by a regional AMU 406 or a temperature sensor hub 408 according to a global pooling frequency.[0054] The power management engine 210 includes an aggregator 410 and a throttle policy controller 412. The aggregator 410 is configured to collect the power samples collected by the distributed power sensors 402 or power samples consolidated from the collected power samples. In some implementations, the aggregator 410 generates a system power profile indicating overall power performance of the entire processor system 400 or a
combination of multiple power domains. An example of the system power profile is a system temperature profile (e.g., curve 602 in Figure 6) indicating a temporal variation of an average temperature of an entire SoC 102. In some implementations, the aggregator 410 generates one or more local power profiles each indicating local power performance of an individual domain. For example, a local temperature profile of a processor 204 (e.g., curve 606 in Figure 6) indicates a temporal variation of an average temperature determined from all of the temperature sensors 408 of the processor 204A in Figure 4. In some implementations, the aggregator 410 defines and/or adjusts a plurality of power throttling thresholds for the plurality of domains. The throttle policy controller 412 is configured to provide each of the power throttling thresholds to a respective domain or a respective subset of domains to control power consumption of the respective domain or subset of domains.[0055] In some implementations, each processor cluster 202 includes a global module 414 coupled to the one or more processors 204, cluster cache 206, and the plurality of power sensors 402. The global module 414 is configured to collect the power samples measured by the power sensors 402 and/or the power samples consolidated by the temperature sensor hub 408H or regional AMU 406 and send the collected power samples to the aggregator 410 of the power management engine 210. The global module 414 is also configured to receive the plurality of power throttling thresholds and control signals from the throttle policy controller 412 of the power management engine 210 and enable local power control operations including architecture throttling, clock throttling, performance point throttling, and activation of different predefined P-states. It is noted that, in some embodiments, throttling actions in each domain are controlled by the PDP 416 during a global power control operation and by the global module 414 during a local power control operation.[0056] For clarification, in some embodiments, the global power control operations are implemented by the entire SoC 102 or by a processor cluster 202, and involve the power management engine 210. The local power control operations are implemented locally in each processor cluster 202 or each processor 102 of the processing cluster 202, without involving the power management engine 210. Alternatively, a regional power control operation refers to power control operations associated with a subset (not all) of adjacent power domains (e.g., each processor cluster 202 in Figure 4), and the local power control operations are limited to each individual domain (e.g., processor 102).
[0057] Figures 5 A and 5B are block diagrams of power management systems 500 and520 configured to manage power of an SoC -based electronic device on a firmware level and a hardware level, in accordance with some implementations, respectively. Figure 5C illustrates a comprehensive power management scheme 560 in which power of an SoC -based electronic device 200 is managed on both a firmware level and a hardware level, in accordance with some implementations. As explained above with reference to Figure 2, the power management engine 210 is configured to enable both firmware-level and hardware-level power management tasks. Specifically, the power management engine 210 collects a plurality of power samples from a plurality of power domains 502 and generates one or more power profiles and a plurality of power throttling thresholds for the individual power domains 502, and each power profile is optionally a system power profile of the entire electronic device 200 or a combination of multiple domains (e.g., a processor cluster 202, an SoC 102) or a local power profile of an individual power domain (e.g., a processor 204). The plurality of power samples is measured by a plurality of power sensors 402 distributed across the domains or preprocessed from raw power samples measured by the power sensors 402. [0058] On the firmware level, the power management engine 210 implements a global power control operation having a first rate based on the one or more power profiles, e.g., by distributing (562) power budgets 504 among the plurality of power domains 502 and enabling operations of the plurality of power domains 502 according to the power budgets. Temporal lengths 506 of power management physical control loops (i.e., long control loops) range from tens of nanoseconds to several milliseconds. Typical temporal lengths 506 are in a range of 100 μs to 1 ms. In some implementations, the global power control operation is implemented jointly by the power management engine 210 and each domain’s Power and Debug Processor (PDP) 416. The global power control operation is implemented periodically according to a first loop period 508, e.g., every 100 μs or faster for an event associated with the PDP 416. In some implementations, the global power control operation includes selecting one of a plurality of predefined power performance states (P-states) 510 for each of a plurality of processors. Each of the P-states corresponds to predefined set of power and performance settings of the processors. The power budgets are distributed among the plurality of domains according to the predefined power and performance settings of the selected P-state 510 of each processor. In some implementations, the global power control operation includes determining what throttling operations to take on individual domains. The
power management engine 210 provides the plurality of power throttling thresholds 512 to different power domains 502 and enables the domains to implement such throttling operations.[0059] It is noted that in some implementations, the global power control operation is implemented in response to a local event occurring to a local power profile of a specific domain. The event may not be so critical that the response time associated with the global power control operation is sufficient to address the event. For example, an event occurring to a local power profile of a processor cluster 202 is associated with a PDF 416 of the processor cluster 202, and can be resolved by the global power control operation that is implemented with a loop period corresponding to 100 μs.[0060] On the hardware level, the individual domains 502 pre-load (564) the plurality of power throttling thresholds 512 set by the power management engine 210, and implement the local power control operations (e.g., the throttling actions) without involving extended firmware-level operations in real time. Referring to Figure 5C, the different power domains 502 monitored and controlled include two processing clusters 202A and 202B, a logic portion of the SoC 102, memory 104, and a socket-to-socket connector 404 (more specifically, a power rail VDD of the connector 404). The power management engine 210 or an individual power domain 502 monitors a local power profile and enables one or more local power control operations having second rates on the individual power domain 502 (e.g., the memory 104, PMIC 118, processing cluster 202-M) based on an associated power throttling threshold 512. For example, a current of the processing cluster 202 is monitored to exceed a predefined high peak current IMAXH(e.g., 1 A) for a duration longer than a predefined short burst time IBS (e.g., 20 μs). A current control signal is generated for the individual power domain 502 or the PMIC 118 to request reduction of the current of the processing cluster 202. The power throttling thresholds 512 (e.g., IMAXH) are predetermined for the local power control operations and can be applied to the individual domains 502 directly. No firmware-level power budget redistribution is needed in real time. In an example, a local power control operation is implemented periodically according to a second loop period 514, e.g., every 50 μs, and temporal lengths 516 of corresponding power management control loops are approximately 300 μs. As such, individual domains’ local power control operations respond more promptly on the hardware level than the global power control operation implemented on the firmware level.
[0061] In some implementations, for each domain 502, a local power control operation includes a throttling action selected from architecture throttling, power rail scaling, and clock throttling. Architecture throttling is applied to periodically block traffic to the respective domain including DRAM or suppress high current spikes in the respective domain including a processor. Clock throttling is applied to reduce a clock frequency of the respective domain. Performance point throttling is applied to adjust the clock frequency and power supply voltages of the respective domain jointly. In some situations, voltage regulators coupled to respective power rails of the respective domain are adjusted to vary power supply voltages and associated current injected into the respective power rails.[0062] Referring to Figure 5C, in some implementations, the global power control operation and local power control operations are applied jointly and correspond to different priorities in different situations. Global power control operation typically requires total budget calculation, subdomain budget partition, or budget reallocation. In some situations, when operations of the plurality of power domains 502 are enabled according to the power budgets, domain specific control loops are optionally applied with higher level algorithms having long control loops and complex computation. The power management engine 210 is involved to control the domain specific control loops on the firmware level. This also explains why the first rate of the global power control operation is less than the second rates of the local power control operations. Alternatively, in some situations, when operations of the plurality of power domains 502 are enabled according to the power budgets, the power throttling thresholds 512 applied in the local power control operations are applied or predefined P-states 510 are loaded according to predefined operation condition policies, which can effectively enhance the first rate of the global power control operation.[0063] In some implementations, a plurality of power samples are collected from a plurality of domains 502 according to a local sampling rate (e.g., 1 sample every 1 μs). Each local power profile includes a temporal sequence of local power samples, and each local power sample is combined from a respective subset of collected power samples of a respective domain according to a pooling rate. For example, each local power sample is an average of the respective subset of current samples measured for a current of a power rail of a processing processor 204, and averaged over a time window having a predefined temporal length (e.g., 10 μs). Such data collection and averaging are implemented on the hardware level, i.e., by individual domains 502, before or after the local power samples of each local
power profile are reported to the power management engine 210. Thus, in some implementations, the power management engine 210 has a period of a predefined controlling frequency that does not exceed the predefined temporal length. Local power control operations that are based on comparisons with power throttling thresholds have local controlling frequencies, and the local controlling frequencies do not exceed the predefined temporal length of the time window. The power management engine 210 is not directly involved in continuous periodic loops of local power value evaluation and power control on individual power domains, except that the power throttling thresholds 512 used in the local power control operation are predetermined by the power management engine 210 on the firmware level.[0064] In some situations, a loop control time constant of the firmware’s long control loop or the hardware’s short control loop is dynamically adjusted. For example, when an SoC 102 temperature has risen close to a maximal temperature TMAX, the loop control time constant is reduced to enable close monitoring. If the loop control time constant is too short for the global power control operation, primary control is passed to the local power control operations by individual domains. More details on an example temperature control process are described below with reference to Figure 6. In some situations, the power management engine 210 reduces the power throttling thresholds 512 of individual domains 502 to capture excursions first. In some situations, power control windows are shorted to allow less generous opportunistic performance boosts in place of more stringent limits enforcement. In some situations, the power management engine 210 uses throttle levels and event monitoring to detect excessive throttling activation by individual domains 502 and modify power throttling thresholds under its purview to attain a more efficient operations.[0065] Firmware-level power management control (Figure 5A) by the power management engine 210 corresponds to a long control loop. Hardware-level power management control (Figure 5B) by the individual domains corresponds to a short control loop. In the long control loop, hardware-level throttling mechanisms (e.g., implemented by the local power control operations) act as backup and fallback for the firmware-level power management control, thereby ensuring the plurality of domains 502 to comply with respective power limits, particularly, when the power management engine 210 has skipped a beat or when the long control loop has not properly identified an error size. In the short control loop, the hardware-level throttling mechanisms (e.g., implemented by the local power control
operations) act as primary control agents and provide short time-duration loop enforcement and fast responses. For example, a multi-level throttling mechanism is applied to implement a level of hysteresis and complements the firmware-level power management control.[0066] In some situations, power management is tasked with maximizing the electronic device’s performance on an incoming instruction stream, based on a given set of operating system (OS) performance directives, under a given set of external constraints. The incoming instruction stream varies greatly per domain, among processing cores 204, and even during execution from one program phase to another. The performance directives satisfy the OS performance level requirements and expectation. In some cases, the performance directives also satisfy performance and power preferences for each processing core 204 and/or cluster 202. Constraints may vary (e.g., correspond to different time windows) among different devices and domains (e.g., SoC, memory 104). Particularly, in an example, a processing core constraint has a time window that is too short to implement on a firmware level via the power management engine 210, and the time window can only be accomplished by applying the processing core constraint directly on a corresponding processing core. As such, power management of an SoC -based electronic device requires a combination of hardware and firmware policies, tracking physical constraints, OS requirements and directives, and instruction stream characteristics to optimize performance and power tradeoffs.[0067] In some implementations, an operating system uses a collaborative processor performance control (CPPC) infrastructure for requesting SoC performance changes. For example, the operating system and processors 204 of the SoC 102 can optimize power consumption through different p-states (power performance states), and the processors 204 are operated at different frequencies. A high-performance mode of a processor 204 reflects an absolute maximum performance the processor 204 may reach, assuming ideal conditions. This performance level does not sustain for long durations and may only be achievable by forcing other processors 204 or memory 104 into a specific state (e.g., an idle state). A nominal performance of a processor 204 reflects a maximum sustained performance level of the processor 204, assuming ideal operating conditions. In the absence of an external constraint (power, thermal, etc.), this is the performance level that the SoC-based electronic device maintains continuously. In some implementations, all processors 204 sustain their nominal performance mode simultaneously. A guaranteed performance mode of a processor
204 reflects a current maximum sustained performance level of the processor 204, taking into account all known external constraints (power budgeting, thermal constraints, DC or AC power source, etc.). In some implementations, all processors sustain their guaranteed performance levels simultaneously. The guaranteed performance level is required to fall in a performance range between a lowest performance level and a nominal performance level that corresponds to the nominal performance mode, inclusive. In some situations, the guaranteed performance mode is updated once per second to reflect thermal and power constraints. [0068] A processor system is configured to monitor the throttling actions controlled by the power management engine 210 over time and collaborate with the power management engine 210 in real time to maximize performance of the entire processor system while keeping temperature/power usage of its power domains within predefined operating ranges. In some implementations, if the processor system determines that the power management engine 210 is taking excessive throttling actions (e.g., in excess of a predefined percentage over a time duration), the processor system may reassign processes to different clusters 202 and/or processors 204 or bring on-line additional clusters 202 and/or SOCs 102 to reduce globally excessive workloads. For example, in some implementations, such a situation is determined to exist if a substantial percentage of the processing clusters 202 have one or more domains with a measured temperate that is consistently above a predefined threshold temperature TSET.[0069] Figure 6 is a temporal diagram of device temperatures 600 of an electronic device including an SoC 102, in accordance with some implementations. The electronic device is configured to monitor system temperature profiles 602 and 604 of an SoC 102 and a local temperature profile 606 of a processor 204. Global and local power control operations are applied to adjust power consumptions and thermal responses of the SoC 102 or processor 204 under different conditions. When the SoC 102 operates at a predefined operation frequency (e.g., 3.6 GHz), the temperature of the SoC 102 is configured to stabilize at a first threshold temperature TSET(e.g., 98-99 °C). In some implementations, temperature-based power control is applied to achieve stable performance close to the predefined operation frequency.[0070] In some situations (e.g., associated with the profile 602), the processors 204 of the SoC 102 are allowed to exceed power limits for short durations of time. The PMIC 118 can enhance a nominal current (e.g., ICC,nom) for a predefined time window (e.g., 135ICC,nomfor 300-400 μs, 1.2ICC,nomfor 1 ms). A maximal current tolerance ICC,MAXis disabled from limiting this enhanced current within the predefined time window. The temperature of the SoC 102 slowly increases towards a maximal temperature TMAXuntil a local power control operation 610 is applied to reduce a temperature increase rate. In some situations (e.g., associated with the profile 604), bursts of instruction sequences occur and cause a sudden increase of power consumption and a sudden temperature increase. Such bursts of instruction sequences normally settle and return to normal processing levels within a duration of time, e.g., 300-1000 μs. The temperature or power increase is monitored over a predefined window size LWcorresponding to the duration of time. 
If the temperature or power increase exceeds a predefined limit, the increase is determined as excessive, and throttling actions are taken to suppress the temperature or power increase.[0071] Specifically, a processor system (e.g., an SoC 102) includes one or more processing clusters 202 each of which includes one or more processors 204. The processors 204 of the SoC 102 are associated with a plurality of domains 502. A plurality of power samples are measured for the plurality of domains 502. In some embodiments, the plurality of power samples are averaged according to a global pooling rate at a local temperature sensor hub 408 or regional AMU 406. The measured or averaged power samples are sent to a power management engine 210. The power management engine 210 further processes the power samples associated with the plurality of domains to generate a system temperature profile 602. The system temperature profile 602 tracks a temperature level of the SoC 102, and therefore, includes a temporally-ordered sequence of system temperature values.[0072] During normal operation of the SoC 102, the power management engine 210 determines whether the system temperature profile 602 increases to and beyond the first temperature threshold TSET. If the system temperature profile 602 increases to and beyond the first temperature threshold TSETat a first time t1, the temperature values of the system temperature profile 602 are compared with a second temperature threshold TTHor a maximal temperature TMAXat a predefined controlling frequency (e.g., every 480 μs). If the respective system temperature value is between the first temperature threshold TSETand second temperature threshold TTH, a global power control operation is enabled to determine power budgets of the plurality of domains on a firmware level and enable operations of the plurality of domains according to the power budgets. If the respective system temperature value is greater than the second temperature threshold TTHor if the respective system temperature
value is greater than the first temperature threshold TSETfor longer than a threshold duration of time (e.g., 1 ms), a subset of domains are selected, and a respective power throttling action is applied to each of the subset of domains on a hardware level. By these means, when the respective system temperature value is greater than the second temperature threshold TTHor if the respective system temperature value is greater than the first temperature threshold TSETfor longer than a threshold duration of time (e.g., 1 ms), a short power control loop is applied on the hardware level to control the temperature value of the SoC 102 below the maximal temperature TMAX.[0073] For the system temperature profile 602, two global power control operations608A and 608B are applied on the firmware level within the threshold duration of time WT(e.g., 1 μs). The threshold duration of time WTis the longest duration of time allowed at a corresponding enhanced current of the SoC 102. After the threshold duration of time WTlocal power control operations 610 follow the two global power control operations 608 A and 608B to control the temperature value of the SoC 102 at a fester rate. The global power control operations 608 A and 608B have an example reaction time of 100 μs, and the local power control operations 610 have an example reaction time of 20 μs. In some embodiments, the temperature value of the system temperature profile 602 increases beyond a hard shutdown temperature THS, and a hard shutdown operation is applied to different power domains of the SoC 102 to cool down the SoC 102.[0074] Upon a burst of instructions in the SoC 102, the system temperature profile 602 changes to an alternative system temperature profile 604 that has a greater temperature increase rate. In an example, the system temperature profiles 602 and 604 correspond to overall power consumptions of 700 W and 900 W by the SoC 102, respectively. A predefined temperature increase limitΔT in the predefined window size LWcorresponds to an upper limit for a tolerable burst of instructions. In some implementations, the predefined temperature increase limitΔT is programmable. Beyond the predefined temperature increase limitΔT, prompt local power control operations (e.g., throttling actions) need to be applied. Specifically, in some implementations, a first temperature value T1and a second temperature value T2correspond to a start and an end of a time window having the predefined window size LWon the system temperature profile 604, respectively. The first temperature value T1is optionally equal to the first threshold temperature TSET, while the second temperature value T2is less than the second threshold temperature TTH. A temperature difference between the first
and second temperature values T1and T2is determined and compared with the predefined temperature increase limit ΔT, indicating whether a power surge occurs. If the temperature difference exceeds the predefined temperature increase limit ΔT, a subset of domains of the SoC 102 are selected, and a respective local power control operation (e.g., a power throttling action) is applied to each of the subset of domains on the hardware level. Examples of the respective power throttling action include architecture throttling, clock throttling, and performance point throttling. By these means, when the burst of instructions occurs in the SoC 102, the temperature value of the SoC 102 cannot exceed the maximal temperature TMAX, and the local power control operation is applied to bring down the power consumption, e.g., from 900W to 700W.[0075] During both normal operation and the burst of sequences of the SoC 102, the local power control operations correspond to a short power control loop intended to address power bursts. The short power control loop ensures that the temperature value of the SoC 102 does not increase beyond the maximal temperature TMAXin the threshold duration of time WTfollowing the first time t1when the SoC 102 reaches the first threshold temperature TSET. The global power control operations correspond to a long power control loop intended to maintain an average power level at a power limit corresponding to the first threshold temperature TSET. [0076] Additionally, in some situations, the burst of instructions occurs to a specific processor 204 in a first domain 502 as well. A local power profile 606 of the first domain 502 is obtained based on a first subset of the plurality of power values collected at the first domain 502. A predefined temperature increase limit ΔT' in the predefined window size LWalso corresponds to an upper limit for a tolerable burst of instructions of the processor 204. In some implementations, the predefined temperature increase limit ΔT' is programmable. Beyond the predefined temperature increase limit ΔT', prompt local power control operations (e.g., throttling actions) need to be applied to the first domain. In some implementations, a first temperature value T1' and a second temperature value T2' are identified on the local power profile 606, and correspond to a start and an end of a time window having the predefined window size LWon the local power profile 606, respectively. The first temperature value T1' is optionally equal to the first threshold temperature TSET, while the second temperature value T2' is less than the second threshold temperature TTH. A temperature difference is determined between the first and second temperature values and compared with the predefined temperature increase limit ΔT', indicating whether a power surge occurs to the
processor 204 on the first domain 502. If the temperature difference exceeds the predefined temperature increase limit, a local power control operation (e.g., a power throttling action) is applied to the processor 204 of the first domain on the hardware level.[0077] The system temperature profiles 602 and 604 and local temperature profile 606 do not reflect real-time power consumption performance of a corresponding processor system, because a temperature response is always delayed from a power consumption or current experienced by and measured from the processor system. In some implementations not shown in Figure 6, a system power profile is generated to monitor power consumption or current values of an SoC over a time duration directly, and a local power profile is generated to monitor power consumption or current values of a first domain (e.g., a processor 204) over a time duration directly. A power consumption or current increase (e.g., from P1to P2, from I1to I2) is monitored within the predefined window size LWto determine whether to initiate local power control operations (e.g., hard throttling) on a subset of domains or the first domain. Also, second and third criteria that are based on temperature are adjusted to be based on power consumption and current levels indicated by the system power profile. The second criterion is not as critical as the third criterion, and the corresponding power consumption and current Levels allow “soft” throttling initiated from the firmware level. In contrast, the third criterion triggers “hard” throttling on the hardware level, thereby controlling the power consumption and current levels below an upper limit at a much faster rate than “soft” throttling.[0078] In some situations, prior to the first time t1, the temperature value of the system temperature profile is compared with the first threshold temperature TSETconstantly according to a temperature monitoring frequency. After the first time t1, such a comparison at the temperature monitoring frequency is suspended, while a comparison with the second threshold temperature TTHoccurs with the predefined controlling frequency. In some implementations, when the respective system temperature value drops below the first temperature threshold TTH, the comparison operation is resumed, i.e., the temperature value of the system temperature profile is compared again with the first threshold temperature TSETconstantly according to the temperature monitoring frequency. Also, when the respective system temperature value is below the first temperature threshold TTH, the temperature value of the system temperature profile is not compared with the second threshold temperature TTHaccording to the predefined controlling frequency.
[0079] It is noted that the plurality of power samples are collected from the first domain according to a local sampling rate (e.g., every 10 μs). Each system temperature value is combined from a respective subset of power samples of the plurality of domains according to a global pooling rate (e.g., every 100 μs). The local sampling rate is greater than the global pooling rate, and the global pooling rate is greater than the predefined controlling frequency (e.g., every 500 μs).[0080] Figure 7 is a flow diagram of a method 700 of managing power consumption of an SoC -based electronic device, in accordance with some implementations. The method 700 is implemented at a processor system having a plurality of domains. In some implementations, the processor system includes a plurality of processor units, one or more memory units, and power management integrated circuit (PMIC), and each of the plurality of domains includes a distinct subset of the processor system. A plurality of power samples are collected (702) from the plurality of domains over a time duration. Each power sample includes at least one of a temperature, power consumption, and current value associated with a respective domain. In an example, each power sample includes all of a temperature, power consumption, current value associated with a processor 204 at a specific time. Optionally, these power samples are measured by power sensors located at the plurality of domains and sent to a power management engine 210. Optionally, power samples measured by power sensors are preprocessed at the domains, a hub (e.g., a regional AMU 406B and a temperature sensor hub 408C in Figure 4), or a global module 414, and the preprocessed power samples are sent to the power management engine 210. Optionally, a subset of the power samples are estimated, e.g., based on a set of power samples measured concurrently from adjacent power sensors or a history of power samples.[0081] A subset of the plurality of power samples of the plurality of domains are combined (704) to generate a system temperature profile 602 including a plurality of system temperature values. The power management engine 210 determines (706) whether the system temperature profile 602 satisfies a first criterion. In accordance with a determination (708) that the system temperature profile 602 satisfies the first criterion at a first time t1, at a predefined controlling frequency, the power management engine 210 determines (710) whether a respective system temperature value of the system temperature profile 602 satisfies a second criterion or a third criterion in real time. In some implementations, the respective system temperature value belongs to a temporally-ordered sequence of system temperature
values that are monitored subsequently to the first time t1on the system temperature profile 602 according to the predefined controlling frequency.[0082] In accordance with a determination that the respective system temperature value satisfies a second criterion, the power management engine 210 determines (712) power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets. In some implementations, these operations include power throttling actions implemented on individual domains, and however, are initiated on the firmware level and correspond to long control loops, e.g., in a global power control operation 608A or 608B in Figure 6. In accordance with a determination that the respective system temperature value satisfies a third criterion, the power management engine 210 selects (714) a subset of domains and enables a respective power throttling action to each of the subset of domains directly on a hardware level. This power throttling action is initiated directly on the hardware level and correspond to a short control loop, e.g., in a local power control operation 610 in Figure 6. Specifically, in some implementations, for each of the subset of domains, the respective throttling action includes (716) one or more of: architecture throttling, power rail scaling, and clock throttling. Architecture throttling is applied to periodically block traffic to the respective domain including DRAM or suppress high current spikes in the respective domain including a processor unit. Clock throttling is applied to reduce a clock frequency of the respective domain. Performance point throttling is applied to adjust the clock frequency and power supply voltages of the respective domain jointly.[0083] In some implementations, a first temperature value T1and a second temperature value T2are identified on the system temperature profile 604, and correspond to a start and an end of a time window having a predefined window size LW, respectively. The power management engine 210 determines a temperature difference between the first and second temperature values and whether the temperature difference exceeds a predefined temperature increase limit. In some implementations, the predefined temperature increase limit is programmable. In accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, which is optionally programmable, the subset of domains are selected to apply the respective power throttling action directly on the hardware level. The short control loops are applied to suppress the temperature increase,
Thereby ensuring that the temperature value does not cross a maximal temperature TMAXwithin threshold duration of time WTsubsequent to the first time t1.[0084] Alternatively, in some implementations, a first power value P1or I1and a second power value P2or I2are identified on a system power profile of power consumption or current values of the processor system (e.g., an SoC 102), and correspond to a start and an end of a time window having a predefined window size LW, respectively. The power management engine 210 determines a power difference between the first and second power values and whether the power difference exceeds a predefined power increase limit, which is optionally programmable. In accordance with a determination that the power difference exceeds the predefined power increase limit, the subset of domains are selected to apply the respective power throttling action directly on the hardware level. The short control loops are applied to suppress a power or current burst, thereby ensuring that the power consumption or current value does not cross a maximal power PMAXor IMAXwithin a threshold duration of time WTsubsequent to the first time t1.[0085] In some implementations, a local power profile 606 is generated for a first domain (e.g., a processor 204) based on a first subset of the plurality of power values collected at the first domain. A first temperature value T1' and a second temperature value T2' are identified on the local power profile 606, and correspond to a start and an end of a time window having a predefined window size, respectively. A temperature difference is determined between the first and second temperature values T1' and T2' and compared with a predefined temperature increase limit. In some implementations, the predefined temperature increase limit is programmable. In accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, a power throttling action is applied to the first domain directly on the hardware level. The short control loops are applied to suppress the temperature increase. Alternatively, in some implementations, the local power profile 606 is related to power consumption and current values of the first domain. A first power value P1' or I1' and a second power value P2' or I2' are identified on the local power profile 606, and correspond to a start and an end of a time window having a predefined window size, respectively. A power difference is determined between the first and second power values and compared with a predefined power increase limit, which is optionally programmable. In accordance with a determination that the power difference exceeds the predefined power increase limit, a power throttling action is applied to the first domain
directly on the hardware level. The short control loops are applied to suppress the power consumption or current increase.[0086] In some implementations, for each of the subset of domains, the respective throttling action is associated with a throttling threshold for a subset of power values corresponding to the respective domain. In accordance with a predefined power management policy, the power management engine 210 determines the throttling threshold associated with the respective throttling action of the respective domain on the firmware level. In accordance with a determination that the subset of power values of the respective domain exceeds the throttling threshold, the respective domain implements the respective throttling action on the hardware level.[0087] In some implementations, the power management engine 210 determines a total power budget for the entire processor system and dynamically assigns a respective portion of the total power budget to each of the plurality of domains. The power budgets of the domains are redistributed based on activity levels of the domains on the firmware level, and each domain is instructed to adjust its operation locally on the hardware level according to the assigned portion of the total power budget.[0088] In some implementations, based on the respective system temperature value, one of a plurality of predefined power performance states (P-states) is selected for each of a plurality of processors, and each of the P-states corresponds to a predefined set of power and performance settings of the processors. The power budgets are redistributed among the plurality of domains according to the predefined set of power and performance settings of the selected P-state for each of the plurality of processors.[0089] In some implementations, the first criterion requires that the system temperature profile increases to and beyond a first temperature threshold TSETat a corresponding time. The second criterion requires that a system temperature value at a corresponding time is between the first temperature threshold TSETand a second temperature threshold TTH. The third criterion requires that a system temperature value at a corresponding time is greater than the second temperature threshold TTHor that the system temperature value stays above the first temperature threshold TSETfor an extended time longer than a threshold duration of time. The first temperature threshold TSETis less than the second temperature threshold TTH, the second temperature threshold TTHless than a maximal temperature TMAXbelow which the processor system is controlled.
[0090] In some implementations, prior to the first time t1, whether the system temperature profile satisfies the first criterion is monitored according to a temperature monitoring frequency. After the first time t1, the power management engine 210 suspends determining whether the system temperature profile satisfies the first criterion according to the temperature monitoring frequency. In accordance with a determination that the respective system temperature value is below the first temperature threshold TTH, the power management engine 210 resumes determining whether the system temperature profile satisfies the first criterion according to the temperature monitoring frequency, and aborts determining whether the respective system temperature value satisfies the second and third criteria according to the predefined controlling frequency.[0091] In some implementations, the plurality of power samples are collected from the plurality of domains according to a local sampling rate. Each system temperature value is combined from a respective subset of power samples of the plurality of domains according to a global pooling rate. The local sampling rate is greater than the global pooling rate, and the global pooling rate is greater than the predefined controlling frequency.[0092] In some implementations, each domain is powered by one or more power rails that are driven by PMIC. For each power rail, a respective set of current values are collected for each power rail. In accordance with a determination that the respective set of current values have been greater than a first threshold current for a first duration of time (e.g., 1.35ICC,nomfor 300-400 μs) greater than a second threshold current for a second duration of time (e.g., 1.2ZICC,nomfor 1 ms), a power throttling action is implemented on the respective power rail of the respective domain. The first threshold current is greater than the second threshold current, and the first duration of time is shorter than the second duration of time. [0093] Temperature profiles do not reflect real-time power consumption or current performance of a processor system, because a temperature response is delayed from power consumption or current values experienced by and measured from the processor system. In some situations, a power management method is implemented to manage power of a processor system having a plurality of domains based on a system power profile directly. The system power profile includes a plurality of system power values that are not limited to temperature values and may be current values or power consumption values. A plurality of power samples are collected from the plurality of domains over a time duration. Each power sample includes at least one of temperature, power consumption, and current value associated
with a respective domain. A subset of the plurality of power samples of the plurality of domains are combined to generate a system power profile including a plurality of system power values (power consumptions or current values). A power management engine determines whether the system power profile satisfies a first criterion. In accordance with a determination that the system power profile satisfies the first criterion at a first time t1, the power management engine determines, at a predefined controlling frequency and in real time, whether a respective system power value of the system power profile satisfies a second criterion or a third criterion. In accordance with a determination that the respective system power value satisfies the second criterion, the power management engine determines power budgets of the plurality of domains on a firmware level, and enables operations of the plurality of domains according to the power budgets. In some embodiments, such operations my include throttling actions. In accordance with a determination that the respective system power value satisfies the third criterion, the power management engine determines selects a subset of domains and applies a respective power throttling action to each of the subset of domains on a hardware level.[0094] The first criterion is associated with initiation of a critical performance regime in which power performance of the processor system needs to be closely monitored. Both the second and second criteria are more critical than the first criterion, while the second criterion is not as critical as the third criterion. When the second criterion is satisfied, head room from a performance limit (e.g., a maximal temperature TMAX, a largest power burst) is still available, allowing the power management engine 210 to apply the global power control operation to control the power performance of the processor system using “soft” throttling from the firmware level. In contrast, when the third criterion is satisfied, the head room from the performance limit is limited, and “hard” throttling actions have to be taken directly in the hardware level to reduce temperature, power consumption or current values immediately on individual domains. The first rate of firmware-level “soft” throttling (e.g., ~ 1 ms) is not as fast as the second rates of the hardware-level “hard” throttling actions (e.g., ~ 50-100 μs). As such, “soft” or “hard” throttling actions can be applied based on an urgency level of a power condition of the processor system as indicated by the system power profile (e.g., the system temperature profile 602 and 604).[0095] Different types of temperature, power consumption, and current profiles can be monitored jointly to control temperature, power consumption, and/or current performance
of individual domains, a region of domains, or a processor system. In some implementations, referring to Figure 6, a system or local temperature profile is monitored to control temperature, power consumption, and/or current performance of a processor system (e.g., an SoC 102) or a domain (e.g., a processor 204), respectively. In some implementations, a power consumption or current profile is monitored for the processor system or individual domain to control power consumption and current performance of the processor system or individual. Optionally, a power consumption profile is monitored to control the power consumption performance of the processor system or individual domain directly and without involving monitoring of temperature or current values. Optionally, a current profile is monitored to control the current performance of the processor system or individual domain directly and without involving monitoring of temperature or power consumption.[0096] Figure 8 is a flow diagram of a method 800 of managing power consumption of an SoC -based electronic device, in accordance with some implementations. The method 800 is implemented at a power management engine of an electronic system. In some implementations, the processor system includes a plurality of processor units, one or more memory units, and power management integrated circuit (PMIC), and each of the plurality of domains includes a distinct subset of the processor system. A plurality of power samples are received (802) from the plurality of domains over a time duration. Each power sample includes at least one of a temperature, power consumption, and current value associated with a respective domain. In an example, each power sample includes all of a temperature, power consumption, current value associated with a processor 204 at a specific time. Optionally, these power samples are measured by power sensors located at the plurality of domains and sent to a power management engine 210. Optionally, power samples measured by power sensors are preprocessed at the domains, a hub (e.g., a regional AMU 406B and a temperature sensor hub 408C in Figure 4), or a global module 414, and the preprocessed power samples are sent to the power management engine 210. Optionally, a subset of the power samples are estimated, e.g., based on a set of power samples measured concurrently from adjacent power sensors or a history of power samples.[0097] The power samples are processed (804) based on locations of the corresponding power sensors to generate one or more power profiles (e.g., profiles 602-606 in Figure 6) and a plurality of power throttling thresholds. Based on the one or more power profiles, a global power control operation having a first rate is implemented (806) by
determining power budgets of a plurality of power domains on a firmware level and enabling operations of the plurality of power domains according to the power budgets. Based on the one or more power profiles, the plurality of power domains are enabled (808) to implement a plurality of local power control operations based on the plurality of power throttling thresholds on a hardware level. The local power control operations have second rates greater than the first rate.[0098] In some implementations, each processor cluster 202 includes one or more respective processors 204 and a cluster cache 206. The first memory 208 is coupled to the one or more processing clusters to receive data access requests from the one or more processor clusters 202. The PMIC is configured to provide a plurality of power rails to the one or more processor clusters 202 and second memory 104. The second memory 104 is configured to receive data retrieval requests from the plurality of processing clusters 202 to the first memory 208 that are not satisfied by the first memory 208. The plurality of power sensors 408 include a plurality of temperature sensors for measuring temperature values and a plurality of activity monitor units (AMUs) 406 for measuring power consumption and current values.[0099] In some implementations, each of the power domains includes a distinct subset of the one or more processor clusters 202, first memory 208, PMIC 118, and second memory 104. Each local power control operation is configured to be implemented on a respective power domain based on a corresponding local power profile generated from a subset of power samples collected by a subset of power sensors disposed on the respective power domain. The respective power domain is configured to receive a respective power throttling threshold from the power management engine 210. The one or more power profiles include the corresponding local power profile.[00100] In some implementations, the one or more processor clusters 202 and first memory 208 are integrated on a system on a chip (SoC) 102, and the SoC 102 is integrated with the PMIC 118 in an integrated semiconductor device 300.[00101] In some implementations, each domain is driven by one or more power rails. For each power rail, a respective set of current values is collected. In accordance with a determination that the respective set of current values have been greater than a first threshold current for a first duration of time or greater than a second threshold current for a second duration of time, a power throttling action is enabled on the respective power rail of the
respective domain. The first threshold current is greater than the second threshold current, and the first duration of time is shorter than the second duration of time.[00102] It should be understood that the particular order in which the operations in Figures 7 and 8 have been described are merely exemplary and are not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to manage power consumption of an SoC -based electronic device 200 as described herein. Additionally, it should be noted that details of other processes described above with respect to Figures 1-6 are also applicable in an analogous manner to method 700 or 800 described above with respect to Figure 7 or 8. For brevity, these details are not repeated here.[00103] Implementation examples are described in at least the following numbered clauses:[00104] Clause 1. A power management method, comprising, at a processor system having a plurality of domains: collecting a plurality of power samples from the plurality of domains over a time duration, each power sample including at least one of temperature, power consumption, and current values associated with a respective domain; combining a subset of the plurality of power samples of the plurality of domains to generate a system temperature profile including a plurality of system temperature values; determining whether the system temperature profile satisfies a first criterion; and in accordance with a determination that the system temperature profile satisfies the first criterion at a first time t1, at a predefined controlling frequency: in real time, determining whether a respective system temperature value of the system temperature profile satisfies a second criterion or a third criterion; in accordance with a determination that the respective system temperature value satisfies the second criterion, determining power budgets of the plurality of domains on a firmware level and enabling operations of the plurality of domains according to the power budgets; and in accordance with a determination that the respective system temperature value satisfies the third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains directly on a hardware level.[00105] Clause 2. The method of clause 1, further comprising: generating a local power profile of a first domain based on a first subset of the plurality of power values collected at the first domain; identifying, on the local power profile, a first temperature value and a second temperature value corresponding to a start and an end of a time window having
a predefined window size, respectively; determining a temperature difference between the first and second temperature values; determining whether the temperature difference exceeds a predefined temperature increase limit; and in accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, applying a power throttling action to the first domain on the hardware level.[00106] Clause 3. The method of clause 1 or 2, further comprising: identifying, on the system temperature profile, a first temperature value and a second temperature value corresponding to a start and an end of a time window having a predefined window size, respectively; determining a temperature difference between the first and second temperature values; determining whether the temperature difference exceeds a predefined temperature increase limit; and in accordance with a determination that the temperature difference exceeds the predefined temperature increase limit, selecting the subset of domains and applying the respective power throttling action to each of the subset of domains on the hardware level.[00107] Clause 4. The method of any of clauses 1-3, wherein: for each of the subset of domains, the respective throttling action includes one or more of: architecture throttling, power rail scaling, and clock throttling; architecture throttling is applied to periodically block traffic to the respective domain including DRAM or suppress high current spikes in the respective domain including a processor unit; clock throttling is applied to reduce a clock frequency of the respective domain; and performance point throttling is applied to adjust the clock frequency and power supply voltages of the respective domain jointly.[00108] Clause 5. The method of any of clauses 1-4, wherein for each of the subset of domains, the respective throttling action is associated with a throttling threshold for a subset of power values corresponding to the respective domain, the method further comprising: in accordance with a predefined power management policy: determining by a power management engine the throttling threshold associated with the respective throttling action of the respective domain; and in accordance with a determination that the subset of power values of the respective domain exceeds the throttling threshold, implementing the respective throttling action on the respective domain.[00109] Clause 6. The method of any of clauses 1-5, further comprising: determining a total power budget for the entire processor system; and dynamically assigning a respective portion of the total power budget to each of the plurality of domains.
[00110] Clause 7. The method of any of clauses 1-6, determining the power budgets among the plurality of domains on the firmware level further comprising: based on the respective system temperature value, selecting one of a plurality of predefined power performance states (P-states) for each of a plurality of processors, each of the P-states corresponding to a predefined set of power and performance settings of the processors; and redistributing the power budgets among the plurality of domains according to the predefined set of power and performance settings of the selected P-state for each of the plurality of processors.[00111] Clause 8. The method of any of clauses 1-7, wherein: the first criterion requires that the system temperature profile increases to and beyond a first temperature threshold TSET at a corresponding time; the second criterion requires that a system temperature value at a corresponding time is between the first temperature threshold TSET and a second temperature threshold TTH; the third criterion requires that a system temperature value at a corresponding time is greater than the second temperature threshold TTH or that the system temperature value stays above the first temperature threshold TSET for an extended time longer than a threshold duration of time; the first temperature threshold TSET is less than the second temperature threshold TTH, the second temperature threshold TTH less than a maximal temperature TMAX below which the processor system is controlled.[00112] Clause 9. The method of any of clauses 1-8, wherein: the plurality of power samples are collected from the plurality of domains according to a local sampling rate; each system temperature value is combined from a respective subset of power samples of the plurality of domains according to a global pooling rate; and the local sampling rate is greater than the global pooling rate, and the global pooling rate is greater than the predefined controlling frequency.[00113] Clause 10. The method of any of clauses 1-9, wherein each domain is driven by one or more power rails, the method further comprising for each power rail: collecting a respective set of current values; and in accordance with a determination that the respective set of current values have been greater than a first threshold current for a first duration of time or greater than a second threshold current for a second duration of time, enabling a power throttling action on the respective power rail of the respective domain; wherein the first
threshold current is greater than the second threshold current, and the first duration of time is shorter than the second duration of time.[00114] Clause 11. The method of any of clauses 1-10, wherein the respective system temperature value belongs to a temporally-ordered sequence of system temperature values that are monitored subsequently to the first time t1on the system temperature profile according to the predefined controlling frequency.[00115] Clause 12. The method of any of clauses 1-11, wherein the processor system includes a plurality of processor units, one or more memory units, and power management integrated circuit (PMIC), and each of the plurality of domains includes a distinct subset of the processor system.[00116] Clause 13. A power management method, comprising, at a processor system having a plurality of domains: collecting a plurality of power samples from the plurality of domains over a time duration, each power sample including a temperature, power consumption, or current value associated with a respective domain; combining a subset of the plurality of power samples of the plurality of domains to generate a system power profile including a plurality of system power values; determining whether the system power profile satisfies a first criterion; and in accordance with a determination that the system power profile satisfies the first criterion at a first time tl, at a predefined controlling frequency: in real time, determining whether a respective system power value of the system power profile satisfies a second criterion or a third criterion; in accordance with a determination that the respective system power value satisfies the second criterion, determining power budgets of the plurality of domains on a firmware level, and enabling operations of the plurality of domains according to the power budgets; and in accordance with a determination that the respective system power value satisfies the third criterion, selecting a subset of domains and applying a respective power throttling action to each of the subset of domains on a hardware level.[00117] Clause 14. The method of clause 13, further comprising: generating a local power profile of a first domain based on a first subset of the plurality of power values collected at the first domain; identifying, on the local power profile, a first power value and a second power value corresponding to a start and an end of a time window having a predefined window size, respectively; determining a power difference between the first and second power values; determining whether the power difference exceeds a predefined power increase limit; and in accordance with a determination that the power difference exceeds the
predefined power increase limit, applying a power throttling action to the first domain on the hardware level.[00118] Clause 15. The method of clause 13 or 14, further comprising: identifying, on the system power profile, a first power value and a second power value corresponding to a start and an end of a time window having a predefined window size, respectively; determining a power difference between the first and second power values; determining whether the power difference exceeds a predefined power increase limit; and in accordance with a determination that the power difference exceeds the predefined power increase limit, selecting the subset of domains and applying the respective power throttling action to each of the subset of domains on the hardware level.[00119] Clause 16. The method of any of clauses 13-15, wherein: for each of the subset of domains, the respective throttling action includes one or more of: architecture throttling, power rail scaling, and clock throttling; architecture throttling is applied to periodically block traffic to the respective domain including DRAM or suppress high current spikes in the respective domain including a processor unit; clock throttling is applied to reduce a clock frequency of the respective domain; and performance point throttling is applied to adjust the clock frequency and power supply voltages of the respective domain jointly.[00120] Clause 17. The method of any of clauses 13-16, wherein for each of the subset of domains, the respective throttling action is associated with a throttling threshold for a subset of power values corresponding to the respective domain, the method further comprising: in accordance with a predefined power management policy: determining by a power management engine the throttling threshold associated with the respective throttling action of the respective domain; and in accordance with a determination that the subset of power values of the respective domain exceeds the throttling threshold, implementing the respective throttling action on the respective domain.[00121] Clause 18. The method of any of clauses 13-17, further comprising: determining a total power budget for the entire processor system; and dynamically assigning a respective portion of the total power budget to each of the plurality of domains.[00122] Clause 19. The method of any of clauses 13-18, determining the power budgets among the plurality of domains on the firmware level further comprising: based on the respective system power value, selecting one of a plurality of predefined power
performance states (P-states) for each of a plurality of processors, each of the P-states corresponding to a predefined set of power and performance settings of the processors; and redistributing the power budgets among the plurality of domains according to the predefined set of power and performance settings of the selected P-state for each of the plurality of processors.[00123] Clause 20. The method of any of clauses 13-19, wherein: the first criterion requires that the system power profile increases to and beyond a first power threshold PSET at a corresponding time; the second criterion requires that a system power value at a corresponding time is between the first power threshold PSET and a second power threshold PTH; the third criterion requires that a system power value at a corresponding time is greater than the second power threshold PTH or that the system power value stays above the first power threshold PSET for an extended time longer than a threshold duration of time; the first power threshold PSET is less than the second power threshold PTH, the second power threshold PTH less than a maximal power threshold PMAX below which the processor system is controlled.[00124] Clause 21. The method of any of clauses 13-20, wherein: the plurality of power samples are collected from the plurality of domains according to a local sampling rate; each system temperature value is combined from a respective subset of power samples of the plurality of domains according to a global pooling rate; and the local sampling rate is greater than the global pooling rate, and the global pooling rate is greater than the predefined controlling frequency.[00125] Clause 22. The method of any of clauses 13-21, wherein each domain is driven by one or more power rails, the method further comprising for each power rail: collecting a respective set of current values; and in accordance with a determination that the respective set of current values have been greater than a first threshold current for a first duration of time or greater than a second threshold current for a second duration of time, enabling a power throttling action on the respective power rail of the respective domain; wherein the first threshold current is greater than the second threshold current, and the first duration of time is shorter than the second duration of time.[00126] Clause 23. The method of any of clauses 13-22, wherein the respective system power value belongs to a temporally-ordered sequence of system power values that are
monitored subsequently to the first time t1on the system power profile according to the predefined controlling frequency.[00127] Clause 24. The method of any of clauses 13-23, wherein the processor system includes a plurality of processor units, one or more memory units, and power management integrated circuit (PMIC), and each of the plurality of domains includes a distinct subset of the processor system.[00128] Clause 25. An electronic system, comprising: one or more processor clusters; a plurality of power sensors distributed on the electronic system, wherein the power sensors are configured to collect a plurality of power samples from a plurality of power domains of the electronic system, each power sample including at least one of temperature, power consumption, and current values associated with a respective power domain; and a power management engine coupled to the plurality of power sensors, wherein the power management engine is configured to perform a method in any of clauses 1-24.[00129] Clause 26. A non-transitory computer-readable storage medium, having instructions stored thereon, which when executed by a processor system having a plurality of domains cause the processor system to perform a method in any of clauses 1-24.[00130] Clause 27. An apparatus for managing power at a processor system having a plurality of domains, the apparatus comprising means for performing a method in any of clauses 1-24.[00131] The terminology used in the description of the various described implementations herein is for the purpose of describing particular implementations only and is not intended to be limiting. As used in the description of the various described implementations and the appended claims, the singular forms “a”, “an" and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Additionally, it will be understood that, although the terms “first,” “second,” etc. may be used
herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.[00132] As used herein, the term “if’ is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected," depending on the context.[00133] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain principles of operation and practical applications, to thereby enable others skilled in the art.[00134] Although various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages can be implemented in hardware, firmware, software or any combination thereof. |
A system provides a digital multi-bit connection between two or more graphics adapters. Each graphics adapter is manufactured as a printed circuit board including a finger-type edge connector. When two or more graphics adapters are installed in a system the edge connectors of each graphics adapter may be coupled to each other via a connection device that provides a portion of the digital multi-bit connection. The remainder of the digital multi-bit connection is provided by conductive traces coupling each finger of the edge connector to a graphics processing unit that is affixed to the graphics adapter. The connection device may be installed by an end-user as each additional graphics adapter is installed in the system. |
CLAIMS: 1. A graphics adapter printed circuit board, comprising: a system connector configured to couple the graphics adapter printed circuit board to a motherboard; a first graphics processing unit affixed to the graphics adapter printed circuit board and coupled to the system connector; and a graphics edge connector, coupled to the first graphics processing unit and configured to couple to a first socket within a removable connection device, the removable connection device providing a portion of a multi-bit digital connection between the first graphics processing unit and a second graphics processing unit affixed to another graphics adapter printed circuit board when the edge connector is coupled to the first socket within the removable connection device, the removable connection device including a second socket, wherein the first socket and the second socket are electrically connected to provide the portion of the multi-bit connection between the first graphics processing unit and the second graphics processing unit. 2. The graphics adapter printed circuit board of claim 1 , wherein the graphics edge connector is positioned on a side of the graphics adapter printed circuit board opposing the system connector. 3. The graphics adapter printed circuit board of claim 1 , wherein the graphics edge connector is positioned adjacent to the system connector on a side of the graphics adapter printed circuit board. 4. The graphics adapter printed circuit board of claim 1 , wherein the graphics edge connector is positioned adjacent to a display output connector on a side of the graphics adapter printed circuit board. 5. The graphics adapter printed circuit board of claim 1 , wherein the graphics edge connector is positioned on a side of the graphics adapter printed circuit board opposing a display output connector. 6. The graphics adapter printed circuit board of claim 1 , further comprising an additional graphics edge connector configured to couple to another removable connection device. 7. The graphics adapter printed circuit board of claim 1 , wherein the connection device includes a printed circuit board and the first socket and the second socket are affixed to the printed circuit board. 8. The graphics adapter printed circuit board of claim 1 , wherein the connection device includes a flexible multi-bit cable and the first socket and the second socket are affixed to opposing ends of the multi-bit cable. 9. The graphics adapter printed circuit board of claim 7, wherein the flexible multi-bit cable is sized to span between adjacent slots of a motherboard, each slot configured to receive the graphics adapter printed circuit board. |
MULTIPLE GRAPHICS ADAPTER CONNECTION SYSTEMSBACKGROUND OF THE INVENTION Field of the Invention[0001] One or more aspects of the invention generally relate to graphics processing, and more particularly to connecting graphics adapters in a multi-adapter graphics processing system.Description of the Related Art [0002] Conventional graphics adapters are not configured such that an end-user having an existing system with a single graphics adapter may install an additional graphics adapter to improve performance for graphics processing for a single display device. Prior art graphics processing systems, such as 3dfx's VooDoo2(TM) graphics adapter product configured for scan line interleave (SLI) or Metabyte/Wicked 3D's parallel graphics configuration (PGC), increase graphics processing performance by using two graphics adapters in a fixed configuration that is not modular such that an end-user can install the second graphics adapter.[0003] The prior art graphics processing systems use a proprietary interface and cabling to transfer an analog pixel stream produced by each graphics adapter to a pixel multiplexing device which selects one of the two analog pixel streams for output to the single display device. Because the prior art configuration is fixed, the end-user cannot install and connect an additional graphics adapter to the proprietary interface to improve graphics processing performance for the single display device. Furthermore, because the pixel streams are combined in the analog domain visual artifacts may result due to mismatches between digital to analog converters on each graphics adapter.[0004] Accordingly, it is desirable to facilitate end-user installation of additional graphics adapters including installation of a multi-bit digital connection between two or more graphics adapters to improve graphics processing performance. SUMMARY OF THE INVENTION[0005] The current invention involves new systems and methods for providing a multi- bit digital interface between two or more graphics adapters. A connection device couples to finger-type edge connectors on each graphics adapter to couple the graphics adapters to each other, providing the multi-bit digital interface. Use of the finger-type edge connector does not require any additional socket or socket-type connector on the graphics adapter. An end-user may install an additional graphics adapter and connect it to an existing graphics adapter by installing the connection device. One of the graphics adapters may be configured to digitally combine pixel data produced by each graphics adapter for output to a single display device.[0006] Various embodiments of the invention include a graphics adapter printed circuit board including a system connector, a first graphics processing unit coupled to the system connector, and a graphics edge connector that is coupled to the first graphics processing unit. The system connector is configured to couple the graphics adapter printed circuit board to a motherboard. The first graphics processing unit is affixed to the graphics adapter printed circuit board. 
The graphics edge connector is configured to couple to a first socket within a removable connection device, the removable connection device providing a portion of a multi-bit digital connection between the first graphics processing unit and a second graphics processing unit affixed to another graphics adapter printed circuit board when the graphics edge connector is coupled to the first socket within the removable connection device, the removable connection device including a second socket, wherein the first socket and the second socket are electrically connected to provide the portion of the multi-bit connection between the first graphics processing unit and the second graphics processing unit.[0007] Various embodiments of the invention include a connection device for providing a multi-bit digital connection between a first graphics adapter and a second graphics adapter. The connection device includes a first socket, a second socket, and electrically conductive connections coupling each bit of the first socket to the second socket to provide the multi-bit digital connection between the first graphics adapter and the second graphics adapter. The first socket is configured to couple to a graphics edge connector included within a printed circuit board supporting the first graphics adapter. The second socket is configured to couple to a graphics edge connector included within a printed circuit board supporting the second graphics adapter.BRIEF DESCRIPTION OF THE DRAWINGS[0008] Accompanying drawing(s) show exemplary embodiment(s) in accordance with one or more aspects of the present invention; however, the accompanying drawing(s) should not be taken to limit the present invention to the embodiment(s) shown, but are for explanation and understanding only.[0009] Figure 1 is an exemplary embodiment of a graphics adapter in accordance with one or more aspects of the present invention.[0010] Figures 2A and 2B are exemplary embodiments of connection devices in accordance with one or more aspects of the present invention.[0011] Figure 3 is an exemplary embodiment of a graphics processing system in accordance with one or more aspects of the present invention.[0012] Figures 4A, 4B, 4C, 4D, and 4E are other exemplary embodiments of graphics adapters in accordance with one or more aspects of the present invention.[0013] Figures 5A and 5B are other exemplary embodiments of connection device configurations in accordance with one or more aspects of the present invention.[0014] Figures 6A and 6B are other exemplary embodiments of connection devices in accordance with one or more aspects of the present invention. DETAILED DESCRIPTION[0015] In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.[0016] Each graphics adapter is manufactured as a printed circuit board (PCB) including a finger-type edge connector configured to couple to a removable connection device designed for installation by an end-user. The incremental cost of the edge connector is less than a traditional socket-type connector that may be affixed to the graphics adapter. 
When two or more graphics adapters are installed in a system the graphics edge connectors of each graphics adapter may be coupled to each other via the connection device providing a dedicated digital multi-bit connection between the graphics adapters. The connection device may be installed by an end-user as each additional graphics adapter is installed in the system. Furthermore, the digital multi-bit connection may be used to transfer image data from one graphics adapter to another for output to a display device. Image data produced by one graphics adapter may be combined with image data produced by another graphics adapter in the digital domain, prior to digital to analog conversion for output to a display.[0017] Figure 1 is an exemplary embodiment of a graphics adapter 100 in accordance with one or more aspects of the present invention. Typically a PCB including a finger-type system connector 120 that is configured to connect to a system motherboard slot supports Graphics adapter 100. Conductive "fingers" are affixed to the PCB when graphics adapter 100 is manufactured to produce system connector 120. System connector 120 typically conforms to an industry standard interface specification, such as peripheral component interface express (PCI- Express(TM)). In some embodiments of the present invention system connector 120 is replaced with a socket-type connector or a connector that is affixed to the PCB during the manufacturing process. [0018] A GPU (graphics processing unit) 150 is affixed to a PCB supporting graphics adapter 100 and is coupled to system connector 120 by wire traces on the PCB. GPU 150 typically receives graphics data and instructions from a host processor through system connector 120. GPU 150 is also coupled to a finger-type graphics edge connector 110 by wire traces on the PCB. A display output connector 130 is typically exposed through an enclosure containing graphics adapter 100 installed on a motherboard so that an end-user may connect a display device input connector to display output connector 130.[0019] When multiple graphics adapters are installed in a system and GPU 150 is configured as a slave device, GPU 150 outputs image data (processed graphics data) to graphics edge connector 110. When one or more graphics adapters are installed in a system and GPU 150 is configured as a master device, GPU 150 outputs image data to a display output connector 130 using wire traces on the PCB.In some embodiments of the present invention, when GPU 150 is configured as a master device and multiple graphics adapters are installed in a system, GPU 150 outputs display synchronization signals, e.g. horizontal and vertical sync, to graphics edge connector 110.[0020] In some embodiments of the present invention, graphics edge connector 110 includes signals for two ports, one for use when GPU 150 is configured as a slave device and another for use when GPU 150 is configured as a master device. In other embodiments of the present invention, graphics edge connector 110 includes signals for a single port and the signals input and output by GPU 150 to/from the port vary depending on whether GPU 150 is configured as a master device or as a slave device.[0021] Figures 2A and 2B are exemplary embodiments of connection devices in accordance with one or more aspects of the present invention. A connection device may be installed between two graphics adapters to couple signals within each graphics edge connector 110. 
An embodiment of a connection device, designed for installation or removal by an end-user, shown in Figure 2A includes a connector PCB 210 with a socket 220 affixed to opposing ends of connector PCB 210. Conductive traces are fabricated as part of connector PCB 210 to directly connect pins of socket 220 on one end of connector PCB 210 to pins of socket 220 on the opposing end of connector PCB 210. In some embodiments of the present invention, additional components may be included on connector PCB 210, such as termination devices, pull-up or pull-down resistors, or the like. In other embodiments of the present invention, still other components may be included on connector PCB 210, as described in conjunction with Figures 6A and 6B.[0022] Another embodiment of a connection device, designed for installation or removal by an end-user, shown in Figure 2B includes a connector flexible cable 240 with a socket 230 affixed to each end of connector flexible cable 240. Connector flexible cable 240 includes wires within a flexible insulating wrapping that directly connect pins of socket 230 on one end of connector flexible cable 240 to pins of socket 230 on the opposing end of connector flexible cable 240. Those skilled in the art will recognize that other components and mechanisms may be employed to produce a connection device.[0023] A connection device, such as those illustrated in Figures 2A and 2B provides a multi-bit connection for several signals. For example, image data may be transferred from a slave device to a master device or to another slave device using a number of single bit connections for data, a data valid signal, and a clock. The data and data valid may be transferred on one or both edges of the clock. One or more buffer management signals may also be connected between GPUs using the connection device. In some embodiments of the present invention, a buffer management signal indicates when all of the GPUs producing image data for a display should swap buffers, i.e., swap the back buffer with the front buffer. Raster synchronization signals may also be transferred from a master device to a slave device to communicate the display raster position.[0024] Figure 3 is an exemplary embodiment of a graphics processing system in accordance with one or more aspects of the present invention. A motherboard 300 may be included within a . desktop computer, server, laptop computer, palm-sized computer, tablet computer, game console, cellular telephone, computer based simulator, or the like. Motherboard 300 includes a host processor 320, a main memory 310, and a chipset 330 that is directly coupled to a bridge 335. In some embodiments of motherboard 300, chipset 330 may include a system memory bridge and an input/output (I/O) bridge that may include several interfaces such as, Advanced Technology Attachment (ATA) bus, Universal Serial Bus (USB), Peripheral component interface (PCI), or the like. A bridge 335 provides an interface between chipset 330 and any graphics adapter installed in a slot 350.[0025] A master graphics adapter 340 is coupled to motherboard 300 via a slot 350. A slave graphics adapter 360 is coupled to motherboard via another slot 350. Although only a single slave graphics adapter 360 is illustrated, additional slave graphics adapters 360 and additional master graphics adapters may be installed on motherboard 300. 
An end-user can easily install each graphics adapter and a connection device 345, such as the connection devices shown in Figures 2A and 2B, as desired to improve rendering performance in terms of image quality or rendering speed. For example, two or more graphics adapters may be used to render images with improved image quality or two or more graphics adapters may be used to render images at a higher frame rate. Furthermore, the graphics edge connector of a graphics adapter may be positioned such that when connection device 345 is coupled to the graphics edge connector, connection device 345 does not protrude more that one centimeter beyond the graphics adapter. Therefore, connection device 345 may be entirely enclosed within a system enclosure containing motherboard 300, master graphics adapter 340, and slave graphics adapter 360.[0026] In some embodiments of the present invention, master graphics adapter 340 is directly coupled to at least one display device and slave graphics adapter 360 is directly coupled to at least one display device. In other embodiments of the present invention, master graphics adapter 340, is directly coupled to two or more display devices. One or more slave graphics adapters 360 provide image data to master graphics adapter 340 via connection device 345, embodiments of which are described in conjunction with Figures 2A, 2B, 6A, and 6B. Connection device 345 is coupled to graphics edge connectors on master graphics adapter 340 and slave graphics adapter 360. In some embodiments of the present invention a connection device such as that shown in Figure 2B (including connector flexible cable) may be used to couple graphics adapters of different heights or with misaligned graphics edge connectors.[0027] A primary connection between master graphics adapter 340 and one or more slave graphics adapters 360 is provided by bridge 335. In some embodiments of the present invention, the primary connection couples master graphics adapter 340 and one or more slave graphics adapters 360 through bridge 335, chipset 330, and main memory 310 and data transfers between master graphics adapter 340 and the one or more slave graphics adapters 360 are controlled by host processor 320.[0028] Master graphics adapter 340 outputs image data to a display device, such as a cathode ray tube (CRT), flat panel display, or the like. Slave graphics adapter 360 may process a larger portion of an image than master graphics adapter 340 and transfer the larger portion of the image to master graphics adapter 340 via connection device 345. In some embodiments of the present invention, processing of the image may be distributed between master graphics adapter 340 and one or more slave graphics adapters 360 based on the processing capability of each graphics adapter. Furthermore, synchronization signals, e.g., buffer swap, horizontal sync, and vertical sync, may be transferred between slave graphics adapter 360 and primary graphics adapter 340 using connection device 345.[0029] In one embodiment of the present invention, 12 bits of image data, a data enable signal, and a clock are output by slave graphics adapter 360. A horizontal sync and vertical sync are output by master graphics adapter 340 to slave graphics adapter 360. A buffer swap signal is a tristate signal, specifically a wired AND using a pull-up component that is pulled low by each graphics adapter that is ready to swap buffers. 
Each graphics adapter also samples the buffer swap signal to determine when all of the graphics adapters are ready to swap buffers. [0030] In some embodiments of the present invention, connection device 345 configures each graphics adapter coupled to it as either a master graphics adapter or as a slave graphics adapter. For example, a single bit connection within each socket of connection device 345 configures master graphics adapter 340 as a master graphics adapter and configures slave graphics adapter 360. Specifically, a graphics driver reads the state of the single bit connection set by connection device 345 and configures each graphics adapter accordingly. In those embodiments of the present invention, the configuration of master and slave may be reversed by installing connection device 345 after rotating it by 180 degrees.[0031] Figures 4A, 4B, 4C, and 4D are other exemplary embodiments of graphics adapters in accordance with one or more aspects of the present invention. Graphics adapter 425, shown in Figure 4A includes a GPU 405 that is coupled to a system connector 420 and to a graphics edge connector 401. System connector 420 is typically an industry standard connector configured to couple graphics adapter 400 to a motherboard. In some embodiments of the present invention GPU 405 is directly coupled to a display output connector 430. In other embodiments of the present invention, one or more additional devices may be affixed to the PCB supporting graphics adapter 425 and one of the additional devices may be coupled between GPU 405 and graphics edge connector 401.[0032] Graphics edge connector 401 is positioned on the bracket side of graphics adapter 425, adjacent to display edge connector 430. Although graphics edge connector 401 need not protrude outside of an enclosure containing a motherboard coupled to graphics adapter 425, a connection device coupling graphics adapter 425 to another graphics adapter may protrude outside of the enclosure. An advantage of this configuration is that an end-user does not necessarily need to open the enclosure to install the connection device. Also, graphics edge connector 401 may be used when a connection device coupled to a graphics edge connector, such as graphics edge connector 110 would obstruct an enclosure containing motherboard 300, thereby preventing proper installation of the enclosure. [0033] Graphics adapter 435, shown in Figure 4B also includes GPU 405 coupled to system connector 420 and to a graphics edge connector 402. GPU 405 is directly or indirectly coupled to display output connector 430. Graphics edge connector 402 is positioned on a side of graphics adapter 435 opposing display output connector 430. Therefore, a connection device coupling graphics adapter 425 to another graphics adapter will not protrude outside of an enclosure containing a motherboard coupled to graphics adapter 435.[0034] Graphics adapter 415, shown in Figure 4C also includes GPU 405 coupled to system connector 420 and to a graphics edge connector 403. GPU 405 is directly or indirectly coupled to display output connector 430. Graphics edge connector 403 is positioned on a side of graphics adapter 415 adjacent to system connector 420. Therefore, a connection device coupling graphics adapter 415 to another graphics adapter will not protrude outside of an enclosure containing a motherboard coupled to graphics adapter 415. 
Although a single device, GPU 405 is shown in Figures 4A, 4B, and 4C, those skilled in the art will recognize that additional devices and components for performing a variety of functions, e.g. data storage, signal conversion, or the like, may be included in various embodiments of the present invention.[0035] Graphics adapter 400, shown in Figure 4D includes two GPUs, a GPU 401 and a GPU 402. The two GPUs are not necessarily configured to perform the same function, however each GPU is coupled to system connector 420 via a bridge device, bridge 403. Bridge 403 performs functions similar to bridge 335 shown in Figure 3, interfacing between bridge 335 and each GPU included in graphics adapter 400. In other embodiments of the present invention, the functionality of bridge 403 is integrated into one or both of the GPUs and bridge 403 is omitted.[0036] In some embodiments of the present invention each GPU is coupled to a single graphics edge connector. For example, GPU 401 is coupled to graphics edge connector 411 and GPU 402 is coupled to graphics edge connector 412. GPU 401 may be configured as a slave device and GPU 402 may be configured as a master device or visa versa. GPU 401 and GPU 402 may both be configured as slave devices or GPU 401 and GPU 402 may both be configured as master devices, each outputting image data to a different display device connected to display output connector 430.[0037] In some embodiments of the present invention GPU 401 and GPU 402 are each coupled to both graphics edge connector 411 and graphics connector 412. When three or more graphics adapters are installed in a system and graphics adapter 400 is positioned between a slave graphics adapter and a master graphics adapter, image data is output via a first edge connector to the master graphics adapter. Image data is received from the slave graphics adapter via the second edge connector.[0038] Graphics adapter 445, shown in Figure 4E includes a single GPU, GPU 405 coupled to system connector 420 and to graphics edge connectors 411 and 412. Graphics adapter 445 may be positioned between a slave graphics adapter and either a master graphics adapter or another slave graphics adapter. GPU 405 provides signals for two ports and each port is coupled to a single graphics edge connector. GPU 405 may be configured to receive image data via one of connectors 411 or 412 and to output image data via connector 412 or 411 , respectively. GPU 405 is directly or indirectly coupled to display output connector 430.[0039] Although graphics edge connectors 411 and 412 are positioned on the side of graphics adapter 445 opposing system connector 420, one or both graphics edge connectors 411 and 412 may be positioned on a different side of graphics adapter 445. Furthermore, display output connector 430 may be omitted from any of graphics adapters 400, 415, 425, 435, and 445 to provide a graphics adapter that functions as an accelerator without a display output.[0040] Figure 5A is an exemplary embodiment of a connection device configuration in accordance with one or more aspects of the present invention. Multiple graphics adapters, graphics adapters 511 , 512, and 513, are coupled together using two connection devices 510. Each connection device 510 provides a point-to-point connection between two of the multiple graphics adapters. Specifically, a first connection device 510 couples a first graphics adapter, graphics adapter 511 and a second graphics adapter 512 via graphics edge connectors. 
Graphics adapter 511 may be a graphics adapter 100, 445, or 400 and graphics adapter 512 may be a graphics adapter 445 or 400. Therefore, the first connection device 510 may couple either graphics edge connector 110 to graphics edge connector 411 or graphics edge connectors 411. Graphics adapter 511 may be configured as a slave graphics adapter providing image data to graphics adapter 512 which may be configured as either a slave graphics adapter or a master graphics adapter. Alternatively, graphics adapter 511 may be configured as a master graphics adapter receiving image data from graphics adapter 512.[0041] A second connection device 510 couples the second graphics adapter, graphics adapter 512 and a third graphics adapter, graphics adapter 513 via graphics edge connectors 411. Graphics adapter 513 may be a graphics adapter 100, 445, or 400. Therefore, the second connection device 510 may couple either graphics edge connector 110 to graphics edge connector 412 or graphics edge connectors 412. Graphics adapter 512 may be configured as a slave graphics adapter providing image data to graphics adapter 513 which may be configured as either a slave graphics adapter or a master graphics adapter. Alternatively, graphics adapter 512 may be configured as a master graphics adapter receiving image data from graphics adapter 513. When graphics adapter 513 is a graphics adapter 400, one GPU, such as GPU 401 may output image data to display output connector 430 while the other GPU, GPU 402 outputs image data to graphics adapter 512.[0042] When the graphics adapter 512 is configured as a master graphics adapter it may receive image data from graphics adapters 511 and 513. When either graphics adapter 511 or graphics adapter 513 is configured as a master graphics adapter it may receive image data from graphics adapter 513 or graphics adapter 511 , respectively, through graphics adapter 512. Alternatively, graphics adapters 511 and513 may each be configured as master graphics adapters and when graphics adapter 512 is a graphics adapter 400, GPU 401 may output image data to graphics adapter 511 and GPU 402 may output image data to graphics adapter 513. Those skilled in the art will recognize that other configurations of the multiple graphics adapters may be used to produce image data for one or more display devices.[0043] Figure 5B is another exemplary embodiment of a connection device configuration in accordance with one or more aspects of the present invention.Multiple graphics adapters, such as graphics adapter 415, 425, 435, or 400, are coupled together using a connection device 520. In some embodiments of connection device 520, three sockets (one for each graphics adapter) are each configured to support signals for two ports and connection device 520 provides point- to-point connections between the graphics adapters. Specifically, connection device520 provides a point-to-point connection between graphics adapter 500 and graphics adapter 501 and between graphics adapter 501 and graphics adapter 502.[0044] In other embodiments of connection device 520, the connection to each graphics adapter may be selectively disabled using a switch, such as a programmable quickswitch component, as described in conjunction with Figure 6A. For example, graphics adapter 500 may be configured as a master graphics adapter and graphics adapter 502 may be configured as a slave graphics adapter providing image data to graphics adapter 500 through connection device 520. 
Graphics adapter 501 may be configured as a master device, functioning independent from graphics adapters 500 and 502 with any connection to connection device 520 disabled.[0045] Figure 6A is another exemplary embodiment of a connection device in accordance with one or more aspects of the present invention. The connection device shown in Figure 6A includes a connector PCB 610 with sockets 602 affixed to opposing ends of connector PCB 210. Although Figures 6A shows two sockets 602, additional sockets 602 may be used in other embodiments of the present invention. Conductive traces are fabricated as part of connector PCB 610 to directly connect pins of the socket 602 on one end of connector PCB 210 to pins of the socket 602 on the opposing end of connector PCB 610. [0046] A switch 605 is affixed to connector PCB 610 and coupled to a socket 602 and a switch 607 is affixed to connector PCB 610 and coupled to the other socket 602. In some embodiments of the present invention, switch 605 and switch 607 may be manually set by an end-user to enable or disable a connection via socket 602. In other embodiments of the present invention, switch 605 and switch 607 may be configured through socket 602 to enable or disable a connection via socket 602.[0047] Figure 6B is another exemplary embodiment of a connection device in accordance with one or more aspects of the present invention. The connection device shown in Figure 6B includes a connector PCB 620 with sockets 622 affixed to opposing ends of connector PCB 220. Although Figures 6B shows two sockets 622, additional sockets 622 may be used in other embodiments of the present invention. Conductive traces are fabricated as part of connector PCB 620 to directly connect pins of the socket 622 on one end of connector PCB 220 to pins of the socket 622 on the opposing end of connector PCB 620.[0048] An indication light, LED (light emitting diode) 615, is affixed to connector PCB 620 and coupled to one socket 622. Another indication light, LED 617 is affixed to connector PCB 620 and coupled to the other socket 622. A first signal controlling LED 615 may be generated by a first graphics adapter coupled to the one socket 622 and a second signal controlling LED 617 may be generated by a second graphics adapter coupled to the other socket 622. In some embodiments of the present invention, the first and the second signal may be generated to indicate whether each first graphics adapter or the second graphics is configured as a slave or as a master graphics adapter. In other embodiments of the present invention, the fist and the second signal may be generated to indicate whether each socket 622 is coupled to a graphics adapter. In still other embodiments of the present invention, the first and the second signal may be generated to indicate whether the first graphics adapter and the second graphics adapter are active, respectively.[0049] An end-user may install one or more additional graphics adapters and one or more connection devices in a system to improve graphics processing performance, e.g., frame rate, image quality, or the like. Each connection device, such as those shown in Figures 2A, 2B, 3, 5A, 5B, 6A, and 6B provides a multi-bit digital connection through a finger-type edge connector included in each graphics adapter for the transfer of graphics image data, synchronization signals, and buffer management signals. 
Furthermore, inclusion of the finger-type edge connector as a graphics edge connector increases the cost of the graphics adapter less than the use of a traditional socket type of connector. Therefore, end-users of systems including a single graphics adapter incur only a small cost for the option to upgrade their graphics performance at a later date by installing an additional graphics adapter and connection device.[0050] The invention has been described above with reference to specific embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The listing of steps in method claims do not imply performing the steps in any particular order, unless explicitly stated in the claim.[0051] All trademarks are the respective property of their owners. |
A three-dimensional metal-insulator-metal (MIM) capacitor is formed in an integrated circuit structure. The 3D MIM capacitor can include a bottom conductor including a bottom plate portion (e.g., formed in a metal interconnect layer) and a vertically extending sidewall portion extending from the bottom plate portion. An insulator layer is formed on the bottom plate portion and the vertically extending sidewall portions of the bottom conductor. A top conductor is formed over the insulator layer such that the top conductor is capacitively coupled to both the bottom plate portion and the vertically extending sidewall portions of the bottom conductor, thereby defining an increased capacitive coupling region between the top conductor and the bottom conductor. The vertically extending sidewall portions of the bottom conductor can be formed in a single metal layer, or from components of multiple metal layers. |
1. A metal-insulator-metal (MIM) capacitor comprising:a bottom conductor comprising:the floor portion; andat least one vertically extending side wall portion extending upwardly from the floor portion;the top conductor; andAn insulator layer disposed between the bottom plate portion and the at least one vertically extending sidewall portion of the top and bottom conductors.2. The MIM capacitor of claim 1, wherein the top conductor is formed in a bond pad layer.3. The MIM capacitor of claim 2, wherein the base plate portion of the bottom conductor comprises copper, the cup portion of the bottom conductor comprises tungsten, and the top conductor comprises aluminum.4. The MIM capacitor according to any one of claims 1 to 3, wherein said bottom conductor comprises a cup-shaped portion formed on said bottom plate portion and comprising said at least one vertically extending side wall section.5. The MIM capacitor of any one of claims 1 to 4, wherein the base plate portion of the bottom conductor is defined by a portion of a copper interconnect layer.6. The MIM capacitor of any one of claims 1 to 5, wherein the at least one vertically extending sidewall portion of the bottom conductor comprises elements of a plurality of metal layers of an integrated circuit device.7. The MIM capacitor according to any one of claims 1 to 6, wherein said at least one sidewall portion of said bottom conductor is formed in a wide barrel opening having a range of 0.5 to 2.0 The height-to-width aspect ratio within.8. The MIM capacitor according to any one of claims 1 to 6, wherein said at least one sidewall portion of said bottom conductor is formed in a wide barrel opening having a range of 0.8 to 1.2 The height-to-width aspect ratio within.9. The MIM capacitor of any one of claims 1 to 8, further comprising a bond pad laterally offset from the top conductor and conductively connected by a conductive via to the bottom conductor.10. The MIM capacitor of claim 9, wherein:the bond pad is formed from the same material as the top conductor;The conductive via is formed from the same material as the at least one vertically extending sidewall portion of the bottom conductor.11. A MIM capacitor according to any one of claims 10 to 11, wherein:the at least one vertically extending sidewall portion of the bottom conductor is formed in the bottom conductor opening; andthe conductive via is formed in the via opening;Wherein the lateral width of the bottom conductor opening is at least twice as large as the lateral width of the conductive via.12. The MIM capacitor of claim 11, wherein the bottom conductor opening and the conductive via are formed in a passivation layer.13. The MIM capacitor of any one of claims 11 to 12, wherein the lateral width of the bottom conductor opening is at least five times greater than the lateral width of the conductive via.14. A MIM capacitor according to any one of claims 1 to 13, wherein:the at least one vertically extending sidewall portion of the bottom conductor is formed in the bottom conductor opening; andat least a portion of the insulator layer is located in the bottom conductor opening; andAt least a portion of the top conductor is located in the bottom conductor opening and covers at least a portion of the insulator layer.15. The MIM capacitor of claim 14, wherein the top conductor includes a first portion positioned above a top portion of the insulator layer and a second portion extending down into the bottom conductor opening.16. 
A MIM capacitor according to any one of claims 1 to 15, wherein:the insulator layer is cup-shaped and defines an opening; andAt least a portion of the top conductor is located in the opening of the cup-shaped insulator layer.17. An integrated circuit device comprising:multiple integrated circuit components; andA metal-insulator-metal (MIM) capacitor according to any one of claims 1 to 16.18. A method of forming a metal-insulator-metal (MIM) capacitor, the method comprising:form the bottom conductor of the chassis twoat least one vertically extending sidewall portion extending upwardly from the base plate forming the bottom conductor;forming an insulator layer having a first insulator portion on the base plate and at least one vertically extending second insulator portion on the at least one vertically extending sidewall portion of the bottom conductor ;as well asA top conductor is formed on the insulator layer such that the insulator layer is disposed between both the base plate and the at least one vertically extending sidewall portion of the top conductor and the bottom conductor.19. The method of claim 18, wherein forming the top conductor comprises:depositing a bond pad layer; andPortions of the bond pad layer are removed to define the top conductor and a plurality of bond pads conductively connected to a plurality of integrated circuit components, wherein the top conductor extends down into an opening defined by the insulator layer .20. The method of claim 19, wherein forming the base plate comprises:forming the top metal layer of the multilayer interconnect structure; andPortions of the top metal layer of the multilayer interconnect structure are removed to define the backplane.21. The method of any one of claims 18 to 20, further comprising:forming a bottom conductor opening and a via opening laterally offset from the bottom conductor opening;forming the vertically extending sidewall portion of the bottom conductor in the bottom conductor opening;forming a conductive via in the via opening laterally offset from the bottom conductor opening;depositing a bond pad layer; andPortions of the bond pad layer are removed to define (a) the top conductor and (b) a MIM bond pad laterally offset from the top conductor and in contact with the conductive via, wherein the MIM A bond pad is conductively connected to the base plate of the bottom conductor through the conductive via.22. The method of claim 21, comprising:forming the bottom conductor opening and the via opening simultaneously; andThe vertically extending bottom conductor sidewall and the conductive via are formed simultaneously. |
Three-dimensional Metal-Insulator-Metal (MIM) CapacitorsRelated Patent ApplicationsThis application claims priority to commonly owned U.S. Provisional Patent Application Serial No. 63/070,294, filed August 26, 2020, which is hereby incorporated by reference in its entirety for all purposes.technical fieldThe present disclosure relates to metal-insulator-metal (MIM) capacitors, and more particularly, to three-dimensional (3D) MIM capacitors.Background techniqueA metal-insulator-metal (MIM) capacitor is a capacitor constructed from a metal top plate, a metal bottom plate, and an insulator (dielectric) sandwiched between the two metal plates.MIM capacitors are important components in many circuits, such as many analog, mixed-signal and radio frequency complementary metal-oxide-semiconductor (RF CMOS) circuits. Due to lower resistance, better matching, and/or better signal-to-noise ratio, MIM capacitors typically offer better performance than alternatives such as POP (polymer-oxide-polymer) capacitors and MOM (metal-oxide-metal lateral communication) capacitors. amount) better performance of the capacitor.MIM capacitors are usually placed just below the top metal layer, for example using the existing top-1 metal layer as the bottom plate, constructed with different metals such as titanium or titanium nitride (Ti/TiN), tantalum or titanium nitride (Ta/TaN ) or tungsten (W)) and connect the overlying top metal layer to the top and bottom plates of the capacitor through corresponding vias. The top plate typically has a higher resistance than the bottom plate, for example because the top plate may be limited by thickness constraints and material selection for integration, thereby limiting the performance of conventional MIM capacitors.Figures 1A and 1B show two examples of conventional MIM capacitor structures. FIG. 1A shows a conventional MIM capacitor 100A built on aluminum interconnects. The MIM capacitor 100A includes an insulator layer 112A formed between an aluminum base plate 114A (top-1 metal layer) and a metal top plate 116A (eg, Ti, TiN, or aluminum (Al)). A1 Bottom plate 114A and metal top plate 116A are each connected to respective contacts 120A and 122A (top metal layer) by one or more vias 124A and 126A, for example each by filling the via hole with tungsten or other suitable metal. form. The insulator layer 112A may be, for example, a SiN layer having a thickness of about 1000 Å.FIG. 1B shows another conventional MIM capacitor 100B built on copper (Cu) interconnects. The MIM capacitor 100B includes an insulator layer 112B formed between a Cu base plate 114B (top-1 metal layer) and a metal top plate 116B (eg, Ta, TaN, or TiN). The Cu bottom plate 114B and the metal top plate 116B are each connected to corresponding contacts 120B and 122B (top metal layer) through one or more vias 124B and 126B, each via being conductively filled, for example, with tungsten, copper, or other suitable metal. hole formed. As with the capacitor 100A built on Al interconnects, the insulator layer 112B of the capacitor 10B built on Cu interconnects may be, for example, a SiN layer having a thickness of about 2000 Å. 
Layer 112B also serves as a dielectric diffusion barrier for copper base plate 114B.As used herein, "via" refers to a conductive via formed by plugging or otherwise depositing a conductive material (such as tungsten) in the via that The holes have a small diameter or width (eg, a diameter or width below 1 μm), and thus have a relatively high electrical resistance, eg, a resistance of at least 1 ohm per via. For example, conventional vias (e.g., vias 124A, 126A, 124B, and/or 126B shown in FIGS. or other high resistance material, may have a resistance of about 10 ohms/via. Therefore, conventional MIM capacitors typically include multiple vias (eg, multiple vias between top plate and top plate contacts and/or multiple vias between bottom plate and bottom plate contacts) to somewhat reduce the overall resistance. As used herein, in the context of a MIM capacitor, "via connection" refers to a via extending from a capacitor plate (top or bottom plate) to an overlying conductive contact.Additionally, MIM capacitors are typically expensive to construct, eg, compared to certain other types of capacitors. For example, MIM capacitors generally require additional masking layers and many additional process steps compared to POP capacitors and MOM capacitors. MIM capacitors also typically require a relatively large silicon area, resulting in inefficient area usage, especially in the case of larger MIM capacitors.Additionally, in conventional MIM capacitor 100B, insulator layer 112B is in direct contact with the upper surface of copper base plate 114B, resulting in a lower breakdown voltage typically due to the Cu screwlock (collision surface) at the upper surface of base plate 114, e.g., as Indicated by "H" in FIG. 1B. Furthermore, in conventional MIM capacitors, the top plate is thinner and thus provides a higher series resistance because the vertical thickness of the top plate is limited by the difference between the adjacent metal layers (e.g., the top metal layer and the top-1 metal layer) in which the MIM capacitor is formed. The vertical distance between them is limited.There is a need for MIM capacitors that can be manufactured at lower cost, have improved space density, and have improved breakdown voltage.Contents of the inventionEmbodiments of the present invention provide a three-dimensional (3D) MIM capacitor formed in an integrated circuit structure. The 3D MIM capacitor may include:(a) a bottom conductor comprising (i) a horizontally extending floor portion and (ii) at least one vertically extending sidewall portion projecting upwardly from said floor portion,(b) the top conductor, and(c) an insulator layer disposed between the top conductor and both the horizontally extending floor portion and the vertically extending sidewall portion of the bottom conductor.According to this structure, the top conductor is capacitively coupled to both the bottom plate portion and the sidewall portion of the bottom conductor, which defines a substantially Larger capacitive coupling area.The 3D MIM capacitors disclosed herein are referred to as "three-dimensional" in contrast to prior art "two-dimensional" (2D) MIM capacitors in which the capacitor extends only in the horizontal plane (x, y directions). A 3D MIM capacitor has not only a horizontal portion of the capacitor, but also a “sidewall” portion of the capacitor, where the capacitor extends vertically (z-direction). 
Accordingly, the capacitors disclosed herein are referred to as 3D MIMs because they extend in all 3 dimensions (x, y, and z).In some embodiments, a 3D MIM capacitor is built in only one layer between two adjacent metal interconnect layers (inclusively), and is labeled as a single layer 3D MIM capacitor. In some embodiments, a 3D MIM capacitor is constructed using multiple interconnect layers (involving two or more layers between two adjacent metal layers, and involving more than two adjacent metal interconnect layers), and Labeled as a multilayer 3D MIM capacitor. Compared with single-layer 3D MIM capacitors, multilayer 3D MIM capacitors extend further in the vertical direction and achieve better area efficiency at the cost of process complexity.Some embodiments provide a single layer 3D MIM capacitor and methods of fabrication, while other embodiments provide multilayer 3D MIM capacitors and methods of fabrication. In some embodiments, the 3D MIM capacitor bottom plate is formed of copper and lined with W or TiN to improve breakdown voltage (e.g., to mitigate negative effects from Cu screwlock), and the top plate is formed of aluminum, which can interface with the bond pads Metals are fabricated simultaneously and provide lower series resistance. In some embodiments, due to 3D integration, 3DMIM capacitors have significant area efficiency over conventional 2D MIM capacitors, and thus reduce cost.In one aspect, a 3D MIM capacitor includes: (a) a bottom conductor including a bottom plate portion and at least one vertically extending sidewall portion extending upwardly from the bottom plate portion; (b) a top conductor and (c) an insulator layer disposed between both the floor portion and the at least one vertically extending sidewall portion of the top and bottom conductors.In some implementations, the top conductor is formed in the bond pad layer.In some embodiments, the bottom conductor includes a cup-shaped portion formed on the floor portion and including the at least one vertically extending sidewall portion.In some implementations, the floor portion of the bottom conductor comprises copper, the cup portion of the bottom conductor comprises tungsten or TiN, and the top conductor comprises aluminum.In some implementations, the floor portion of the bottom conductor is defined by a portion of a copper interconnect layer.In some embodiments, at least one vertically extending sidewall portion of the bottom conductor comprises elements of a plurality of metal layers of an integrated circuit device.In some embodiments, the at least one sidewall portion of the bottom conductor is formed in a wide barrel opening having a height in the range of 0.5 to 2.0, such as in the range of 0.8 to 1.2 vs. width aspect ratio.In some embodiments, the 3D MIM capacitor further includes a bond pad laterally offset from the top conductor and conductively connected to the bottom conductor by at least a first vertically extending conductive via.In some embodiments, the bond pad is formed of the same material as the top conductor and the conductive via is formed of the same material as the at least one vertically extending sidewall portion of the bottom conductor .In some embodiments, the vertically extending sidewall portion of the bottom conductor is formed in a bottom conductor opening, and the conductive via is formed in the via opening, wherein the lateral width of the bottom conductor opening is is at least twice as large as the lateral width of the conductive via. 
In some implementations, the lateral width of the bottom conductor opening is at least five times greater than the lateral width of the conductive via.In some implementations, the bottom conductor opening and the conductive via are formed in a common passivation layer.In some embodiments, the at least one vertically extending sidewall portion of the bottom conductor is formed in a bottom conductor opening, at least a portion of the insulator layer is located in the bottom conductor opening, and the top conductor's At least a portion is located in the bottom conductor opening and covers at least a portion of the insulator layer. In some embodiments, the top conductor includes a first portion positioned above the top portion of the insulator layer and a second portion extending downward into the bottom conductor opening.In some embodiments, the insulator layer is cup-shaped and defines an opening, and at least a portion of the top conductor is located in the opening of the cup-shaped insulator layer.In another aspect, an integrated circuit device includes a plurality of electronic devices and a 3DMIM capacitor as disclosed herein.In another aspect, a method of forming a 3D MIM capacitor is provided. The method may include forming a base plate of a bottom conductor; forming at least one vertically extending sidewall portion of the base conductor extending upwardly from the base plate; forming an insulator layer having an insulator layer on the base plate a first insulator portion, and at least one vertically extending second insulator portion on said at least one vertically extending sidewall portion of said bottom conductor; and forming a top conductor on said insulator layer such that said An insulator layer is disposed between both the base plate and the at least one vertically extending sidewall portion of the top and bottom conductors.In some embodiments, forming the top conductor includes depositing a bond pad layer, and removing portions of the bond pad layer to define the top conductor and a plurality of bond pads conductively connected to a plurality of integrated circuit components, wherein The top conductor extends down into the opening defined by the insulator layer.In some embodiments, forming the bottom conductor plate includes forming a top metal layer of the multilayer interconnect structure, and removing portions of the top metal layer of the multilayer interconnect structure to define the bottom conductor plate.In some embodiments, the method further includes forming a bottom conductor opening and a via opening laterally offset from the bottom conductor opening; forming the vertically extending portion of the bottom conductor in the bottom conductor opening. 
sidewall portions; forming a conductive via in said via opening laterally offset from said bottom conductor opening; depositing a bond pad layer; and removing a portion of said bond pad layer to define (a) and (b) a MIM bond pad laterally offset from the top conductor and in contact with the conductive via, wherein the MIM bond pad is conductively connected to the bottom conductor through the conductive via of the bottom plate.In some embodiments, the method includes simultaneously forming the bottom conductor opening and the via opening, and simultaneously forming the vertically extending bottom conductor sidewall and the conductive via.Description of drawingsA more complete understanding of the present disclosure can be obtained by referring to the following description taken in conjunction with the accompanying drawings, in which:1A and 1B show cross-sectional views of two conventional MIM capacitor structures;2 shows a cross-sectional view of a conventional structure of an aluminum bond pad connected to a copper interconnect structure through a tungsten via;3A-3C show cross-sectional views of an example single-layer 3D MIM capacitor according to one embodiment of the invention;4A-4I show an example process for forming the example single-layer 3D MIM capacitor shown in FIGS. 3A-3C according to one embodiment of the invention; and5A-5H show an example process for forming an example multilayer 3D MIM capacitor according to one embodiment of the invention.It should be understood that reference numerals to any illustrated element that appear in multiple different figures have the same meaning across the multiple figures, and that any reference to or discussion of any illustrated element herein in the context of any particular figure also applies to every other drawing, if any, where the same illustrated elements are shown.detailed descriptionIn industry, copper (Cu) interconnects are often terminated with aluminum (Al) bond pads for full compatibility with conventional packaging processes. A set of tungsten (W) vias is typically used to connect the Al bond pad to the top metal layer (MTOP) of the Cu interconnect. 2 shows a cross-sectional view of a conventional structure 2000 connected to an Al bond pad 220 of a Cu interconnect MTOP structure 202 through a W via 212 formed in passivation layer 206 .3A-3C collectively show an example single-layer 3D MIM capacitor 300 according to one embodiment of the invention. Specifically, Figure 3A shows a first cross-sectional side view, Figure 3B shows a second cross-sectional side view taken through the cut line 3B-3B shown in Figure 3A, and Figure 3C shows a second cross-sectional side view taken through the cut shown in Figure 3A Top cross-sectional view taken at line 3C-3C. As collectively shown in FIGS. 3A to 3C , the bottom conductor 301 of a MIM capacitor includes (a) a horizontally extending bottom plate 302 and (b) a cup-shaped conductor 314 formed on the bottom plate 302 and having (i) formed A bottom portion 314A on the bottom plate 302 and (ii) a vertically extending side wall portion 314B extending upward from the bottom plate 302 . In some embodiments, the backplane 302 may be formed in a copper interconnect layer, such as a top copper interconnect layer referred to herein as a "Cu MTOP layer." A base plate 302 may be formed over a barrier layer 304 such as a Ta or TaN barrier layer. A cup-shaped conductor 314 may be formed in the bottom conductor opening 310 formed in the passivation layer 306 . 
The bottom conductor opening 310 may be a wide "bucket" opening, as discussed herein. Cup conductor 314 may be formed of tungsten (W) or other suitable material, such as the same material as conductive via 324 and formed at the same time as the conductive via, as discussed below. The cup-shaped conductor 314 is in electrical contact with the base plate 302 .The 3D MIM capacitor 300 is referred to as a "single-layer" MIM capacitor because it uses only a single metal interconnect layer to form the capacitor 300 .An insulator layer 320 is formed in the cup-shaped conductor 314 and includes (i) a bottom portion 320A formed on the bottom portion 314A of the cup-shaped conductor 314 and (ii) a vertically extending sidewall portion 314B covering the cup-shaped conductor 314 . side wall portion 320B. The insulator layer 320 may be a conformal layer formed of SiN or other suitable dielectric material.Top conductor 330 is formed over insulator layer 320 and extends down into a cup-shaped opening formed by insulator layer 320 and, in particular, defined by the top surface of bottom portion 320A and the surfaces of sidewall portion 320B. As shown, the cup-shaped conductor 314 of the bottom conductor 301 significantly increases the capacitive area between the top conductor 330 and the bottom conductor 301 by the horizontally extending bottom portion 320A and the vertically extending sidewall portion 320B of the insulator layer 320 . Conductor 330 may be formed from aluminum or other suitable material.Bottom conductor 301 may also be conductively connected to topside bond pad 334 , such as by at least one conductive via 324 connecting bond pad 334 to bottom plate 302 . In some embodiments, the bottom conductor opening 310 in which the cup-shaped conductor 314 is formed may be formed simultaneously with at least one narrow via opening in which at least one conductive via 324 is formed. Bottom conductor opening 310 and via opening may be filled simultaneously, eg, by tungsten deposition, to form cup-shaped conductor 314 and conductive via 324 .As shown in FIG. 3C , bottom conductor opening 310 formed with cup-shaped conductor 314 (followed by insulator layer 320 and top conductor 330 ) may have a square shape from a top view. In other embodiments, the bucket opening may have a rounded square shape, a rectangular shape, a rounded rectangular shape, a circular shape, an oval shape, a cross shape, or any other suitable shape.As shown in FIG. 3A , the three-dimensional structure of the MIM capacitor 300 not only defines a displacement current path through the bottom portion 320A of the insulator layer 320 (generally indicated by the dashed arrow CP bottom), but also defines a vertical extension through the insulator layer 320. The displacement current path of the sidewall portion 320B of (generally indicated by the dashed arrow CP sidewall). Each insulator layer sidewall 320B provides an additional capacitive coupling area between the top conductor 330 and the bottom conductor 301 . Bottom portion 320A of insulator layer 320 effectively defines a plate capacitor with top and bottom plates extending horizontally, and each insulator layer sidewall 320B effectively defines an additional plate capacitor with top and bottom plates extending vertically. 
Thus, the three-dimensional structure of the MIM capacitor 300 thereby defines a significantly increased capacitive coupling area between the top conductor 330 and the bottom conductor 301 , eg compared to conventional MIM capacitors.Compared to existing IC fabrication processes, the example 3D MIM capacitor 300 shown in FIG. 3 can be constructed using minimal additional process steps, eg, using only four additional process steps, including only one additional mask layer.FIGS. 4A-4I show schematic diagrams for forming an integrated circuit (IC) device including a single-layer 3D MIM capacitor 450 (eg, similar to the example 3D MIM capacitor 300 shown in FIG. 3 ) according to one embodiment of the present invention. Cross-sectional view of an example process. Each of FIGS. 4A-4I shows a cross-sectional view of an integrated circuit structure 400 under construction according to an embodiment of the present invention at two locations, the first location (labeled "bond pad"). ”), where the first bond pad (e.g., aluminum) is connected to the top interconnect layer (e.g., Cu MTOP layer) by conductive vias, which is a typical process in integrated circuit construction; and the first step of forming a single-layer 3D MIM capacitor 450 Two positions (labeled "3D MIM Capacitor").FIG. 4A shows a selected portion of a top interconnect layer 402 (eg, a Cu MTOP layer) in an IC structure 400 under construction. The first interconnect structure 402A of the top interconnect layer 402 is designated as a typical bond pad, while the second interconnect structure 402B of the top interconnect layer 402 under construction forms a floor for the bottom conductor of the MIM capacitor 450 . Interconnect structures 402A and 402B may be formed over respective barrier layers 404 (eg, Ta/TaN barrier layers), eg, by a process comprising Cu deposition on barrier layer 404 followed by copper CMP (chemical mechanical planarization) craft.As shown in FIG. 4B , after the top interconnect layer 402 is formed, a passivation region 406 may be deposited over the top interconnect layer 402 . The passivation region 406 may include a first passivation region portion 406A over the first interconnect structure 402A and a second passivation region portion 406B over the second interconnect structure 402B. Passivation region 406 is typically a combination of multiple dielectric films configured to protect underlying active integrated circuits. For example, passivation region 406 may include the following four layers deposited in the following order: (1) 0.1 μm silicon nitride, (2) 0.1 μm silicon-rich oxide (SRO), (3) 0.68 μm phosphosilicate Salt glass (PSG) and (4) 0.59 μm silicon oxynitride (SiON).Next, as shown in FIG. 4C , a photoresist layer may be deposited and patterned, followed by at least one etch to define the via openings 408A, 408B and width of the 3D MIM capacitor 450 under configuration. The number of barrel openings 410 . Via openings 408A, 408B and barrel opening 410 may be etched simultaneously. The shape and size of the wide barrel opening 410 may be selected based on various parameters, such as for efficient fabrication of the MIM capacitor 450 (e.g., efficient deposition of top plate material (e.g., aluminum) into the wide barrel opening 410) and/or for The desired performance characteristics of the resulting MIM capacitor 450. In some embodiments, wide barrel opening 410 may be formed to have a width W barrel in a range of 1 μm to 10 μm, and a vertical height H barrel in a range of 1 μm to 10 μm. 
In some embodiments, the wide barrel opening 410 has a width in the inward direction through the page in the range of 1 μm to 10 μm, which may be the same as the width W barrel shown, for example in a square or circular opening 410 case.In some embodiments, wide barrel opening 410 may be formed to have a height to width aspect ratio (H barrel/W barrel) of less than or equal to 2.0, for example, to allow efficient filling of wide barrel opening 210 with conformal material. For example, the wide barrel opening 410 may be formed to have an aspect ratio H barrel/W barrel in the range of 0.1 to 2.0, eg, in the range of 0.5 to 2.0. In some embodiments, wide barrel opening 410 may be formed to have an aspect ratio Hbarrel/Wbarrel less than or equal to 1.5, eg, for efficient filling of barrel opening 210 with conformal material. For example, the wide barrel opening 410 may be formed to have an aspect ratio H barrel/W barrel in the range of 0.5 to 1.5, or more specifically in the range of 0.8 to 1.2.In some implementations, the via openings 408A, 408B may be formed to have a width W via in the range of 0.1 μm to 0.8 μm. The width Wbucket of the wide barrel opening 410 is greater than the width Wththrough of the via openings 408A and 408B. For example, in some embodiments, the width Wbucket of wide barrel opening 410 is at least twice as large as the width Wththrough of via openings 408A and 408B. In a particular embodiment, the width Wbucket of barrel opening 410 is at least five times greater than the width Wththrough of via openings 408A and 408B.Next, as shown in FIG. 4D , a conductive conformal material 412 (eg, TiN, W, or other suitable metal) is deposited over structure 400 such that material 412 fills via openings 408A, 408B to form one or more vias. holes 424A, 424B and form a conformal layer on the bottom and sidewall surfaces of the wide barrel opening 410 . Accordingly, the conductive conformal material 412 is in electrical contact with the second interconnect structure 402B.As shown in FIG. 4E , chemical mechanical planarization (CMP) may be performed to remove portions of conductive material (eg, tungsten) 412 on the top side of structure 400, such as via openings 408A, 408B and outside of wide barrel opening 410. Part of Material 412. The remaining material 412 in the barrel opening 410 defines a cup-shaped conductor 414 that includes a bottom portion 414A and a sidewall portion 414B extending upwardly from the bottom portion 414A (ie, extending upwardly from the bottom plate 402B). The cup-shaped conductor 414 (eg, tungsten) and the underlying second interconnect structure 402B (eg, copper) together define the bottom conductor 401 of the formed 3D MIM capacitor 450 . As indicated above, the second interconnect structure 402B and the bottom portion 414A together form the floor of the bottom conductor 401 .Next, as shown in Figure 4F, an insulator layer 420, such as a layer of silicon nitride (SiN) or other conformal dielectric material, is deposited over the structure 400 and extends down into the wide barrel opening 410, covering the cup Shaped conductor 414. A bottom portion 420A of the insulator layer 420 is formed on the surface of the bottom portion 414A of the cup-shaped conductor 314 , and a sidewall portion 420B of the insulator layer 420 is formed to cover the vertically extending sidewall portion 414B of the cup-shaped conductor 414 . The insulator layer 420 defines the insulator layer in the formed 3DMIM capacitor. 
The insulator layer 420 may have any suitable thickness, such as a thickness in the range of 1000 Å to, for example, 2000 Å to 2000 Å, or about 2000 Å.Next, as shown in FIG. 4G , photoresist 418 may be deposited and etched (e.g., using an inexpensive i-line patterning stepper), followed by an insulator etch to remove the insulator layer 420 in the Selected portions on the top side of structure 400 . A resist strip may be performed to remove the remainder of the photoresist 418 .Next, as shown in FIG. 4H , bond pad metal 426 , such as aluminum, may be deposited extending into the remaining unfilled portion of wide barrel opening 410 to cover insulator layer 420 .Finally, as shown in FIG. 41 , bond pad metal 426 (eg, aluminum) can be patterned and etched to define bond pads 428 , 434 and capacitor top conductor 430 extending down into wide barrel opening 410 , A single layer 3DMIM capacitor 450 is thus formed. As shown, a second interconnect structure 402B (eg, copper) forming part of the backplane may be conductively connected to a topside bond pad 434 through at least one conductive via 424B. Accordingly, capacitor top conductor 430 is formed in the bond pad layer.5A-5H show cross-sectional views of an example process for forming (a) a multilayer 3D MIM capacitor 550 formed in a "3D MIM capacitor region," and (b) according to one embodiment of the invention. Nearby IC components 560 in example IC device 500 are connected to topside bond pads 528 formed in a "bond pad region." The completed multilayer 3D MIM capacitor 550 and IC element 560 connected to bond pad 528 are shown in FIG. 5H discussed below. The 3D MIM capacitor 550 is referred to as a "multilayer" MIM capacitor because it uses multiple metal interconnect layers to form the multilayer 3D MIM capacitor 550 . Specifically, as discussed below, multilayer 3D MIM capacitor 550 uses three metal interconnect layers to form the cup-shaped bottom conductor of the capacitor. IC component 560 may include any type of integrated circuit component or component such as a transistor, resistor, capacitor, inductor, diode, A/D converter, D/A converter, connected to one or more integrated circuit components, Interconnects of circuit elements or any other type of integrated circuit element. IC device 500 may include any number and type of IC components 560 .Referring first to FIG. 5A , an IC device 500 under construction includes a multilayer copper (Cu) interconnect structure 503 that includes Cu interconnect layers 503A, 503B, 503C; and/or additional lower layers ( not shown); and a passivation region 506 deposited over the Cu interconnect structure 503 . The top Cu interconnect layer 503C may be referred to as a Cu MTOP layer. As shown, the multilayer Cu interconnect structure 503 is configured to form (a) a cup-shaped conductor structure 502 comprising features 502A, 502B and 502C; and (b) IC element contact 505 including features 505B and 505C formed in Cu interconnect layers 503B and 503C, respectively. As shown, a barrier layer 504, such as a Ta/TaN barrier layer, may be deposited prior to deposition of each respective Cu interconnect feature.The cup-shaped conductor structure 502 defines the cup-shaped bottom conductor of the formed 3D MIM capacitor 550 . In the example shown, feature 502A defines the floor portion of cup-shaped conductor structure 502 in Cu interconnect layer 503A, feature 502B is formed as a first copper ring in Cu interconnect layer 503B, and feature 502C is formed as a Cu interconnect layer 503A. 
The second copper ring in connecting layer 503C. The first copper ring 502B and the second copper ring 502C may have any suitable shape (as viewed from above), such as circular, oval, square, rectangular, cross, or any other shape. The first copper ring 502B and the second copper ring 502C collectively define sidewalls extending upwardly from the floor portion 502A, and are in electrical contact with each other. Thus, in the illustrated embodiment, two Cu interconnect layers 503B and 503C are used to form the vertically extending sidewalls of the cup-shaped conductor structure 502 of the MIM capacitor 550 . In other words, the conductive sidewalls of the cup-shaped conductor structure 502 are two metal layers high and are in electrical contact with the bottom plate of the cup-shaped conductor structure 502 formed in the Cu interconnect layer 503A, thereby together forming a cup-shaped bottom conductor. It should be understood that any number of metal interconnect layers (eg, one, two (as shown), three, four, five, or more interconnect layers) may be used to form the vertical extension of the cup-shaped conductor structure 502 , for example to provide the desired height to width aspect ratio of the barrel opening 510 (see FIG. 5B discussed below) formed in the cup-shaped bottom conductor. That is, the height of the conductive sidewalls of the cup-shaped bottom conductor can be one, two, three, four, five or more metal layers.In the illustrated embodiment, the top copper ring 502C may include an optional lateral extension, indicated at 502C', suitable for connection to a topside bond pad, as discussed in FIG. 5H discussed below. exhibit.As shown in FIG. 5B , a layer of photoresist 509 may be deposited and patterned, followed by etching to form deep trenches defining wide barrel openings 510 in cup-shaped conductor structures 502 . In some implementations, multilayer deep trenches can be etched efficiently using oxide etching due to the high selectivity of oxide etching to Ta/TaN and Cu.As shown in FIG. 5C , a resist strip can be performed to remove the photoresist material 509 and a barrier layer 511 (eg, a TiN liner) can be deposited over the IC device 500 and extending down to the wide barrel opening 510 middle. The barrier layer 511 may have a thickness in a range of 1000 Å to 1000 Å or about 2000 Å.As shown in FIG. 5D , an insulator layer 512 , such as a SiN layer or other conformal material, is deposited over barrier layer 511 and extends down into barrel opening 510 . The as-deposited insulator layer 512 may have any suitable thickness, such as a thickness in the range of 1000 Å to, eg, 1000 Å to, eg, 1000 Å to 2000 Å, or about Å.As shown in FIG. 5E , a layer of photoresist 518 may be deposited and patterned to form bond pad openings 519 over structure 500 . As shown in FIG. 5F , a bond pad etch can be performed through bond pad opening 519, insulator layer 512, barrier layer 511 and passivation layer 506 to expose selected surfaces of top Cu interconnect layer 503C, especially Top surfaces of components 505C and 502C are shown. In one embodiment, optional lateral extensions 502C' are exposed.As shown in FIG. 5G , a bond pad metal 526 may be deposited, such as aluminum or other conformal metal, extending into the wide barrel opening 510 covering the insulator layer 510 . Bond pad metal 526 similarly extends into bond pad opening 519 to contact components 502C and 505C, respectively. 
Accordingly, a portion of bond pad metal 526 is formed in the bond pad layer that extends into wide barrel opening 510 and forms the top conductor of multilayer 3D MIM capacitor 550 as will be described below.Finally, as shown in FIG. 5H , the bond pad metal 526 (eg, aluminum) can be patterned and etched to define (a) the top conductor 530 formed from the cup-shaped conductor 502 and the top of the multilayer 3D MIM capacitor 550 . Side bond pad 534 , and (b) bond pad 528 connected to IC component 560 . The top side bonding pad 534 of the multilayer 3D MIM capacitor 550 is connected to the lateral extension 502C' of the top copper ring 502C of the cup conductor 502 . In other embodiments, the topside bond pads may be connected to any other component cup conductor 502 . As shown, the top conductor 530 of the multilayer 3D MIM capacitor 550 includes a first portion 530A located above the top portion of the insulator layer 512 , and a second portion 530B extending down into the wide barrel opening 510 . |
An anti-HBV antisense-oligonucleotide able to suppress the duplication and expression of fibronection which is the external matrix of the sinusoid cells in human liver and is specifically combined with the from S2 region of HBV, its sequence and structure, and its application in preparing the medicines for treating the disease, associated with HBV are disclosed. |
1.An antisense oligonucleotide complementary to the 5 'non-coding region and the coding region of fibronectin mRNA, the sequence of the antisense oligonucleotide is selected from one of the following:1)FN1: 5’-GCT CAT CTC CCT CCT CAC TC-3 ’;2)FN2: 5’-TTC GTT CCC ACT CAT CTC CA-3 ’;3)FN3: 5’-CTG GGG CTG AAC CAT TTG CT-3 ’;4)FN4: 5’-GCC TTC AAT AGT CAT TTC TG-3 ’;5)FN5: 5’-GAC GGT CCC ACT TCT CTC CA-3 ’.2.The antisense oligonucleotide according to claim 1, wherein the sequence structure of the antisense oligonucleotide is selected from one of the following:1)FN1: 5’-GCT CAT CTC CCT CCT CAC TC-3 ’;5)FN5: 5’-GAC GGT CCC ACT TCT CTC CA-3 ’.3.The antisense oligonucleotide according to claim 1, wherein the antisense oligonucleotide is chemically modified.4.The antisense oligonucleotide according to claim 3, wherein the chemical modification is a thio modification.5.Use of any of the antisense oligonucleotides described in claims 1, 2, 3, and 4 in the preparation of a medicament for the treatment of hepatitis B and related diseases. |
Structure and use of antisense oligonucleotide to inhibit fibronectin expressionTechnical field:The invention relates to the field of bioengineering medicine, in particular to the sequence, structure and structure of an antisense oligonucleotide (ASODN, antisense oligodexynucleotide) targeting fibronectin to treat hepatitis B virus (HBV) infection medicine.Background technique:HBV infection seriously threatens human health and is an important cause of hepatitis, cirrhosis and liver cancer. At present, there is still a lack of particularly effective therapeutic drugs, so new anti-HBV drugs have significant social and economic benefits.Foreign literature reports that in the body, fibronectin in the human liver may bind to the antigenic determinants encoded by the pre-S2 region of HBV in a species-restricted manner (Budkowska A, Bedossa P, Groh F .; J Virol 1995; Feb; 69 (2): 840-8); Panorton et al. Found that HBxAg can activate the expression of fibronectin (PANorton, HMGPVReis, Journal of Viral Hepatitis, 2004, 11, 332-341). Through screening and verification, our laboratory found that fibronectin has good specificity and drugability in anti-HBV infection, and can develop into a potential target for anti-HBV treatment.ASODN is a kind of artificially synthesized oligonucleotide fragments, mostly 15 to 30 nucleotides in length. Through the principle of base complementation, interfering with the transcription and translation of related genes, or the replication of the entire genome, its advantage lies in its theoretically high target specificity, which is an ideal gene-targeted therapeutic drug with precise selectivity. Due to the high specificity of the action of ASODN, it is considered to be a promising new antitumor and antiviral drug. Some well-known foreign pharmaceutical companies have taken antisense drugs as one of the key directions of their new drug research and development.The purpose of the present invention is to design an ASODN for fibronectin based on the published fibronectin gene mRNA sequence. By inhibiting the expression of fibronectin, it can prevent HBV infection, inhibit HBV replication and expression, and provide new special effects medicines for the treatment of chronic HBV infection.Summary of the invention:The main content of the present invention is: by searching the nucleic acid sequence database in GeneBank, the fibronectin mRNA reference sequence X02761 published by NCBI is selected, and the computer simulation of RNA secondary structure is carried out, and 5 unstable stem-loop structures are selected as ASODN Target of action. By comparing with the GeneBank online blast sequence, the selected target sequences have good specificity and will not interfere with the expression of other normal genes in humans. The thio antisense oligonucleotide sequence (S-ASODN) was synthesized on an automatic DNA synthesizer. The HepG2.2.15 cell model transfected with HBV DNA, which can stably express HBV protein and intact Dane's granules, was used to screen and evaluate the activity of the above ASODNs. The results showed that among the five ASODNs, FN1 and FN5 had a significant inhibitory effect on HBV DNA, HBsAg, HBeAg, fibronectin protein at 0.8 μmol / L, and the inhibitory activity on HBV DNA, HBsAg, HBeAg was greater than that of lamivudine active. In the concentration range of 0.2-0.8μmol / L, FN1 and FN5 have specific dose-dependent inhibitory activity against HBVDNA, HBsAg, HBeAg and fibronectin proteins. 
Sequence and properties of thioantisense oligonucleotides Number and name Target position (nt) Length (nt) Properties Antisense sequence (5'-3 ') 1234567 FN1FN2FN3FN4FN5FN1sFN5s 2188-22076478-649720-404704-47236607-66262188 20202020202020 AAAAASS GCT CAT CTC CCT CCT CAC TTCTC GTT CCC ACT CAT CTC CACTG GGG CTG AAC CAT TTG CTGCC TTC AAG AGT CAT TTC TGGAC GAG TAG CAG ACT GTC GCC CC ACT ACTA: antisense; S: justiceAccording to the present invention, inhibiting the expression of fibronectin can specifically inhibit the replication and expression of hepatitis B virus, and fibronectin may become a new drug target for the treatment and prevention of HBV-related diseases.According to the present invention, ASODNs targeting fibronectin mRNA can specifically inhibit the replication and expression of hepatitis B virus, and may become a new bioengineering drug for the treatment and prevention of HBV-related diseases.According to the present invention, the length of the antisense oligonucleotide is related to factors such as its cell permeability, target sequence binding affinity, and specificity of action. The length of FN1, FN5 is determined according to experiments, and the present invention includes FN1, FN5. Any length oligonucleotide with the same sequence.According to the present invention, in order to enhance nuclease resistance, bioavailability and tissue targeting of antisense oligonucleotides, the present invention includes thio modifications of FN1 and FN5.According to the present invention, the oligonucleotide of the present invention and its modification can be formulated into a preparation for parenteral administration according to methods known in the art.According to the present invention, the therapeutic composition of the oligonucleotide of the present invention and its modifications can be applied in the form of a separate active ingredient or composition including in combination with other antisense oligonucleotides and their derivative forms.According to the present invention, the treatment composition of the present invention includes the pharmacokinetics, pharmacokinetics, administration mode, administration route of the specific drug, the age, weight, liver and kidney function status of the recipient, the nature of the disease, The degree and duration of treatment, etc., should be administered at a suitable dose.The implementation of the invention has important social and economic benefits for the treatment of hepatitis B and related diseases that seriously endanger human health.BRIEF DESCRIPTION:Figure 1 Fibronectin mRNA expression in HepG2 cells, HepG2.2.15 cells and HepG2.2.15 cells before and after drug treatmentFigure 2 Fibronectin protein expression in HepG2 cells, HepG2.2.15 cells and HepG2.2.15 cells before and after drug treatmentFigure 3 Inhibitory effect of thiofibronectin antisense oligonucleotide sequence on HBV DNA in HepG2.2.15 cellsFigure 4 Inhibitory effect of thiofibronectin antisense oligonucleotide sequence on HBsAg in HepG2.2.15 cellsFigure 5 Inhibitory effect of thiofibronectin antisense oligonucleotide sequence on HBeAg in HepG2.2.15 cellsFig. 
6 Inhibitory effect of thiofibronectin antisense oligonucleotide sequences FN1 and FN5 on fibronectin protein in HepG2.2.15 cellsFigure 7 The effect of thiofibronectin antisense oligonucleotide sequence FN1 on the proliferation of HepG2.2.15 cellsFigure 8 The effect of thiofibronectin antisense oligonucleotide sequence FN5 on the proliferation of HepG2.2.15 cellsFigure 9 Inhibitory effect of thiofibronectin antisense oligonucleotide sequences FN1, FN5 and its sense sequence on fibronectin protein in HepG2.2.15 cellsFigure 10 Inhibitory effect of thiofibronectin antisense oligonucleotide sequences FN1, FN5 and its sense sequence on HBsAg secretion by HepG2.2.15 cellsFigure 11 Inhibitory effect of thiofibronectin antisense oligonucleotide sequences FN1, FN5 and their sense sequences on HBeAg secretion by HepG2.2.15 cellsFigure 12 The inhibitory effect of thiofibronectin antisense oligonucleotide sequence FN1 on fibronectin protein in HepG2.2.15 cells was dose-dependentFigure 13 The inhibitory effect of thiofibronectin antisense oligonucleotide sequence FN5 on fibronectin protein in HepG2.2.15 cells was dose-dependentFigure 14 The inhibitory effect of thiofibronectin antisense oligonucleotide sequence FN1 on the secretion of HBV DNA, HBsAg and HBeAg by HepG2.2.15 cells is dose-dependentFigure 15 The inhibitory effect of thiofibronectin antisense oligonucleotide sequence FN5 on the secretion of HBV DNA, HBsAg and HBeAg by HepG2.2.15 cells was dose-dependentFigure 16 Sequences and properties of 5 thiofibronectin antisense oligonucleotidesFigure 17 Sense and sequence of thio antisense oligonucleotides FN1, FN5detailed description:Example oneMaterials and Methods1.Drug preparationLamivudine is dissolved in PBS to a final concentration of 10 mmol / L; adefovir is dissolved in DMSO to a final concentration of 100 mmol / L, and when added, it is diluted to 1 mmol / L in culture medium to ensure that DMSO is in the culture medium The concentration is not higher than 0.1%; IBE5 is dissolved in DMSO to a final concentration of 250mg / mL.2.Cell cultureThe cells used were HepG2 cell line of liver cancer cell line and HepG2.2.15 cell line transfected with HBV DNA. HepG2.2.15 cells are derived from HepG2 cells and contain integrated HBV DNA. During cell culture, Dane ’s particles and HBsAg, HBV DNA, etc. can be secreted into the culture medium continuously and stably. HepG2 cells were cultured in DMEM cell culture medium containing 10% fetal bovine serum (FBS, Gibco), and HepG2.2.15 cells were cultured in MEM cell culture medium containing 10% fetal bovine serum, 380 μg / ml G418 (Promega).3.Cell dosingAfter HepG2.2.15 cells were overgrown, they were passaged 1: 3. After 48 hours, they were replaced with MEM culture medium containing 2% FBS, and lamivudine, adefovir or IBE5 were added at the same time. The final concentration of lamivudine was 25 μmol / L, the final concentration of adefovir was 1 μmol / L, the final concentration of IBE5 was 250 μg / ml, and the corresponding control cells were established. 
The cell culture medium containing the same concentration of drugs was changed 4 days after the drug addition, and the cells were collected on the 8th day.4.RT-PCR to detect the expression of fibronectin mRNA in HepG2, HepG2.2.15 cells and HepG2.2.15 cells treated with drugsAfter HepG2 cells, HepG2.2.15 cells are overgrown or HepG2.2.15 cells are added on the 8th day, the culture solution is aspirated, washed twice with PBS, and the total RNA of the cells is extracted according to the instructions of the Trizol kit (Invitrogen), UV spectrophotometry Calculated quantitatively, OD260 / 280 is between 1.8-2.0, RNA formaldehyde denaturing electrophoresis shows no degradation. Then reverse transcription is carried out, the specific method is as follows: each take 1μg total RNA, OligodT (15) 0.5μg, RNase inhibitor (40U) 0.1μl, a total of 10.3μl, mix, incubate at 70 ℃ 10min, immediately cooled on ice. Add 5μl of first-strand reaction buffer, 2.5μl of DTT, 0.7μl of RNase inhibitor, 1.0μl of dNTP (A, G, C, T10mM), mix well, react at 42 ℃ for 2min, add 0.5μl of reverse transcriptase superscriptII, 42 Incubate at ℃ for 1h, and finally denature at 70 ℃ for 15min. Reverse transcription products take 0.5μl as a template for PCR amplification, and double PCR with housekeeping gene GAPDH as an internal standard. The upstream primer of the target gene fibronectin is 5'TAGCCCTGTCCAGGAGTTCA3 ', the downstream primer is 5'CTGCAAGCCTTCAATAGTCA3', and the amplified fragment is 307bp. The upstream primer of GAPDH is 5'ACCACAGTCCATGCCATCAC3 ', the downstream primer is 5'TCCACCACCCTGTTGCTGTA3', and the amplified fragment is 449bp. The total volume of the PCR reaction system is 20 μl, the final concentration of the upstream and downstream primers is 1 μm, the concentration of Mg2 + is 1.5 mM, and 0.5 μl of reverse transcription product is added, Taq 1U. The PCR cycle parameters were 94 ° C pre-denaturation for 2 min, 94 ° C for 20 s, 61 ° C for 30 s, 72 ° C for 20 s, 22 cycles, and finally 72 ° C for 2 min. 2% agarose gel (Sigma) electrophoresis.5.Western blot method was used to detect the expression of fibronectin in HepG2 cells, HepG2.2.15 cells and HepG2.2.15 cells treated with drugsAfter HepG2 cells, HepG2.2.15 cells are overgrown or HepG2.2.15 cells are added on the 8th day, the culture medium is aspirated, washed twice with PBS, and the total protein of the cells is extracted by RIPA-PICT (Pharmacia) method. Each extract was adjusted to the same concentration. 30-50μg total protein per well, after 10% polyacrylamide gel electrophoresis, the protein was transferred to nitrocellulose membrane (PROTRAN BA-S 83 Reinforced NC, Schleicher & Schuell) by semi-dry transfer method. Blocked overnight at 4 ° C. Blocking fluid composition: 10% skimmed milk powder, 1 × TBST. Then combine with mouse anti-human fibronectin antibody (Santa) or mouse anti-human β-actin antibody (Sigma) for 1h, wash the membrane 3 times with TBST, each time for 10min, and then add anti-mouse secondary antibody labeled with horseradish peroxidase (Zhongshan) ) Combined with 1h, TBST washed the membrane 3 times, each time for 10min, the color developed by ECL (Pharmacia) color development system, and exposed by X-ray film.result1.Fibronectin mRNA expression in HepG2 cells, HepG2.2.15 cells and drug-treated HepG2.2.15 cellsAfter the same amount of HepG2 cells and HepG2.2.15 cells total RNA reverse transcription PCR, 2% agarose gel electrophoresis was used to detect the expression, and the housekeeping gene GAPDH was used as the internal standard. 
The results are shown in Figure 1. The expression of fibronectin mRNA was almost undetectable in HepG2 cells, while the expression level in HepG2.2.15 cells was close to that of GAPDH. After treatment with 25 μmol / L lamivudine and 250 μg / ml IBE5, fibronectin mRNA expression in HepG2.2.15 cells was significantly down-regulated. However, the expression of fibronectin mRNA in HepG2.2.15 cells was only slightly decreased after treatment with 1 μM adefovir. In Figure 1, control represents the relative expression of fibronectin mRNA in control cells; lamivudine, adefovir, and IBE5 represent the relative expression of fibronectin mRNA in the three drug treatment groups; HepG2, HepG2.2.15 represent fibronectin mRNA in HepG2, Relative expression in HepG2.2.15 cells.2.Fibronectin protein expression in HepG2 cells, HepG2.2.15 cells and drug-treated HepG2.2.15 cellsFigure 2 shows that the relative expression of fibronectin protein is higher in HepG2.2.15 cells, while the expression of Fibronectin protein is almost undetectable in HepG2 cells. Fibronectin protein was significantly down-regulated after treatment with 25 μmol / L lamivudine and 250 μg / ml IBE5 in HepG2.2.15 cells, and the expression of Fibronectin protein in HepG2.2.15 cells was slightly down-regulated after 1 μmol / L adefovir treatment.in conclusionFibronectin is up-regulated after HBV infection and significantly down-regulated after drug intervention, so it may become a new drug target for the treatment and prevention of HBV-related diseases.Example 2Materials and Methods1.Design and synthesis of S-ASODNThe nucleic acid sequence database in GeneBank was searched, and the fibronectin mRNA reference sequence X02761 published by NCBI was selected. Through computer-aided design based on multiple predictive RNA secondary structure, 5 unstable stem-loop structures were selected as the targets of ASODN . By comparing with the GeneBank online blast sequence, the selected target sequences have good specificity and will not interfere with the expression of other normal genes in humans (see Figure 16). All oligonucleotides were synthesized using an ABI8909 automatic DNA synthesizer and thio-modified during the synthesis. The process is as follows: thio reagent (Beaucage Regent, Transgenomic) was dissolved in anhydrous acetonitrile to a final concentration of 1g / 100ml , Placed in the AUX position of the DNA synthesizer, using the DNA vulcanization program provided on the synthesizer to automatically synthesize thio oligonucleotides. After the synthesis, concentrated ammonia water was cut and deprotected at 55 ℃ for 15 hours, then purified by Micro PureP reverse phase purification column (Oligo Prep OP120, SAVANT), quantified by ultraviolet after vacuum drying, and stored at -20 ℃ for future use.2.ASODN transfectionHep2.2.15 cells were cultured in a MEM culture solution containing 10% fetal bovine serum (Gibco) and 380 μg / ml in a 5% CO 2 incubator at 37 ° C. Observe that the cells are growing well. After cultivating to the logarithmic growth phase, inoculate Hep2.2.15 cells in a 6-well plate, 1.5 × 105 cells / well, incubate at 37 ° C, 5% CO2 for 48-72 hours, and grow to 40-60% After the cells were confluent, in the serum-free state, the liposome Lipofectin (Invitrogen, 1 mg / ml) reagent was used for transfection according to the instructions. The concentrations of antisense oligonucleotides were 0.2 μM, 0.4 μM, 0.8 μM and cell control and liposome control were set. 
22 hours after transfection, change to normal cell culture medium (MEM cell culture medium containing 10% FBS), and incubate at 37 ° C and 5% CO2 for 72 hours. Collect the cell culture fluid and store at -20 ° C until use. Extract the total RNA of Hep2.2.15 cells (Trizol RNA extraction kit, Invitrogen) and the total protein of Hep2.2.15 cells, and store at -20 ℃ until use.3.Detection of the inhibitory effect of ASODN on the secretion of HBV DNA from Hep2.2.15 cellsTake the cell culture fluid, boil at 100 ° C for 15min, centrifuge at 12000r / min for 10min, and take the supernatant as a template for fluorescent quantitative PCR. The experimental process is operated according to the method of quantitative detection of HBV by the composite probe PCR established by our laboratory (He Yunyan, Wang Shengqi, Chinese Journal of Hepatology, 2001, V9N6: 376-377). The primer sequences for quantitative detection of HBV DNA are: P1: 5'-GGAGTA TGG TATT CGC ACT CCT C-3 '; P2: 5'-TTG TTT TTG TTAG GGGG ACC TGC CT-3'; fluorescent probe sequence F: 5 ' -ACT, TCC, GGA, AAC, TAC, TGT, TAG, ACG, A-3 '; quenching probe sequence Q: 5'-GTA, GTT, TCCGGA, AGT-3'. 20μl reaction system contains 200nmol / L primer, 670nmol / L fluorescent probe F, 180nmol / L quenching probe, 200μmol / LdNTP, 4.0mmol / L Mg2 +, 2μl template, after mixing, each reaction tube reacts with the standard curve The tubes are put together in the iCycle automatic PCR instrument for amplification. The amplification conditions are: 94 ℃ 30s, 55 ℃ 30s, 72 ℃ 30s, a total of 40 cycles. After the reaction is completed, the computer automatically calculates the quantitative results.4.Detection of the inhibitory effect of ASODN on the secretion of HBsAg and HBeAg from Hep2.2.15 cellsTake the collected cell culture fluid, and follow the operation steps of HBsAg and HBeAg ELISA detection kit (Huamei). Take 50μl of cell culture solution into a 96-well plate coated with hepatitis B surface antigen or e-antigen, add 50μl of enzyme-labeled conjugate corresponding to surface antigen or e-antigen to each well, and incubate at 37 ℃ for 1h, then wash the plate 5 times with washing solution. Add 50μl of chromogenic solution A, then add 50μl of chromogenic solution B, incubate at 37 ° C for 15min, add 50μl of stop solution, and measure the 1s absorbance value A at 450nm on a multi-label detection enzyme-linked immunoassay detector (VICTORTM Wallac 1420 Multilabel Counter, Wallac) The inhibition rate was calculated according to IR = (A450 control-A450 administration) / A450 control.5.Detection of the inhibitory effect of ASODN on fibronectin protein in Hep2.2.15 cellsRIPA-PICT (Pharmacia) extracts the total protein of the cell. After the protein is quantified, each extract is adjusted to the same concentration. Then conduct a western blot experiment. For the experimental method, refer to Example 1.result1.Inhibition of ASODN on HBV DNA secretion from Hep2.2.15 cellsFive antisense sequences FN1-FN5 targeting ASGPR mRNA were administered to Hep2.2.15 cells with 0.8μM administration respectively, and cell control, liposome control and positive drug lamivudine control groups were established at the same time. 
Cell culture was collected after 72 hours Liquid, fluorescence quantitative PCR to detect the HBV DNA secretion number in each group of cells, according to the formula IR = (C dosing group-C cell control group) / C cell control group to calculate the inhibition rate, IR in the formula represents the inhibition rate, C represents The HBV DNA copy number in the detected cell culture fluid. Repeat the experiment three times to calculate the average inhibition rate. In FIG. 3, C represents the cell control group, and LIP represents the liposome control group, and its inhibition rate on HBVDNA is close to 0, and there is no obvious inhibition effect. LAM10 and LAM25 represent the inhibitory effects of 10 μM and 25 μM lamivudine on the secretion of HBV DNA by HepG2.2.15 cells. FN1-FN5 represents the inhibitory effect of five antisense sequences on HBVDNA at 0.8 μM. From the figure, it can be shown that the inhibitory effect of FN1 and FN5 on HBVDNA is greater than or equal to 25 μM lamivudine on HBVDNA in cell culture medium .2.Inhibition of ASODN on HBsAg secretion from Hep2.2.15 cellsFive antisense sequences FN1-FN5 targeting ASGPR mRNA were used to treat HepG2.2.15 cells with 0.8 μM dosing. Cell control, liposome control and positive drug lamivudine control group were also established. Cell culture was collected 72 hours later Solution, HBsAg ELISA detection kit to detect the expression of HBsAg in Hep2.2.15 cell culture medium, according to the formula IR = (A dosing group-A cell control group) / A cell control group to calculate the inhibition rate, IR in the formula represents the inhibition rate, A represents the absorbance of each detection well at 450nm. Repeat the experiment three times to calculate the average value of the inhibition rate, see Figure 4. In the figure, LIP represents the inhibitory effect of the liposome control group on the secretion of HBsAg by cells, showing no significant inhibitory effect. LAM represents the inhibitory effect of 25 μM lamivudine on the secretion of HBsAg by cells, the inhibition rate is greater than 50%. FN1-FN5 represents the inhibitory effect of five antisense sequences on the secretion of HBsAg by cells, in which the average inhibition rate of FN1 and FN5 is greater than 50%, and greater than 25 μM lamivudine inhibits the secretion of HBsAg by cells.3.Inhibitory effect of ASODN on HBeAg secretion by Hep2.2.15 cellsFive antisense sequences FN1-FN5 targeting ASGPR mRNA were administered to Hep2.2.15 cells with 0.8 μM administration respectively. After 72 hours, the cell culture fluid was collected. The HBeAg ELISA test kit was used to detect the expression of HBeAg in Hep2.2.15 cell culture fluid. The inhibition rate is calculated according to the formula IR = (A dosing group-A cell control group) / A cell control group. In the formula, IR represents the inhibition rate, and A represents the absorbance of each detection well at 450 nm. Repeat the experiment three times to calculate the average inhibition rate, see Figure 5. In the figure, LIP represents the inhibitory effect of the liposome control group on the secretion of HBeAg by cells, showing no significant inhibitory effect. LAM represents the inhibitory effect of 25 μM lamivudine on the secretion of HBsAg by cells, the inhibition rate is greater than 50%. FN1-FN5 represents the inhibitory effect of five antisense sequences on the secretion of HBsAg by cells. 
The average inhibition rate of FN1 and FN5 is greater than 50%, and it is close to the inhibition rate of 25 μM lamivudine on HBsAg secretion by cells.4.Inhibitory effect of ASODN on fibronectin proteinFive antisense sequences FN1-FN5 targeting fibronectin mRNA were administered to Hep2.2.15 cells with 0.8 μM administration respectively. The total protein was extracted 72 hours later. After 10% polyacrylamide gel electrophoresis, the protein was transferred by semi-dry transfer method On the nitrocellulose membrane, using βactin (43kD) as a control, the effect of thioantisense oligonucleotide on the expression of target protein ASGPR was detected by Western Blot method. As can be seen from Figure 6, FN1 and FN5 can significantly inhibit the expression of fibronectin protein, FN2 and FN4 have no significant inhibitory effect on fibronectin protein, and FN3 has no inhibitory effect on fibronectin protein.in conclusion1.Inhibition of fibronectin expression can inhibit HBV replication and expression in HepG2.2.15 cells2.Among the five thiofibronectin antisense oligonucleotide sequences, FN1 and FN5 have obvious effects on inhibiting HBV replication and expression in HepG2.2.15 cells.Example ThreeMaterials and Methods1.Design and synthesis of ASODNAccording to the results of Example 2, the antisense oligonucleotide sequences FN1 and FN5 with better effect are selected and their sense oligonucleotides are synthesized (see FIG. 17). The synthesis of all thiooligonucleotides is the same as in Example 2.2.Cytotoxicity detection of thioantisense oligonucleotide sequences FN1 and FN5Hep2.2.15 cells were cultured in MEM medium containing 10% fetal bovine serum (Gibco) and 380 μg / ml in a 5% CO 2 incubator at 37 ° C. Observe that the cells are growing well. After cultivating to the logarithmic growth phase, inoculate a 96-well plate, 0.75 × 105 cells / well, incubate at 37 ° C, 5% CO2 for 48-72 hours, and wait until the cells grow to 40-60% confluence. In the serum-free state, lipofectin (Invitrogen, 1 mg / ml) reagent was used and transfection was performed according to the instructions. The concentration of antisense oligonucleotides FN1 and FN5 were 0.2 μM, 0.4 μM, 0.8 μM, 1.6 μM, 10 μM and cell controls were set, and three wells were repeated for each concentration. 22 hours after transfection, change to normal cell culture medium (MEM cell culture medium containing 10% FBS), incubate at 37 ° C, 5% CO2 for 72 hours, and refer to the instruction manual of MTS (Promega). Add MTS 20μl / 100μl culture medium to each well Incubate at 37 ° C in the dark for 1.5 hours, and detect the absorbance at 490nm in a multi-label detection enzyme-linked immunoassay detector. At the same time, the cell morphology was observed under an inverted microscope daily after transfection with ASODN.3.Study on the specificity of thioantisense oligonucleotide sequences FN1 and FN5The thio antisense oligonucleotide sequences FN1, FN5 and their sense sequences FN1s, FN5s were transfected into HepG2.2.15 cells at 0.8 μM, respectively, and the transfection method was the same as in Example 2. 
Collect the culture broth, extract total RNA and total protein, and refer to the methods of Examples 1 and 2 to detect the inhibitory effect of FN1, FN5 and its sense sequence on HBsAg and HBeAg in cell culture fluid, FN1, FN5 and its sense sequence on fibronectin Protein inhibition.4.Dose-dependent detection of thioantisense oligonucleotide sequences FN1 and FN5The thio antisense oligonucleotide sequences FN1 and FN5 were transfected into HepG2.2.15 cells at 0.2 μM, 0.4 μM and 0.8 μM, and the transfection method was the same as in Example 1. Collect the culture solution, extract the total protein, and refer to the methods in Examples 1 and 2 to detect the inhibitory effects of FN1 and FN5 on HBV DNA, HBsAg, and HBeAg in the cell culture fluid and the inhibition of fibronectin protein.result1.Effects of thioantisense oligonucleotides FN1 and FN5 on the proliferation of HepG2.2.15 cellsThioantisense oligonucleotides FN1, FN5 were treated with 0.2μM, 0.4μM, 0.8μM, 1.6μM, 10μM during cell treatment. Cell morphology was observed daily under an inverted microscope. No significant changes. The results of cell proliferation experiments are shown in Figure 7 and Figure 8. The figure shows that in the range of 0.2μM-10μM, the OD value of each dose group of FN1 and FN5 is basically the same as the normal cell control.2.Specificity of thioantisense oligonucleotide sequences FN1, FN5The thio-antisense oligonucleotide sequences FN1, FN5 and its sense sequences FN1s, FN5s were transfected into HepG2.2.15 cells at 0.8 μM, the culture medium was collected, and the expression of HBsAg and HBeAg in the culture medium was detected by ELISA detection kit. The content of HBsAg and HBeAg in the control cell culture solution is 100%, and the content of HBsAg and HBeAg in the experimental group is: A experimental group / A cell control group × 100%, where A represents the absorbance at 450 nm. The results are shown in Figures 10 and 11. The figure shows that there is no significant difference in HBsAg, HBeAg and normal cell controls in the cell culture medium of the FN1 and FN5 sense thio oligonucleotide sequences (FN1s, FN5s) treatment group (P≥0.05). The thio antisense oligonucleotide sequences FN1 and FN5 have a good inhibitory effect (inhibition rate ≥50%). Similarly, 72 hours after transfection of HepG2.2.15 cells with 0.8 μM of thioantisense oligonucleotide sequences FN1, FN5 and their sense sequences FN1s and FN5s, respectively, the total protein was extracted. Western detection results showed that FN1s and FN5s treated cell groups The expression of fibronectin protein is basically the same as that of the control group, while FN1 and FN5 show a good effect of inhibiting the expression of fibronectin protein (see Figure 9).3.The dose-dependent effect of thioantisense oligonucleotide sequences FN1, FN5 on HBV replication and expression.Thioantisense oligonucleotide sequences FN1, FN5 were treated with 0μM, 0.2μM, 0.4μM, 0.8μM treatment HepG2.2.15 cells, the culture medium was collected, fluorescence quantitative PCR detection HBV DNA copy number in the culture medium, ELISA The detection kit detects the expression of HBsAg and HBeAg in the culture solution, and calculates the inhibition rate according to the formula in Example 2. Repeat the experiment three times to calculate the average inhibition rate. 
As shown in Figures 14 and 15, the inhibitory effects of FN1 and FN5 on HBVDNA, HBsAg and HBeAg in cell culture fluid decrease with the increase of the concentration of thioantisense oligonucleotide sequences FN1 and FN5, showing a significant dose Dependencies. Similarly, the total protein was extracted 72 hours after the cells were treated with 0 μM, 0.2 μM, 0.4 μM, 0.8 μM, FN1, and FN5. Western blot technique was used to detect the expression of fibronectin protein. The results are shown in Figures 12 and 13, as FN1, FN5 As the concentration increased, the expression of fibronectin protein gradually decreased. At 0.8 μM, the expression of fibronectin protein was almost undetectable in the FN5-treated cell group.in conclusion1.The thio-antisense oligonucleotide sequences FN1 and FN5 have no obvious effect on the proliferation of HepG2.2.15 cells2.Thioantisense oligonucleotide sequences FN1 and FN5 have sequence-specific effects on inhibiting HBV replication and expression in HepG2.2.15 cells3.In the concentration range of 0.2-0.8μmol / L, FN1 and FN5 have specific dose-dependent inhibitory activity against HBVDNA, HBsAg, HBeAg and fibronectin proteins. Sequence Listing<110> Institute of Radiation and Radiation Medicine, Academy of Military Medical Sciences, Chinese People's Liberation Army<120> Structure and use of antisense oligonucleotide to inhibit fibronectin expression<130><160>5<170> Patent version 3.1<210>1<211>20<212> DNA<213><400>1gctcat ctccctcctcactcc tc<210>2<211>20<212> DNA<213><400>2ttc, gtt, ccc, act, cat, ctc, ca, ca ,,,,,,,,,,,,,,,,,,, 20<210>3<211>20<212> DNA<213><400>3ctg ggg ctg aac cat ttg ct ct c<210>4<211>20<212> DNA<213><400>4gcc, ttc, aat, agg, cat, ttc, tg, tg, tg, tg, tg, tg, tg, tg, tg, tg, tg, tg, tc, tg, tg, tc, tg, tc, tg, tc, tg, tc, tg, tc, tg, tc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tg, tcc, tcc<210>5<211>20<212> DNA<213><400>5gac, ggt, ccc, act, tct, ctc, ca ,,,,,,,,,,,,,,,,,,,,, 20 |
A technique to provide power management for multiple dice. The technique provides for determining for each respective die of the multiple dice, power consumption for operating each respective die; and generating a respective signal from each respective die that corresponds to the power consumption of each respective die. The technique further provides, for each respective signal, driving a respective open-drain transistor to conduct, in which an output of each open-drain transistor connects to the common node and the common node connects to a reference voltage, to change a voltage of a common node corresponding to the respective signal; and utilizing the voltage of the common node to indicate total power consumption of the dice. |
CLAIMSWhat is claimed is:1. A method comprising: determining for each respective die of a plurality of dice, power consumption for operating each respective die; generating a respective signal from each respective die that corresponds to the power consumption of each respective die; for each respective signal, driving a respective open-drain transistor to conduct, an output of each open-drain transistor coupled to a common node, the common node coupled to a reference voltage, wherein driving one or more of the open- drain transistors to conduct changes a voltage of the common node corresponding to the respective signals; and utilizing a cumulative change of the voltage of the common node from the reference voltage to indicate total power consumption of the plurality of dice.2. The method of claim 1, further comprising comparing the voltage of the common node to a threshold level for power consumption set for the plurality of dice.3. The method of claim 2, wherein the threshold level is peak power consumption for the plurality of dice.4. The method of claim 2, further comprising, for at least one die, performing a power consuming operation in response to determining that the voltage of the common node does not exceed the threshold level.5. The method of claim 1, wherein the reference voltage is a power supply voltage for the plurality of dice.6. The method of claim 1, wherein each respective signal drives a respective plurality of open-drain transistors.7. The method of claim 6, wherein each respective signal drives the open-drain transistors into saturation when conducting.8. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, are capable of causing the processing device to perform operations comprising: determining for each respective die of a plurality of dice, power consumption for operating each respective die; generating a respective signal from each respective die that corresponds to the power consumption of each respective die; for each respective signal, driving a respective open-drain transistor to conduct, an output of each open-drain transistor coupled to a common node, the common node coupled to a reference voltage, wherein driving one or more of the open- drain transistors to conduct changes a voltage of the common node corresponding to the respective signals; and utilizing a cumulative change of the voltage of the common node from the reference voltage to indicate total power consumption of the plurality of dice.9. The non-transitory computer-readable storage medium of claim 8, wherein the instructions are capable of further causing the processing device to perform operations comprising comparing the voltage of the common node to a threshold level for power consumption set for the plurality of dice.10. The non-transitory computer-readable storage medium of claim 9, wherein the threshold level is peak power consumption for the plurality of dice.11. The non-transitory computer-readable storage medium of claim 9, wherein the instructions are capable of further causing the processing device to perform operations comprising, for at least one die, performing a power consuming operation in response to determining that the voltage of the common node does not exceed the threshold level.12. The non-transitory computer-readable storage medium of claim 8, wherein the reference voltage is a power supply voltage for the plurality of dice.13. 
A system comprising: a plurality of dice, in which each die of the plurality of dice contains one or more non volatile memory components, wherein each die includes a power management logic to: determine power consumption for operating each respective die;
generate a respective signal that corresponds to the power consumption of the die; and drive a respective open-drain transistor to conduct to change a voltage of a common node corresponding to the respective signal; and a pull-up resistor to couple the common node to a reference voltage, wherein the voltage of the common node indicates total power consumption of the plurality of dice.14. The system of claim 13, wherein the plurality of dice compare the voltage of the common node to a threshold level for power consumption set for the plurality of dice.15. The system of claim 14, wherein the threshold level is peak power consumption for the plurality of dice.16. The system of claim 14, wherein at least one die performs a power consuming operation in response to determining that the voltage of the common node does not exceed the threshold level.17. The system of claim 13, wherein the reference voltage is a power supply voltage for the plurality of dice.18. The system of claim 13, wherein each respective signal drives a respective plurality of open-drain transistors.19. The system of claim 18, wherein each respective signal drives the open-drain transistors into saturation when conducting.20. The system of claim 13, wherein each power management logic adjusts the respective signal for supply voltage and temperature fluctuations. |
OPEN-DRAIN TRANSISTOR MONITORING CIRCUIT IN A MULTI-CHIP PACKAGE TO CONTROL POWERTECHNICAL FIELD[0001] The present disclosure generally relates to die power management, and more specifically, relates to power management of a multiple-chip package.BACKGROUND ART[0002] A memory sub-system can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory subsystem to store data at the memory components and to retrieve data from the memory components.BRIEF DESCRIPTION OF THE DRAWINGS[0003] The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.[0004] FIG. 1 illustrates an example computing environment that includes a memory subsystem in accordance with some embodiments of the present disclosure.[0005] FIG. 2 is a flow diagram of an example method to manage power consumption for multiple dice in a package, operating from a power network, and utilizing a charge storage device at a common node, in accordance with some embodiments of the present disclosure. [0006] FIG. 3 is a flow diagram of an example method to manage power consumption for one die of multiple dice, by monitoring a common node that utilizes a charge storage device to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure.[0007] FIG. 4 is a block diagram of an example package that contains multiple dice, each die having a power management logic to perform the methods of FIG. 2 and/or FIG. 3, in accordance with some embodiments of the present disclosure.
[0008] FIG. 5 is a flow diagram of an example method to manage power consumption for multiple dice in a package, operating from a power network, and utilizing open-drain transistors connected to a common node, in accordance with some embodiments of the present disclosure. [0009] FIG. 6 is a flow diagram of an example method to manage power consumption for one die of multiple dice, by monitoring a common node having open-drain transistors to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure.[0010] FIG. 7 is a block diagram of an example package that contains multiple dice, each die having a power management logic to perform the methods of FIG. 5 and/or FIG. 6, in accordance with some embodiments of the present disclosure.[0011] FIG. 8 is a flow diagram of an example method to manage power consumption for multiple dice in a package, operating from a power network, and utilizing current summation at a common node, in accordance with some embodiments of the present disclosure.[0012] FIG. 9 is a flow diagram of an example method to manage power consumption for one die of multiple dice, by monitoring a common node that utilizes current summation to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure.[0013] FIG. 10 is a block diagram of an example package that contains multiple dice, each die having a power management logic to perform the methods of FIG. 8 and/or FIG. 9, in accordance with some embodiments of the present disclosure.[0014] FIG. 11 is a flow diagram of an example method to manage power consumption for multiple dice in a package and operating from a power network, by monitoring a fluctuation of a supply voltage at a common node, in accordance with some embodiments of the present disclosure.[0015] FIG. 12 is a flow diagram of an example method to manage power consumption for one die of multiple dice, by monitoring a fluctuation of a supply voltage at a common node to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure.[0016] FIG. 13 is a block diagram of an example package that contains multiple dice, each die having a power management logic to perform the methods of FIG. 11 and/or FIG. 12, in accordance with some embodiments of the present disclosure.[0017] FIG. 14 is a block diagram of an example computer system in which embodiments of the present disclosure may operate, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION[0018] Aspects of the present disclosure are directed to manage usage of power in a package having multiple chips or dice for a memory subsystem. An example of a memory subsystem is a memory module that is connected to a central processing unit (CPU) via a memory bus. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory subsystem that includes one or more memory devices. The memory devices can include, for example, non-volatile memory devices (e.g., NAND). Other types of memory devices, including volatile memory devices, are described in greater detail below in conjunction with FIG. 1. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.[0019] In one embodiment, multiple dice or chips (collectively referred to as “dice”) that are part of a power network implement the memory components. In one embodiment, the multiple dice reside in a single semiconductor or similar package, such as a System in Package or another three-dimensional integrated circuit package. For example, the package can contain stacked memory dice. The power network then provides the power to the package and cumulative power consumption is across the multiple dice. Because some operations, such as programming, erasing, and reading a memory component, are relatively high current operations, not all the dice within the package can perform them at the same time. Typically, the power system has a total power consumption limit (referred to as peak power) for the package and the operations by the dice in the package cannot cumulatively exceed this limit. As such, too many dice executing high current operations concurrently can result in power consumption exceeding the peak power consumption limit. A system can maintain a power consumption limit in the power network by limiting the number of dice performing the high current operations. One approach is to limit the number of active dice that can perform high current operations at any given time, based on the peak power consumption rating for the respective die. However, this approach has a disadvantage in that, during any given period for those dice selected to be active, the total actual power consumption for those active dice may not reach or approach the peak power limit. For example, the limit of concurrent high current operations can be based upon a worst-case scenario in which all dice are active. In those instances, in which less than all dice are active, or some operate with minimal power consumption, the power network has excess power capacity available but inefficiently blocks the higher current operations of other dice.
[0020] Aspects of the present disclosure address the above and other deficiencies by the dice within a package each providing an indication of their respective power consumption usage onto a shared common node (e.g., line, pin or terminal). The common node aggregates or accumulates the power consumption usage values to provide a total power consumption value for the package. Each die can then monitor the common node and determine if a higher current operation, if performed, would exceed the peak power consumption limit. If so, then a die can refrain from performing the higher current operation. If the operation does not exceed the peak power consumption limit, then that die can perform the operation. Each die can have predefined peak power consumption configured in the die or a controller can provide power consumption information to the memory component(s), (e.g., through a set command sequence). In this manner, each die stores or otherwise has access to the target system’s specific power limit. The die indicates the higher power usage on the common node, so that others monitoring the common node are aware of the added usage. The below description provides for a number of different approaches or embodiments to aggregate individual power usage onto the common node and individual dice monitoring the common node to schedule their individual higher current operations without exceeding the total power consumption limit or some other threshold. In this manner, all dice can efficiently use the power network to perform operations. The examples below refer to “die” and “dice,” but the use of “die” and “dice” are interchangeable with “chip” and “chips.”[0021] FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as memory components 112A to 112N (also referred to as “memory devices”). The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such. A memory sub system 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and a non-volatile dual in-line memory module (NVDIMM).[0022] The computing environment 100 can include a host system 120 that is coupled to one or more memory sub-systems 110. In some embodiments, the host system 120 is coupled to different types of memory sub-system 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory sub-system 110. The host system 120 uses the memory sub-system 110,
for example, to write data to the memory sub-system 110 and read data from the memory sub system 110. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.[0023] The host system 120 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory subsystem 110 so that the host system 120 can read data from or write data to the memory subsystem 110. The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory subsystem 110 is coupled with the host system 120 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120.[0024] The memory components 112A to 112N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes a negative-and (NAND) type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells such as single level cells (SLCs), multi-level cells (MLCs), triple-level cells (TLCs), or quad-level cells (QLCs). In some embodiments, a particular memory component can include both a low bit density portion (e.g., an SLC portion) and a high bit density portion (e.g., an MLC portion) of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A to 112N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM),
negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data.[0025] The memory system controller 115 (hereinafter referred to as “controller”) can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory subsystem 110 may not include a controller 115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).[0026] In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block
address that are associated with the memory components 112A to 112N. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 112A to 112N as well as convert responses associated with the memory components 112A to 112N into information for the host system 120.[0027] The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 112A to 112N. In one embodiment, each memory component 112 includes a processor (or similar circuitry) and local memory. In one embodiment, each memory component 112 represents memory components constructed on a single die (or chip). The description below, pertaining to the subsequent Figures, references die 112 or dice 112. Such designation of “112” corresponds to one or more of the memory components 112A-112N in some embodiments.Thus, “die 112” and “dice 112” refer to one or more of the memory components 112A- 112N in the description below. In one embodiment, memory components 112A to 112N reside within a same housing or package, such as by stacking the dice (or chips).[0028] The memory subsystem 110 includes power management logic (PML) (also referred to as a power manager) 113 in each die 112 that can manage power consumption within the respective die 112. The dice 112A to 112N can reside in a single package and can operate by deriving power from a power network. The PML 113 of each die 112 connects to a common node 114 (e.g., line, pin, terminal, etc.) to transmit an indication of its die’s power consumption, in which aggregation or accumulation of the individual indications from the dice occurs at the common node to provide an indication of total power consumption of all of the dice 112. Each PML 113 also monitors the common node 114 to determine the current state or value of total power consumption for the dice 112. Each PML 113 then can use the monitored indication to determine if its die’ s planned memory operation will exceed or not exceed the total power consumption, or some threshold level. Each die can have predefined total power consumption or peak power consumption information configured in the die. Alternatively, the controller 115 can provide the power consumption information to the memory component(s) 112A-112N, (e.g., through a set command sequence), so that each die stores or otherwise has access to the target system’s specific power limit after the host system 120 communicates with the memory subsystem 110. The controller can pass this information to the memory component(s) 112A-
112N. The description below provides further details with regard to different embodiments for the operation of the PML 113 and configurations of the common node 114.[0029] FIG. 2 is a flow diagram of an example method 200 to manage power consumption for multiple dice in a package, operating from a power network, and utilizing a charge storage device at a common node, in accordance with some embodiments of the present disclosure. The method 200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instmctions ran or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 200 (the PML 113 and other processing logic collectively referred to as “processing device” below). In some embodiments, the circuitry of FIG. 4 performs the method 200. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0030] At operation 201, the processing device (e.g., the PML 113) of each die 112 determines an amount of power consumption for operating its respective die. The power consumption for the die 112 is dependent on the activity of the die 112. When in a higher current state, such as for performing a memory operation, the power consumption is higher than when in a non activity state. The processing device can use a variety of techniques to determine the power consumption for its die 112. For example, the processing device determines the power consumption for the next operation (if any) for the die 112 to perform. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. In one technique, current measurement, e.g., supply current, provides an indication of the die’s power consumption.[0031] At operation 202, each processing device generates a signal that corresponds to the power consumption for its die 112. For example, the processing device can generate one of a variety of signals corresponding to power consumption, including voltage or current, analog or digital. In one embodiment, the processing device generates the signal using a digital-to-analog converter to convert a digital value to an analog value to indicate power consumption. In one embodiment, the signal is a current signal. In one embodiment, the current signal can be the supply current (or a fraction of the supply current) drawn by the die 112 to indicate the power
consumption. The amount of the supply current drawn by the die 112 corresponds to the determined power consumption.[0032] At operation 203, each processing device converts the signal to an analog signal to drive the common node 114. The conversion performed depends on the type of the signal used to indicate the die’s power consumption. For the embodiment that employs a current signal, the conversion is to a voltage. Thus, the value of the current converts to voltage when driven onto the common node 114. In one embodiment a transimpedance amplifier performs the current-to- voltage conversion for driving the common node 114. Thus, the analog voltage generated corresponds to an indication of that die’s power consumption value. With each processing device generating its respective die’s power consumption, the resultant voltage driven onto the common node by all of the dice 112 corresponds to a value indicative of the total power consumption by the dice in the package.[0033] At operation 204, a charge storage device, such as a capacitor, accumulates the charge driven onto common node 114 by each die 112. Therefore, the resulting voltage on the capacitor is an indication of the power consumption by all of the dice 112 in the package.[0034] FIG. 3 is a flow diagram of an example method 300 to manage power consumption for one die of multiple dice, by monitoring a common node that utilizes a charge storage device to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions ran or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 300.In some embodiments, the circuitry of FIG. 4 performs the method 300. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0035] At operation 301, each processing device (e.g., PML 113) of the dice 112 contained in the package monitors the common node 114 shared by the dice. For example, each processing device monitors the accumulated voltage on the common node 114 that indicates total power consumption for all of the dice 112 in the package. In some embodiments, the method 200 of FIG. 2 provides the technique for driving an analog voltage on to the common node 114 to
charge a storage device, such as a capacitor. As described above, the charge storage device accumulates the charges at the common node.[0036] At operation 302, the processing device of one die utilizes the accumulated voltage of the common node 114 to determine the indicated total power consumption for the dice 112 of the package. In one embodiment, the processing device converts the analog voltage of the common node 114 to a digital value by use of an analog-to-digital converter.[0037] At operation 303, the processing device determines if an operation that the die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package operating on the power network. Because the die 112 has indicated its current power consumption (e.g., the method 200) onto the common node 114, along with indications of other dice, the processing device knows the total power consumption for the package. The processing device can determine if the required power for the intended operation, when considered with the aggregated value on the common node 114, will exceed the threshold. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. If the potential increase in the power consumption can result in the total power exceeding the threshold value for the package, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value for the package, the die 112 performs the operation. When performing the operation, the processing device of the die 112 updates the die’s power consumption indication onto the common node 114 to reflect the new power consumption value for the die in performing the operation. In some embodiments, the processing device of the die 112 updates the die’s power consumption indication onto the common node 114 prior to performing the operation, in order to advertise or reserve power for the operation to prevent another die from executing another power consuming operation.[0038] FIG. 4 is a block diagram of an example package 400 that contains multiple dice, each die having a PML to perform the methods of FIG. 2 and/or FIG. 3. The package 400 shows only three dice, however, the actual number present can vary depending on the design. Each die 112 includes a sequencer 115, along with the PML 113. Each sequencer 115 is responsible for sequencing the various memory operations performed within its respective die 112. Such operations include the scheduling and performing of read, program, and erase operations related to memory cells. The PML 113 manages the power-related operations for the die 112. In some embodiments, the PML 113 and sequencer 115 are separate components. In some embodiments,
the PML 113 and sequencer 115 are a combined component. In some embodiments, the sequencers 115 of the dice 112 can communicate with one another. Each PML 113 couples to the common node 114.[0039] In operation, the PML 113 of each die 112 determines an amount of power consumption for operating its respective die. In some embodiments, the sequencer 115 can provide information to make the determination about power consumption. The power consumption for the die 112 is dependent on the activity of the die. When in a higher current state, such as for performing a program, read, or erase operation, the power consumption is higher than when in a non-activity state. The PML 113 can use a variety of techniques to determine the power consumption for its die 112. In one technique, an amount of current (e.g., supply current) drawn by the die can provide an indication of die’s power consumption.[0040] Each PML 113 generates a signal on line 406 that corresponds to the power consumption for its die 112. The PML 113 can generate one of a variety of signals corresponding to power consumption, including voltage or current and analog or digital. In one embodiment, the PML 113 generates the signal on line 406 using a digital-to-analog converter to convert a digital value to an analog value. In one embodiment, the signal on line 406 is a current signal. In one embodiment, the current signal can be the supply current (or a fraction of the supply current) drawn by the die 112 to indicate the power consumption. The amount of the current drawn by the die 112 corresponds to the power consumption of the die 112. In some embodiments, the sequencer 115 can provide the signal of line 406 to the PML 113.[0041] Each PML 113 converts the signal on line 406 to an analog signal on line 407 to drive the common node 114. The conversion performed depends on the type of the signal used to indicate the die’s power consumption. Lor the embodiment that employs a current signal, the conversion is from current to a voltage. Thus, the value of the current converts to voltage when driven on to the common node 114. In the shown embodiment a transimpedance amplifier 401 performs the current-to-voltage conversion for driving the common node 114. The analog voltage generated on line 407 corresponds to an indication of the die’s power consumption value. With each PML 113 generating its respective die’s power consumption, the resultant voltage driven onto the common node 114 by all of the dice 112 corresponds to a value indicative of the total power consumption by the dice in the package 400.[0042] The package 400 includes a charge storage device, shown as a capacitor 403, to accumulate the charge driven onto common node 114. The resulting voltage on the capacitor 403 provides an indication of the power consumption by all of the dice 112 in the package 400. A leakage resistor 404 in parallel with the capacitor 403 provides a discharge path for the
capacitor. Although the package includes both circuit components 403 and 404, one or both components 403, 404 can reside outside of the package. In some embodiments, a designated pin or terminal on each die 112 connects line 407 to the common node 114.[0043] To manage power consumption for each die 112, each PML 113 monitors the accumulated voltage on the common node 114. In some embodiments, a designated pin or terminal on each die connects line 408 to the common node 114 to monitor the voltage on the common node 114. A PML 113 of one die utilizes the accumulated voltage of the common node 114 to determine the indicated total power consumption for the dice 112 of the package 400. In one embodiment, the PML 113 converts the analog voltage on line 408 to a digital value by use of an analog-to-digital converter 402 and outputs the digital signal on line 409. The PML 113 can pass this information on line 409 to the sequencer 115.[0044] The PML 113 determines if the next operation that the die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package 400 operating on the power network. Because the die 112 has indicated its current power consumption onto the common node 114, along with indications of other dice, the PML 113 knows the total power consumption for the package. The PML 113 can determine if the required power for the next operation, when considered with the aggregated value on the common node 114, will exceed the threshold. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value, the die 112 performs the operation. When performing the operation, the PML 113 of the die 112 updates the die’s power consumption indication on the common node 114 to reflect the new power consumption value for the die in performing the operation. In some embodiments, the PML 113 of the die 112 updates the die’s power consumption indication onto the common node 114 prior to performing the operation, in order to advertise or reserve power for the operation to prevent another die from executing another power consuming operation. In some embodiments, the sequencer 115 can perform some or all of the operational functions, or aid in the performance of the operational functions of the die 112.[0045] The circuitry of FIG. 4 allows for analog control over the power management of the dice 112 in the package 400. Analog voltage and/or current monitoring at the common node 114 allows each die 112 to determine the current power consumption of the package 400, so that each individual die can decide on which memory operation(s) it can currently perform, based on
the monitored total power consumption value. When a die 112 cannot acquire adequate power to perform an operation, in some embodiments, the die 112 can delay performing the operation until power is available or perform a lower-power version of the operation. Furthermore, the circuitry of FIG. 4 can contain compensation devices/circuits/logic to adjust for process, temperature and/or voltage (PVT) fluctuations. PVT compensation allows for accurate performance of the PML 113 over fluctuating PVT conditions.[0046] FIG. 5 is a flow diagram of an example method 500 to manage power consumption for multiple dice in a package and operating from a power network and utilizing open-drain transistors connected to a common node, in accordance with some embodiments of the present disclosure. The method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 500 (the PML 113 and other processing logic collectively referred to as “processing device” below). In some embodiments, the circuitry of FIG. 7 performs the method 500. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0047] At operation 501, the processing device (e.g., the PML 113) of each die 112 determines an amount of power consumption for operating its respective die. The power consumption for the die 112 is dependent on the activity of the die 112. When in a higher current state, such as for performing a memory operation, the power consumption is higher than when in a non activity state. The processing device can use a variety of techniques to determine the power consumption for its die 112. For example, the processing device determines the power consumption for the next operation (if any) for the die 112 to perform. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. In one technique, current measurement, e.g., supply current, provides an indication of the die’s power consumption.[0048] At operation 502, each processing device generates a signal that corresponds to the power consumption for its die 112. For example, the processing device can generate one of a variety of signals corresponding to power consumption, including voltage or current and analog
or digital. In one embodiment, the processing device generates the signal using a digital-to- analog converter to convert a digital value to an analog value. In one embodiment, the signal is an analog voltage signal, which voltage value corresponds to the current drawn by that die. The amount of the supply current drawn by the die 112 corresponds to the determined power consumption.[0049] At operation 503, each processing device uses the analog voltage to drive a gate of an open-drain transistor, so that the drain voltage corresponds to the gate drive voltage. The drain connection is to the common node 114, so that the transistor drives the voltage variation onto the common node 114. In one embodiment, the drains of the transistors of each die 112 connect to the common node 114 and the common node 114 connects to a reference voltage, such as a supply voltage. When the transistors are in the off state, the common node 114 is at the reference voltage. However, when the transistor(s) conduct, the voltage of the common node 114 drops from the reference voltage, in which the amount of the voltage drop corresponds to the amount of the conduction of the transistor(s). Because the amount of the transistor conduction depends on the gate signal to the transistor, each processing device causes a voltage change (e.g., voltage drop) from the reference value, which change corresponds to that die’s power consumption. [0050] At operation 504, with each processing device driving the voltage change onto the common node 114 when conducting, the voltage at the common node 114 represents the cumulative drive from all the dice 112. Thus, the voltage at the common node 114 has a voltage variance from the reference value that corresponds as an indication of the total power consumption by the dice 112 in the package.[0051] FIG. 6 is a flow diagram of an example method 600 to manage power consumption for one die of multiple dice, by monitoring a common node having open-drain transistors to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure. The method 600 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 300.In some embodiments, the circuitry of FIG. 7 performs the method 600. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed
in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0052] At operation 601, each processing device (e.g., the PML 113) of the dice 112 contained in the package monitors the common node 114 shared by the dice. For example, each processing device monitors the voltage on the common node 114 that indicates total power consumption for all of the dice 112 in the package. In some embodiments, the method 500 of FIG. 5 provides the technique for using an open-drain transistor to cause a voltage of the common node 114 to change corresponding to the conduction of the transistor. Each processing device utilizes an open-drain transistor configuration and when driven into conduction, causes a voltage of the common node 114 to change in response to the conduction. As described above, all voltage changes combined at the common node 114 corresponds to the total power consumption.[0053] At operation 602, the processing device of one die 112 utilizes the voltage of the common node 114 to determine the indicated total power consumption for the dice of the package. In one embodiment, the processing device converts the analog voltage of the common node 114 to a digital value by use of an analog-to-digital converter. Other embodiments can use other techniques.[0054] At operation 603, the processing device determines if the next operation that the die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package operating on the power network. Because the die 112 has indicated its current power consumption (e.g., the method 500) onto the common node 114, along with indications of other dice, the processing device knows the total power consumption for the package. The processing device can determine if the required power for the intended operation, when considered with the aggregated value on the common node 114, will exceed the threshold. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value, the die 112 performs the operation. When performing the operation, the processing device of the die 112 updates the die’s power consumption indication on the common node 114 to reflect the new power consumption value for the die in performing the operation. In some embodiments, the processing device of the die 112 updates the die’s power consumption indication onto the common node 114 prior to performing the operation, in order to
advertise or reserve power for the operation to prevent another die from executing another power consuming operation.[0055] FIG. 7 is a block diagram of an example package 700 that contains multiple dice, each die having a PML to perform the methods of FIG. 5 and/or FIG. 6. The package 700 shows only three dice 112, however, the actual number present can vary depending on the design. Each die 112includes a sequencer 115, along with the PML 113. Each sequencer 115 is responsible for sequencing the various memory operations performed within its respective die 112. Such operations include the scheduling and performing of read, program, and erase operations related to memory cells. The PML 113 manages the power related operations for the die 112. In some embodiments, the PML 113 and sequencer 115 are separate components. In some embodiments, the PML 113 and sequencer 115 are a combined component. In some embodiments, the sequencers 115 of the dice 112 can communicate with one another. Each PML 113 couples to the common node 114.[0056] In operation, the PML 113 of each die 112 determines an amount of power consumption for operating its respective die. In some embodiments, the sequencer 115 can provide information to make the determination about power consumption. The power consumption for the die 112 is dependent on the activity of the die. When in a higher current state, such as for performing a program, read, or erase operation, the power consumption is higher than when in a non-activity state. The PML 113 can use a variety of techniques to determine the power consumption for its die 112. In one technique, current measurement (e.g., supply current) provides an indication of die’s power consumption.[0057] Each PML 113 generates a signal on line 706 that corresponds to the power consumption for its die. PML 113 can generate one of a variety of signals corresponding to power consumption, including voltage or current and analog or digital. In one embodiment, the signal is an analog voltage signal, which voltage value corresponds to the current drawn by that die.[0058] Each PML 113 uses the analog voltage to drive a gate of an open-drain transistor 701. The transistors have the drain line 707 connected to the common node 114. In some embodiments, transistors 701 is a Complementary Metal-Oxide-Semiconductor (CMOS) transistor. The common node 114 connects to a reference voltage Vref via a pull-up resistor 703. In some embodiments, Vref can be a supply voltage (e.g., Vcc or Vdd) provided to the dice. When the transistors are in the off state, the drain line 707 is at the Vref level. The resistor 703 can reside inside the package 700 or outside of the package 700. When one or more transistors 701 conduct, the conducting transistors operate to pull current from the common node 114 and
pull the voltage down from the Vref value. The amount of the pull-down is dependent on the value of the signal driving the gate of the transistor 701. The voltage of the common node 114 drops from the reference voltage Vref, in which the amount of the voltage drop corresponds to the amount of the conduction of the transistor(s) 701. Because the amount of the transistor conduction depends on the gate signal to the transistor, each PML 113 causes a voltage change (e.g., voltage drop) from the reference value, which change corresponds to that die’s power consumption.[0059] Each PML 113 drives the respective transistor 701 to cause a voltage pull-down of line 707. The amount of the cumulative pull-down by the transistors 701 of dice 112 translates to an amount of the voltage from Vref at the common node 114. Thus, this voltage variation from Vref at the common node 114 for all dice 112 corresponds to the total power consumption of the dice 112. In some embodiments, a controller or regulator can control the value of Vref, in order to vary the distance between Vref and a threshold level setting (such as for peak power). With a lower Vref, the common node 114 can reach the threshold voltage level with smaller number of active dice 112, which has the effect of lowering the power limit. With a higher Vref, the common node 114 can reach the threshold voltage level with higher number of active dice 112, which has the effect of raising the power limit. In this manner, a system can adjust the Vref based on the task load and system power requirement, in order to balance power limit and task execution time.[0060] In some embodiments, each PML 113 uses multiple transistors instead of just one transistor. The signal to the gates of the multiple transistors determine which transistors are to conduct. The signal can be digital in this instance with some embodiments. The multiple transistors operate in saturation mode when conducting, so that the number of transistors turned on to pull down the common node 114 determines the current from Vref and the resultant amount of the voltage change at the common node 114. In this instance, the plurality of transistors provides discrete step changes of the drain current on line 707.[0061] To manage power consumption for each die 112, each PML 113 monitors the voltage on the common node 114 that indicates total power consumption for all of the dice 112 in the package 700. Higher the voltage difference between the common node voltage and Vref, higher the total power consumption. In some embodiments, a designated pin or terminal on each die112 connects line 708 to the common node 114 to monitor the common node voltage. A PML113 of one die 112 utilizes the voltage of the common node 114 to determine the indicated total power consumption for the dice of the package. In one embodiment, the PML 113 converts the analog voltage of the common node 114 to a digital value by use of an analog-to-digital
converter 702 and outputs the digital signal on line 709. The PML 113 can pass this information on line 709 to the sequencer 115.[0062] The PML 113 determines if the next operation the die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package operating on the power network. Because the die 112 has indicated its current power consumption onto the common node 114, along with indications of other dice, the PML 113 knows the total power consumption for the package. The PML 113 can determine if the required power for the next operation, when considered with the aggregated value on the common node 114, will exceed the threshold. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value, the die 112 performs the operation. When performing the operation, the PML 113 of the die 112 updates the die’s power consumption indication on the common node 114 to reflect the new power consumption value for the die in performing the operation. In some embodiments, the PML 113 of the die 112 updates the die’s power consumption indication onto the common node 114 prior to performing the operation, in order to advertise or reserve power for the operation to prevent another die from executing another power consuming operation. In some embodiments, the sequencer 115 can perform some or all of the operational functions, or aid in the performance of the operational functions of the die 112.[0063] The circuitry of FIG. 7 allows for analog control over the power management of the dice 112 in the package 700. Analog voltage and/or current monitoring at a common node 114 allows each die 112 to determine the current power consumption of the package 700, so that each individual die decides on which memory operation(s) it can currently perform, based on the monitored total power consumption value. When a die 112 cannot acquire adequate power to perform an operation, in some embodiments, the die 112 can delay performing the operation until power is available or perform a lower-power version of the operation. Furthermore, the circuitry of FIG. 7 can contain compensation devices/circuits/logic to adjust for process, temperature and/or voltage (PVT) fluctuations. PVT compensation allows for accurate performance of the PML 113 over fluctuating PVT conditions.[0064] FIG. 8 is a flow diagram of an example method 800 to manage power consumption for multiple dice in a package, operating from a power network, and utilizing current summation at a common node, in accordance with some embodiments of the present disclosure. The method
800 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG, 1 performs the method 800 (the PML 113 and other processing logic collectively referred to as “processing device” below). In some embodiments, the circuitry of FIG. 10 performs the method 800. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0065] At operation 801, the processing device (e.g., the PML 113) of each die 112 determines an amount of power consumption for operating its respective die. The power consumption for the die 112 is dependent on the activity of the die 112. When in a higher current state, such as for performing a memory operation, the power consumption is higher than when in a non activity state. The processing device can use a variety of techniques to determine the power consumption for its die 112. For example, the processing device determines the power consumption for the next operation (if any) for the die 112 to perform. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. In one technique, current measurement, e.g., supply current, provides an indication of die’s power consumption.[0066] At operation 802, each processing device generates a signal that corresponds to the power consumption for its die 112. For example, the processing device can generate one of a variety of signals corresponding to power consumption, including voltage or current and analog or digital. In one embodiment, the processing device generates the signal using a digital-to- analog converter to convert a digital value to an analog value to indicate power consumption. In one embodiment, the signal is a current signal. In one embodiment, the current signal can be the supply current (or a fraction of the supply current) drawn by the die to indicate the power consumption. The amount of the supply current drawn by the die 112 corresponds to the determined power consumption.[0067] At operation 803, each processing device drives the analog signal to drive the common node 114. For the embodiment that employs a current signal, a current source drives an analog current onto the common node 114. Thus, the value of the current supplied to the common node
114 corresponds to an indication of that die’s power consumption value. The combined currents from the dice 114 results in a cumulative analog current at the common node 114.[0068] At operation 804, with each processing device generating its respective die’s power consumption, the resultant cumulative analog current driven onto the common node 114 by all of the dice 112 corresponds to a value indicative of the total power consumption by the dice in the package.[0069] FIG. 9 is a flow diagram of an example method 900 to manage power consumption for one die of multiple dice, by monitoring a common node that utilizes current summation to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure. The method 900 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 900.In some embodiments, the circuitry of FIG. 10 performs the method 900. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0070] At operation 901, each processing device (e.g., the PML 113) of the dice 112 contained in the package monitors the common node 114 shared by the dice. For example, each processing device monitors the cumulative analog current on the common node 114 that indicates total power consumption for all of the dice 112 in the package. In some embodiments, the method 800 of FIG. 8 provides the technique for using an analog current that corresponds to the power consumption for that respective die 112. The total of the currents, when summed, provides an indication of the total power consumption for the dice 112.[0071] At operation 902, the processing device of one die 112 utilizes the cumulative analog current at the common node 114 to determine the indicated total power consumption for the dice 112 of the package. In one embodiment, the PML 113 sums the analog currents using a current summation amplifier.[0072] At operation 903, the processing device determines if an operation that the die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package operating on
the power network. Because the die 112 has indicated its current power consumption (e.g., the method 800) onto the common node 114, along with indications of other dice, the processing device knows the total power consumption for the package. The processing device can determine if the required power for the intended operation, when considered with the cumulative analog current value on the common node 114, will exceed the threshold. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value, the die 112 performs the operation. When performing the operation, the PML 113 of the die 112 updates the die’s power consumption indication on the common node 114 to reflect the new power consumption value for the die 112 in performing the operation. In some embodiments, the processing device of the die 112 updates the die’s power consumption indication onto the common node 114 prior to performing the operation, in order to advertise or reserve power for the operation to prevent another die from executing another power consuming operation.[0073] FIG. 10 is a block diagram of an example package 1000 that contains multiple dice, each die having a PML to perform the methods of FIG. 8 and/or FIG. 9. The package 1000 shows only three dice 112, however, the actual number present can vary depending on the design. Each die includes a sequencer 115, along with the PML 113. Each sequencer 115 is responsible for sequencing the various memory operations performed within its respective die. Such operations including the scheduling and performing of read, program, and erase operations related to memory cells. The PML 113 manages the power related operations for the die 112. In some embodiments, the PML 113 and sequencer 115 are separate components. In some embodiments, the PML 113 and sequencer 115 are a combined component. In some embodiments, the sequencers 115 of the dice 112 can communicate with one another. Each PML 113 couples to the common node 114.[0074] In operation, the PML 113 of each die 112 determines an amount of power consumption for operating its respective die. In some embodiments, the sequencer 115 can provide information to make the determination about power consumption. The power consumption for the die 112 is dependent on the activity of the die. When in a higher current state, such as for performing a program, read, or error operation, the power consumption is higher than when in a non-activity state. The PML 113 can use a variety of techniques to
determine the power consumption for its die 112. In one technique, current measurement (e.g., supply current) provides an indication of die’s power consumption.[0075] Each PML 113 generates a signal that corresponds to the power consumption for its die 112. The PML 113 can generate one of a variety of signals corresponding to power consumption, including voltage or current and analog or digital. In one embodiment, the signal on line 1006 is a current signal. In one embodiment, the current signal can be the supply current (or a fraction of the supply current) drawn by the die to indicate the power consumption. In some embodiments, the sequencer 115 can provide the signal of line 1006 to the PML 113. [0076] Each PML 113 drives the analog signal to drive the common node 114. For the embodiment that employs a current signal, a current source 1001 drives an analog current onto line 1007, which connects to the common node 114. Thus, the value of the analog current supplied to the common node 114 corresponds to an indication of the die’s power consumption value. With each PML 113 generating its respective die’s power consumption indication on line 1007, the resultant cumulative analog current on the common node 114 by all of the dice 112 corresponds to a value indicative of the total power consumption by the dice in the package. [0077] To manage power consumption for each die 112, each PML 113 monitors the cumulative analog current on the common node 114. In some embodiments, a designated pin or terminal on each die 112 connects to line 1008 to the common node 114. In some embodiments, a resistor 1004 connects the common node 114 to a return path, such as ground. Resistor 1004 can reside inside the package or outside of the package. A PML 113 of one die 112 utilizes the summed current of the common node 114 to determine the indicated total power consumption for the dice 112 of the package 1000. In one embodiment, the PML 113 sums the current using a current summation amplifier 1003 connected to line 1008, converts the analog value to a digital value by use of an analog-to-digital converter 1002, and outputs the digital signal on line 1009. The PML 113 can pass this information on line 1009 to the sequencer 115.[0078] The PML 113 determines if a next operation that its die 112 is about to perform will exceed a certain threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package operating on the power network. Because the die 112 has indicated its current power consumption onto the common node 114, along with indications of other dice, the PML 113 knows the total power consumption for the package. The PML 113 can determine if the required power for the next operation, when considered with the cumulative analog current value on the common node 114, will exceed the threshold. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the
operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value, the die 112 performs the operation. When performing the operation, the PML 113 of the die 112 updates the die’s power consumption indication on the common node 114 to reflect the new power consumption value for the die 112 in performing the operation. In some embodiments, the PML 113 of the die 112 updates the die’s power consumption indication onto the common node 114 prior to performing the operation, in order to advertise or reserve power for the operation to prevent another die from executing another power consuming operation. In some embodiments, the sequencer 115 can perform some or all of the operational functions, or aid in the performance of the operational functions of the die 112.[0079] The circuitry of FIG. 10 allows for analog control over the power management of the dice 112 in the package 1000. Analog voltage and/or current monitoring at a common node 114 allows each die 112 to determine the current power consumption of the package 1000, so that each individual die can decide on which memory operation(s) it can currently perform, based on the monitored total power consumption value. When a die cannot acquire adequate power to perform an operation, in some embodiments, the die 112 can delay performing the operation until power is available or perform a lower-power version of the operation. Furthermore, the circuitry of FIG. 10 can contain compensation devices/circuits/logic to adjust for process, temperature and/or voltage (PVT) fluctuations. PVT compensation allows for accurate performance of the PML 113 over fluctuating PVT conditions.[0080] FIG. 11 is a flow diagram of an example method 1100 to manage power consumption for multiple dice in a package and operating from a power network, by monitoring a fluctuation of a supply voltage at a common node, in accordance with some embodiments of the present disclosure. The method 1100 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 1100 (the PML 113 and other processing logic collectively referred to as “processing device” below). In some embodiments, the circuitry of FIG. 13 performs the method 1100. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in
various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0081] At operation 1101, the processing device (e.g., the PML 113) of each die 112 does not generate a signal indicative of that die’s power consumption. Instead, each PML 113 monitors a supply voltage that provides power to the dice 112 of the package. The supply voltage has a nominal or reference value when the dice 112 of a package consume minimal power. As one or more die/dice 112 begins to consume power by performing memory operations, the additional current drawn on the supply causes a voltage drop on the supply line. This variation can be the ripple effect experienced by the supply line as die/dice 112 draws additional current from the supply.[0082] At operation 1102, each processing device measures the supply voltage value at the common node. In one embodiment, a voltage detector connected to the common node 114 can detect the voltage at the common node. Because the total current drawn by the dice corresponds to total power consumption for the package and because the voltage drop has a proportional relationship to the current drawn from the supply, the total voltage drop from the nominal or reference value measured at the common node, gives a good indication of the total power consumption for the dice 112 in the package.[0083] At operation 1103, the processing device can determine the difference in the voltage drop or ripple on the common node 114 from the nominal or reference value. With each processing device generating its respective die’s power consumption, the resultant voltage driven onto the common node by all of the dice 112 corresponds to a value indicative of the total power consumption by the dice in the package.[0084] At operation 1104, the processing device can utilize this difference in the voltage drop or ripple on the common node 114 from the nominal or reference value to indicate the total power consumption for the dice 112 in the package.[0085] FIG. 12 is a flow diagram of an example method 1200 to manage power consumption for one die of multiple dice, by monitoring a fluctuation of a supply voltage at a common node to determine the total power consumption for the multiple dice, in order to perform a power consuming operation, in accordance with some embodiments of the present disclosure. The method 1200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instmctions ran or executed on a processing device), or a combination thereof. In some embodiments, the PML 113 of FIG. 1 performs the method 1200. In some embodiments, the circuitry of FIG. 13 performs the method 1200. Although shown in a
particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0086] At operation 1201, each processing device (e.g., the PML 113) of the dice 112 contained in the package monitors a supply voltage at the common node 114 shared by the dice. The supply voltage supplies power to the power network used by the dice 112 in the package. Each processing device monitors a value of the supply voltage on the common node 114. In some embodiments, the method 1100 of FIG. 11 provides the technique for monitoring the supply voltage and interpreting the value of the supply voltage on the common node 114 as an indication of total power consumption value of the dice 112.[0087] At operation 1202, the processing device of one die 112 measures the voltage of the common node 114 to determine the indicated total power consumption for the dice of the package. In some embodiments the processing device uses a voltage detector to perform the measurement.[0088] At operation 1203, the processing device determines a difference in value of the measured supply voltage at the common node 114 to a nominal or reference value for the supply voltage. In some embodiments, the change in the value of the supply voltage is the ripple induced in the supply voltage, which ripple corresponds to the amount of current drawn from the supply by the dice 112.[0089] At operation 1204, the processing device determines if an operation that its die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package operating on the power network. Because the die 112 has indicated its current power consumption (e.g., the method 1100) by a change in the supply voltage, along with changes induced by the other dice, the processing device knows the total power consumption for the package. The processing device can determine if the required power for the intended operation, when considered with the supply voltage value at the common node 114, will exceed the threshold. In one embodiment, the processing device uses a lookup table or other data structure to map an operation with a digital or analog value corresponding to the operation. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not,
exceed the threshold value, the die 112 performs the operation. When performing the operation, the PML 113 of the die 112 will update the die’ s power consumption indication on the common node 114, by causing a change in the supply voltage, to reflect the new power consumption value for the die.[0090] FIG. 13 is a block diagram of an example package 1300 that contains multiple dice, each die having a PML to perform the methods of FIG. 11 and/or FIG. 12. The package 1300 shows only three dice 112, however, the actual number present can vary depending on the design. Each die includes a sequencer 115, along with the PML 113. Each sequencer 115 is responsible for sequencing the various memory operations performed within its respective die 112. Such operations including the scheduling and performing of read, program, and erase operations related to memory cells. The PML 113 manages the power related operations for the die 112. In some embodiments, the PML 113 and sequencer 115 are separate components. In some embodiments, the PML 113 and sequencer 115 are a combined component. In some embodiments, the sequencers 115 of the dice 112 can communicate with one another. Each PML 113 couples to the common node 114 via line 1306. In some embodiments, a designated pin or terminal on each die connects line 1306 to the common node 114.[0091] In operation, PML 113 of each die 112 does not generate a signal indicative of that die’s power consumption. Instead, each PML 113 monitors, at the common node, a supply voltage (e.g., Vcc/Vdd) that provides power to the dice 112 of the package 1300. The supply voltage has a nominal or reference value when the dice 112 of the package 1300 consume minimal power. As one or more die/dice 112 begins to consume power by performing memory operations, the additional current drawn on the supply causes a voltage drop on the supply line. This variation can be the ripple effect experienced by the supply line as die/dice draws additional current from the supply.[0092] Each PML 113 measures the supply voltage value at the common node 144. In one embodiment, a voltage detector 1301 connected to the common node 114, via line 1306, can detect the voltage at the common node. Because the total current drawn by the dice 112 corresponds to total power consumption for the package 1300 and because the voltage drop has a proportional relationship to the current drawn from the supply, the voltage drop from the nominal or reference value measured at the common node, gives a good indication of the total power consumption for the dice 112 in the package 1300. Each PML 113 can determine the difference in the voltage drop or ripple on the common node 114 from the nominal or reference value. Each PML 113 can utilize this difference in the voltage drop or ripple from the nominal or reference value to indicate the total power consumption of the dice 112 in the package 1300.
[0093] Each PML 113 of the dice 112 contained in the package 1300 monitors the supply voltage at the common node 114. The voltage detector 1301 of a PML 113 of one die 112 measures the voltage of the common node 114 to determine the indicated total power consumption for the dice 112 of the package 1300. The PML determines a difference in value of the measured supply voltage at the common node 114 to a nominal or reference value for the supply voltage. As noted above, in some embodiments, the change in the value of the supply voltage is the ripple induced in the supply voltage, which ripple corresponds to the amount of current drawn from the supply by the dice 112.[0094] The PML 113 determines if the next operation that the die 112 is about to perform will exceed a threshold value for power consumption. In some embodiments, the threshold level is the peak power level set for all of the dice 112 of the package 1300 operating on the power network. Because the PML 113 knows the total power consumption for the package, the PML 113 can determine if the required power for the next operation, when considered with the supply voltage value on the common node 114, will exceed the threshold. If the potential increase in the power consumption can result in the total power exceeding the threshold value, the die 112 does not perform the operation, delays performing the operation, or performs a lower-power version of the operation. If the potential increase in the power consumption cannot, or most likely will not, exceed the threshold value, the die 112 performs the operation. When performing the operation, the power consumption for that die 112 increases due to additional current drawn from the supply, which increase in the power consumption results in additional change in the voltage (e.g., ripple) introduced in the supply voltage and noted at the common node 114.[0095] The circuitry of FIG. 13 allows for analog control over the power management of the dice 112 in the package 1300. Analog voltage and/or current monitoring at common node 114 allows each die 112 to determine the current power consumption of the package 1300, so that each individual die 112 can decide on which memory operation(s) it can currently perform, based on the monitored total power consumption value. Lurthermore, the circuitry of FIG. 13 can contain compensation devices/circuits/logic to adjust for process, temperature and/or voltage (PVT) fluctuations. PVT compensation allows for accurate performance of the PML 113 over fluctuating PVT conditions.[0096] FIG. 14 illustrates an example machine of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system of FIG. 14 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1) or can be used to perform the
operations of a controller (e.g., to execute an operating system to perform operations corresponding to the PML 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.[0097] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.[0098] The example computer system includes a processing device 1402, a main memory 1404 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1406 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 1418, which communicate with each other via a bus 1430.[0099] Processing device 1402 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1402 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 1402 is configured to execute instructions 1426 for performing the operations and steps discussed herein. The computer system can further include a network interface device 1408 to communicate over the network 1420.[00100] The data storage system 1418 can include a machine-readable storage medium 1424 (also known as a computer-readable medium) on which is stored one or more sets of instructions 1426 or software embodying any one or more of the methodologies or functions described herein. The instructions 1426 can also reside, completely or at least partially, within the main memory 1404 and/or within the processing device 1402 during execution thereof by the
computer system 1400, the main memory 1404 and the processing device 1402 also constituting machine-readable storage media. The machine-readable storage medium 1424, data storage system 1418, and/or main memory 1404 can correspond to the memory subsystem 110 of FIG.1.[00101] In one embodiment, the instructions 1426 include instructions to implement functionality corresponding to a power manager or power management logic (e.g., the PML 113 of FIG. 1). While the machine-readable storage medium 1424 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.[00102] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.[00103] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.[00104] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in
the computer. For example, a computer system or other data processing system, such as a memory component 112, may carry out the computer- implemented methods described herein in response to its processor executing a computer program (e.g., a sequence of instmctions) contained in a memory or other non-transitory machine-readable storage medium. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.[00105] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.[00106] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. [00107] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
The present invention discloses a system for controlling high-speed bidirectional communication includes a slave device such as a memory device, for example, coupled to a master device such as a memory controller, for example. The master device may be configured to control data transfer between the master device and the slave device. The master device may be configured to provide one or more clock signals to the slave device and during an initialization mode, the master device may modify a phase alignment of the one or more clock signals. Further the master device may subsequently modify a phase alignment of data transmitted from the master device based upon information received from the slave device. |
1.A system (10) including:Slave (110A); andA master device (100), which is connected to the slave device and is configured to control data transmission between the master device and the slave device,The master device is configured to provide one or more clock signals (118) to the slave device;Wherein, during the initialization mode, the master device is further configured to modify the phase alignment of the one or more clock signals, and subsequently correct the phase alignment of the data transmitted from the master device according to the information received from the slave device. .2.The system of claim 1, wherein the information received from the slave device includes CRC information transmitted via one or more unidirectional cyclic redundancy code (CRC) data paths (112), wherein the CRC information corresponds to The data transmitted by the master device via a plurality of bidirectional data paths (114).3.The system according to any one of the preceding claims, wherein the master device includes a receiver phase adjustment circuit (104) configured to appropriately modify the reception of the master device depending on the CRC information Phase adjustment of the sampling clock of the converter.4.The system of claim 1, wherein the master device includes a receiver phase adjustment circuit (104) configured to depend on receiving from the slave device during each read operation performed by the master device. Phase correction of the receiver sampling clock of the master device appropriately.5.The system of claim 1, wherein, during normal operation, the master device is further configured to appropriately modify transmission by the master device via a plurality of bidirectional data paths depending on a calculated data error rate received from the slave device. The phase alignment of the data.6.The system of claim 5, wherein the master device is configured to transmit a predetermined pattern to the slave device and adjust the phase alignment of the transmitted data in one direction until a calculated transition error rate of substantially 50% is reached Until then, and then adjust the phase alignment of the transmitted data in the other direction by an amount substantially equal to half the data bit period.7.The system of claim 1, wherein the master device is configured to transmit a predetermined command on an address / command signal path (116), and adjusts in response to the predetermined command depending on data received from the slave device. 
Phase alignment of the one or more clock signals.8.A method including:A master device (100), which controls data transmission between the master device and the slave device (110A);The master device provides one or more clock signals (118) to the slave device; andDuring the initialization mode, the master device corrects the phase alignment of the one or more clock signals and subsequently corrects the phase alignment of the data transmitted from the master device based on the information received from the slave device.9.The method of claim 8, wherein the information received from the slave device includes CRC information transmitted via one or more unidirectional cyclic redundancy code (CRC) data paths (112), wherein the CRC information corresponds to Data transmitted by the master device via a plurality of bidirectional data paths (114).10.A memory subsystem includes:A memory device (410); andA memory controller (100) connected to the memory device and configured to control data transmission between the memory controller and the memory device, wherein the memory controller is configured to provide one or more clock signals (118) to the memory device;Wherein, during the initialization mode, the memory controller is further configured to correct the phase alignment of the one or more clock signals, and subsequently modify the information transmitted from the memory controller to the memory device based on the information received from the memory device. Phase alignment of the data. |
System for controlling high-speed two-way communicationTechnical fieldThe present invention relates to a communication link, and more specifically, to controlling communication between a master device and a slave device via a bidirectional link.Background techniqueMany systems use a known high-speed two-way signalling scheme, in which the work of controlling the amplitude and phase of a signal sent over a channel can be divided equally between the ends of the communication link. In such a system, the control of the link can be symmetrical, so that the transmitters and receivers at each end of the link can contain very similar functions.An example of such a system may be a memory system, where there may be a complex master device (for example, a memory controller) and a simpler slave device (for example, a memory device). Bidirectional device transmission will correspond to write data when transmitting to the slave, and bidirectional device transmission will correspond to reading data when transmitting from the slave.For transmission to occur at a high data rate, a clock phase recovery function can be implemented in the receivers at each end of the bidirectional data bus. For channels with significant high-frequency loss or reflection, the channels can be equalized to prevent data eye closure from being affected by inter-symbol interference (ISI). In addition, links with high data rates may have a significant probability of bit errors. Therefore, an error detection mechanism is typically implemented. As mentioned above, these functions can be implemented in a conventional manner at both ends of the link. However, it is desirable to simplify the analog nature of the data waveform in which the slave device maintains control while traveling in two directions.Summary of the InventionThis specification discloses various embodiments of a system and method for controlling high-speed two-way communication between a master device and a slave device. In one embodiment, the system includes a slave device such as a memory device, for example, the slave device is connected to a master device such as a memory controller. The master device may be configured to control data transmission between the master device and the slave device. The master device may be configured to provide one or more clock signals to the slave device, and the master device may modify the phase alignment of the one or more clock signals during the initialization mode. Moreover, the master device can then modify the phase alignment of the data transmitted from the master device according to the information received from the slave device.In a specific implementation, the master device includes a receiver phase adjustment circuit that can be appropriately modified during each read operation performed by the master device depending on the data received from the slave device. Phase alignment of the receiver sampling clock of the master device.In another specific implementation, during normal operation, the master device may appropriately modify the data transmitted by the master device through multiple bidirectional data paths depending on the calculated data error rate received from the slave device. Phase alignment. For example, the master device may transmit a predetermined pattern to the slave device and adjust the phase alignment of the transmitted data in one direction until the substantially 50% calculated transition error rate is completed. 
In addition, the master device can then adjust the phase alignment of the transmitted data in the other direction by an amount substantially equal to half of the data bit period, which can correspond to the middle of each data bit.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a block diagram of one embodiment of a system including asymmetric control of bidirectional data transmission.FIG. 2 is an icon showing a more detailed aspect of an embodiment of the slave device of FIG. 1. FIG.FIG. 3 is a flowchart illustrating the operation of the embodiment shown in FIGS. 1 and 2. FIG.FIG. 4 is a block diagram of a specific embodiment of the system of FIG. 1. FIG.FIG. 5 is a diagram showing an exemplary pinout diagram of the memory module shown in FIG. 4.Although the present invention is susceptible to various modifications and alternative forms, specific examples of the present invention are shown and described in detail herein by way of example in the drawings. However, it should be understood that the drawings and detailed descriptions of the specific embodiments herein are not intended to limit the present invention to the specific forms disclosed. On the contrary, the present invention will cover all those falling within the scope of the attached patents. Modifications, equivalents, and alternatives falling within the spirit and scope of the invention. It should be noted that the word "may [may]" is in a non-mandatory sense [ie, potential [to]], able (beingable), not in a mandatory sense [ie must] ] And used throughout this application.detailed descriptionReferring now to FIG. 1, a block diagram of one embodiment of a system including asymmetric control of two-way data transmission is shown. The system 10 includes a master controller 100 connected to slave devices 110A to 110n via a plurality of signal paths and connectors 150. As shown, the signal path includes a bidirectional data path 114, a command path 116, and a cyclic redundancy code (CRC) information path 112, and a clock 118. It should be noted that the slave device 110n means that any number of slave devices can be used. It should also be noted that components containing reference designations with numbers and letters may be referenced by numbers only. For example, the slave device 110A may be referred to as a slave device 110 when appropriate.In the illustrated embodiment, the main controller 100 includes a control unit 101 that is connected to the transmitting unit 102, the receiving unit 104, and the clock unit 106. In one implementation, the system 10 may be an example of a memory subsystem. In this case, for example, the master controller 100 may be a memory controller and the slave devices 110A to 110n may be memory devices in a dynamic random access memory (DRAM) family such as a memory device. In this case, the connector 150 may be a “finger” connector, as may be found on a memory module including a plurality of memory devices such as the slave device 110. It should be noted, however, that in general, the system 10 may represent any type of system that uses a bidirectional data path.In one embodiment, the command path 116 may carry address and control information via a single-ended signal path. The bi-directional data path 114 can transmit data in two directions via a bi-directional single-ended signal path. The bi-directional data path 114 may include a number of 8-bit (byte-wide) data paths. For example, the entire data path may be 64 bits wide, but the entire data path may be divided into byte-sized portions. 
It should be noted that the entire data path can contain any number of data bits and is divided into different size parts. The CRC path 112 may carry CRC information from the slave device 110 to the master controller 100 via a unidirectional single-ended signal path. In one embodiment, the CRC path 112 may include two signal paths to carry two CRC bits but any number of signal paths and bits may be used. In addition, the clock path 118 may carry clock signals 0, 1, 2 and 3 to the slave devices 110. In one implementation, each of the clock signals 0, 1, 2 and 3 can be transmitted as a different signal pair.The probability of receiving bit errors from the device 110 or the main controller 100 at high data rates is significant. Therefore, it may be necessary to protect the transmission with an error detection code, which will actively detect multiple bit errors in the protected block. In one embodiment, the CRC code can be used to protect such multi-bit error detection. In detail, as shown in FIG. 2, to simplify the logic in the slave device and report the error to the master controller 100, the slave device 110 is based on any of the data it is generating or receiving. Calculate the CRC. Therefore, to transmit CRC information back to the main controller 100, one or more unidirectional CRC signal paths 112 may be used. As shown in FIG. 2, the CRC generation unit 119A calculates a CRC based on its internal data and sends the CRC data back to the main controller 100. When an error is detected in either direction on the link, the main controller 100 can correct the error by retrying the operation.In one embodiment, the CRC information can be calculated and transmitted from the slave device 110 to the master controller 100 in parallel with the data, so that when the CRC reaches the master controller 100, it can be obtained at the same time as the data block it is protecting. In one embodiment, the associated with calculating the CRC can be slowed by introducing delays in the data path during write-to-read and read-to-write transitions delay.As described above, many conventional systems control high-speed two-way communication by implementing control functions such as clock phase recovery, channel equalization, and error detection (for example, in both communication devices). However, as explained in more detail below, the slave device 110 can be simplified. In this regard, the master controller 100 may include a control function that can dynamically and appropriately adjust the signal characteristics (eg, phase, etc.) of the transmitted write data to enable the slave device 110 to be based on the signals received by the slave device 110. The information reads the data correctly. In addition, the master controller 100 may adjust its internal receiver characteristics to enable the master controller 100 to receive data sent by the slave device 110. Furthermore, the master controller 100 can adjust the phase of the clock signal 118 provided to the slave device 110 so that the address and command information to be sampled correctly.In more detail, the uncertainty of the high data rate delay of different signals in the bus in the transmission path may need to be adjusted for each bit phase of the sampling clock of the receiver of these signals. To avoid using the circuit system in the slave device 110, the master controller 100 can adjust the phases of its transmission clock and data signals to avoid complicated phase shift circuits in the slave device. 
Therefore, the control unit 101 can calculate the phase information according to the data received from the slave device 110, and the slave device 110 can be used to adjust the phases of different clock edges in the master controller 100. For example, in response to such information as CRC data and read data, the control unit 101 may control the phase tracking and adjustment circuits 103, 105, and 107 in the transmitting unit 102, the receiving unit 104, and the clock unit 106, respectively.Referring to FIG. 2, a diagram showing a more detailed aspect of an embodiment of the slave device of FIG. 1 is shown. It should be noted that the slave device 110A may represent any of the slave devices in FIG. 1. The slave device 110A of FIG. 3 includes core logic 255 connected to the receiving address and command signal 116. The slave device 110A also includes a data input buffer 209 connected to receive one of the signal paths 114 and the VRef signal. The write data output of the buffer 209 is connected to the input of a flip-flop (FF) 208. The output of FF208 is connected to the input of the CRC unit 119A and to the storage 120A. The read data output signal from the memory 120A is connected to the input of the FF206. The output of the FF 206 is connected to the data output buffer 210, which is connected to the same signal path of the bidirectional data path 114. The read data output signal is also connected to the input of the CRC unit 119A.The output of the CRC unit 119A is connected to one of the inputs of a two-input multiplexer 250. The output of the multiplexer 250 is connected to the input of FF205. The output of FF205 is connected to an output buffer 211 and the output buffer 211 is connected to a signal path and a signal path 112 of the CRC. Another input to the multi-tasker 250 is the data bytes of the read data. The CRC signal path can be multiplexed with read data. The multiplexer input selection is provided by the slave core logic 255. It should be noted that although only one signal path and therefore one bit of data is displayed, there may be any number of data signal paths depending on the number of data bits operated by each slave device. For example, in the embodiment where the slave device is a DRAM device, there can be 4, 8, 16, etc. data path signals to each device.In the illustrated embodiment, the clock 118 is provided to the input buffer 219 as a differential signal at 1.6 GHz, but it is considered that other frequencies may be used. The output of buffer 219 is a single-ended clock signal connected to the input of FF218. The output of FF218 is connected back to the input of FF218 via the inverter 217, so FF218 divides the 1.6GHz clock by 2. The 800MHz output of FF218 is also used to provide the clock to the circuits in the slave core logic 255. The clear input of FF218 is connected to the slave core logic 255 and designated as "training reset". As shown, FF205, FF206, FF208, and FF218 are each clocked by a 1.6GHz clock. In addition, FF205, FF206, and FF208 are shown as dual edge flip-flops, indicating that they are configured to be latched on the leading edge and trailing edge of the input clock signal. D 'input. Therefore, the read data, write data, and CRC information can be transmitted on its individual data path at 3.2 GHz.In one embodiment, when the write data is received, the write data is latched by the FF 208 and stored in the memory 120A. In various embodiments, the storage 120A may represent any type of storage that can store data. 
For example, in one implementation, the memory 120A may include an array of memory storage arranged in columns and rows, the memory storage array containing corresponding sense amplifiers (as can be seen in a typical DRAM device). Specific columns and rows of the memory array can be accessed based on the addresses and commands received by the address command signal path 116. In addition, the memory 120A may include one or more independent accessible registers, and the registers may also be accessed according to addresses and commands received by the address command signal path 116.As described above, the CRC information is transmitted from the slave device 110 to the master controller 100 via the multiplexer 250. As shown in FIG. 2, the CRC signal path 112 may carry data byte data during a portion of a read data cycle. Specifically, in one embodiment, two CRC signal paths can protect eight data paths. During the transmission from the slave device 110 to the master controller 100, the correction of the data in the data block may not be established until all the data blocks and the CRC have been received. However, this increases the latency for the first part of the data block, which may be a critical word for forward transmission in the system.Therefore, in one embodiment, important words may be additionally protected by including an additional in-line error code. For example, additional error detection information can be implemented by repeating important blocks (eg, byte 0) at the beginning of the read data block. By sending the important word group twice, the main controller 100 can confirm that each bit between the two copies is the same, and substantially reduce the error rate for the important word group, thus allowing the important word group to be received for the area. A block's complete CRC was previously considered valid. In another method, during a read operation, the slave device 110 may send out the important block during the first two beats or bit times of the read data block. In one embodiment, to allow space for two copies of the important first data block, one data byte (e.g., data byte 3) can be output during the first four beats of the read data block. CRC path. It should be noted that appropriate error coverage is obtained from the CRC to minimize the impact of bus efficiency, and data can be clustered in data blocks calculated by the CRC.The following will be described in more detail in conjunction with FIG. 3. During operation, the main controller 100 can dynamically and appropriately adjust the signal characteristics (eg, phase, etc.) of the transmitted write data and its internal receiver characteristics, and adjust the phase of the clock signal 118 provided to the slave device 110 . In particular, as described above, the receiving unit 104 includes a sampling clock phase adjustment circuit 105, and the sampling clock phase adjustment circuit 105 may include a bang-bang phase detector (not shown). In this regard, whenever the master controller 100 is receiving data from the slave device 110, the receiving unit 104 can use the binary phase detector to adjust its own local sampling clock phase to better receive the data from the slave device 110. The transmitted data. In addition, the main controller 100 includes a clock phase adjustment logic 107 that can be used to adjust the phase of each clock signal 120. 
For example, during the initialization process during the power-on reset period, the master controller 100 may adjust the phase of each clock signal 118 to enable each slave device to correctly sample the address and command signals 116. Furthermore, the master controller 100 includes transmission data phase adjustment logic 103, which can be used to adjust the phase of the write data transmitted to the slave device 110A. During the initialization period and during the operation at a predetermined time interval, the main controller 100 can adjust the phase of the transmitted data to enable the slave device 110 to better receive the written data.FIG. 3 is a flowchart illustrating the operation of the embodiment shown in FIGS. 1 and 2. FIG. As described above, the main controller 100 can be configured to appropriately modify its clock, transmission, and reception characteristics so that the main controller 100 can transmit data correctly received by the slave device, and the main controller 100 can correctly To receive data transmitted by the slave device.FIG. 4 is a diagram depicting one implementation of the system in FIG. 1. FIG. As shown, the system 10 is a memory subsystem including a memory controller 100 connected to a dual in-line memory module (DIMM) 410. Therefore, the memory controller 100 is a representative of the master controller 100 shown in FIG. 1, and the DIMM 410 includes a plurality of DRAM devices 110A. The DRAM device 110A is a representative of the slave device 110 in FIG. 1.In the illustrated embodiment, the clock signal 120 in FIG. 1 is depicted as MCLK 0 to MCLK 3. In addition, as explained above, MCLK1 is connected to the first five DRAM devices 110, and MCLK0 is connected to the next four DRAM devices 110. In the same situation, MCLK 2 and MCLK 3 are connected to the next five and four DRAM devices. In the illustrated embodiment, the address / command 116 signal path is connected to the DRAM device 110 in parallel, but from one end to the other end of the DIMM. Therefore, this special routing of the address / command signal causes the signal skew between the DRAM device and the DRAM device, especially if they are further spaced. As described in more detail below, the clocks provided to a group of DRAM devices 110 can be phase-adjusted independently of each other's clocks.Referring collectively to FIGS. 1 to 4 and starting from block 300 in FIG. 3, after resetting or turning on the power condition (block 300), the master controller 100 can independently adjust each clock signal so that each slave device can Address and command information is latched correctly (block 305). Specifically, in one embodiment, each clock signal (for example, clock 0, clock 1, clock 2, etc.) may be arranged to a path of one or more individual slave devices 110 so that the slaves connected to a common clock The device may have similar clock skew. In addition, as shown in FIG. 4, the address / command signal path 116 is arranged in parallel to the paths of all the slave devices and from one end of the DIMM 410 to the other end. In this regard, the address / command signal timing of one slave device (e.g., 110A) with one clock (e.g., MCLK 1) can be significantly different from that of another slave device (e.g., 110n) with a different clock (e.g. MCLK 2). The address / command signal timing is significantly different. 
However, the address / command signal skew is sufficiently close to the slave devices connected to the common clock, so that the phase of the common clock can be adjusted to allow all slave devices to which the common clock is connected to correctly obtain the address / command signal.Therefore, in one embodiment, in order to adjust the clock 118, each slave device 110 may have a predetermined value stored in the memory 120A. This value can be accessed by sending a specific address or command to a slave device (eg, 110A), which can cause the stored value to be sent from the slave device 110 to the master controller 100. If the clock divider circuit (eg, FF218) of the slave device 110A is sampling the input clock correctly (block 310), the master controller 100 can read back the correct value stored in the memory 120A. However, to obtain a good initial margin, the clock phase adjustment circuit 107 can sweep the clock phase through two cycles. In one embodiment, the control unit 101 may provide a digital signal to the phase adjustment circuit 107 to adjust the clock phase (block 310). During the clock phase adjustment, the read data can be continuously checked and the control unit 101 can determine which range of the clock phase adjustment produces the most accurate result, and whether the slave device 110A is locked to the master clock (block 315). It is possible for one or more slave dividers (FF218) to obtain a clock of 1.6 GHz at the edge of the error wave. In this case, the sub-logic 255 may provide a training reset signal to the FF 218 (block 320). Once each slave device 110 is locked to its own master clock (block 315), the operation proceeds to (block 325), where the receiving unit 104 of the master controller 100 can be trained to correctly receive the slave device 110 Read the data.It should be noted that in one implementation, data can be written and stored to the slave device 110 during the phase alignment training. However, in some embodiments, it may not be desirable to provide a special buffer for use only during training. This is particularly true for DRAM devices. In this regard, the sense amplifier of the DRAM device can be used as scratch storage during training. In detail, when a bit value is read from a given memory cell, the charge stored in the cell can be transferred to a sense amplifier and then read. However, it may not be necessary to write this data back to a separate storage unit.The phase adjustment circuit 105 can adjust the sampling clock phase to correctly receive the read data and the CRC data. In one embodiment, the control unit 101 may include a circuit to determine whether the receiving unit 104 optimally locks the read data. If the receiving unit 104 does not lock the read data optimally (block 330), the control unit 101 may provide a control signal to the phase adjustment circuit 105. Specifically, in one embodiment, the two samples may be made of CRC data and read data of a binary phase detector used in the phase detection and adjustment circuit 105. One sample can be made at the center of the data, and one sample can be made at the edges of the data. From the results of these samples, the control unit 101 can determine whether the samples were taken too early, too late, or in an intermediate position. Based on the result of the determination, the control unit 101 can adjust the phase of the reception phase adjustment circuit 105 (block 335). 
If the receiving unit 104 is locked to read data (block 330), the operation proceeds to block 340 where the transmitting unit 102 can be trained to write data that can be read from the device. It should be noted that each time the read data is received during normal operation, the receiving unit 104 may be continuously trained.When the main controller 100 determines that the receiving unit 104 is locked to the read data and the CRC data (block 330), the main controller 100 attempts to train the transmitting unit 102 to transmit data that the slave device 110 can correctly receive. In detail, the master controller 100 sends a write data training pattern to the slave device 110 (block 340). In one implementation, the training pattern can include many 0 to 1 and 1 to 0 transitions. The control unit 101 may determine whether the slave device is locked to write data. If the control unit 101 determines that the slave device is not locked to the write data (block 345), the control unit 101 can adjust the phase of the write data. In one embodiment, the write data phase can be adjusted far enough to cause the write data to be incorrectly latched by the device 110 with an error rate of nearly 50% at the transition bit (eg, a 0 to 1 transition) and Store, as seen by the read data (block 350). A 50% transition error rate may indicate that the written data is being sampled near the wave edge. The write data phase can then be adjusted back to 0.5 data bit times. This will cause the FF 208 sample data to be approximately near the center of each data bit. This processing can be performed for each data signal path for each slave device 110. If the master controller 100 determines that the slave device 110 is locked to the data, the system 10 may begin normal operation (block 355).Proceeding to block 360, during normal operation of the system 10, various clock and data phases may drift due to such temperature differences in the die. As mentioned above, as long as reading occurs and data is being transmitted on the data path, the main controller 100 can continuously check the read data phase alignment. However, large gaps in bus traffic may allow phase drift without detection. In this regard, the control unit 101 may write a data phase at a predetermined time interval by measuring the elapsed time between training sequences (block 365). If the elapsed time between trainings of writing data phase exceeds the limit value (block 370), the control unit 101 adjusts the writing data phase by writing the written data training pattern with many transitions (block 375) (block 370) 385) At the same time, look for the transition error rate of approximately 50% at blocks 340 to 350 as described above, and train the write data phase as described above. If the control unit 101 determines that the slave device 110 is locked to write data (block 380), the system 10 continues normal operation.Referring to FIG. 5, a diagram illustrating an example of pin positions of one embodiment of the memory module shown in FIG. 4 is shown. In the embodiment shown in FIG. 4, the memory module is a DIMM. Typically, a DIMM includes a circuit board with a finger connector that usually slides into a socket. Finger connectors have metal pads that mate with spring-loaded contacts in the socket. Various signals are routed from the finger connector to the DRAM device. 
To obtain a clock signal with the desired signal quality, the clock signal is located at the end of the finger connector, as shown in the pin map.Although the embodiments have been described in considerable detail, many variations and modifications will become apparent to those skilled in the art once the above disclosure is fully understood. The following patent application scope is intended to include all such changes and modifications. |
An integrated circuit includes a substrate, a noise sensitive circuit, and a first low impedance guard ring. The substrate includes a well-doped blocking ring that at least partially surrounds the noise sensitive circuit. The noise sensitive circuit is fabricated on the substrate. The first low impedance guard ring is fabricated on the substrate to at least partially surround the well-doped blocking ring, wherein the first low impedance guard ring is operably coupled to a first circuit ground, wherein impedance of the first low impedance guard ring is substantially less than impedance of the well-doped blocking ring. |
What is claimed is: 1. An integrated circuit comprises: a substrate having a first resistivity, the substrate including a blocking ring having the first resistivity and a portion positioned between a first region and a second region, the first region and the second region each comprising a well having a second resistivity, which is lower than the first resistivity, and having circuit elements;a noise sensitive circuit comprising circuit elements of the first region of the substrate comprising a well, wherein the blocking ring at least partially surrounds the noise sensitive circuit; anda first low impedance guard ring comprising an implant region formed in the substrate to at least partially surround the blocking ring, wherein the first low impedance guard ring is operably coupled to a first circuit ground, wherein impedance of the first low impedance guard ring is substantially less than impedance of the blocking ring.2. The integrated circuit of claim 1 further comprises: a second low impedance guard ring comprising an implant region formed in the substrate between the blocking ring and the noise sensitive circuit, wherein the second low impedance guard ring at least partially surrounds the noise sensitive circuit, and wherein the second low impedance guard ring is operably coupled to a second circuit ground.3. The integrated circuit of claim 1 comprises: the noise sensitive circuit operably coupled to a second circuit ground.4. The integrated circuit of claim 1, wherein the blocking ring comprises at least one of: a blocking ring surrounding a p-well; anda blocking ring surrounding an n-well.5. The integrated circuit of claim 1, wherein the noise sensitive circuit comprises at least one of: a transistor, a trace, a capacitor, and a resistor.6. The integrated circuit of claim 1 comprises: a second noise sensitive circuit formed in a third region comprising a well having the second resistivity and having circuit elements formed in the substrate, wherein a second blocking ring formed in the substrate at least partially surrounds the second noise sensitive circuit; anda second low impedance guard ring comprising an implant region formed in the substrate to at least partially surround the second blocking ring, wherein the second low impedance guard ring is operably coupled to a second circuit ground, wherein impedance of the second low impedance guard ring is substantially less than impedance of the second blocking ring.7. The integrated circuit of claim 1 wherein the second region comprising a well comprises a noise generating circuit fabricated on the substrate; and wherein the integrated circuit further comprises a second low impedance guard ring comprising an implant region formed in the substrate to at least partially surround the noise generating circuit, wherein the second low impedance guard ring is operably coupled to a second ground and wherein impedance of the second low impedance guard ring is substantially less than impedance of the blocking ring.8. The integrated circuit of claim 7 comprises: a second blocking ring formed in the substrate to at least partially surround the noise generating circuit, wherein impedance of the second blocking ring is substantially greater than impedance of the second low impedance guard ring.9. 
The integrated circuit of claim 6 further comprises: a Serial-Input-Parallel-Output (SIPO) module operably coupled to convert inbound high-speed serial data into inbound parallel data, wherein the SIPO module is fabricated on the substrate; anda Parallel-Input-Serial-Output (PISO) module operably coupled to convert outbound parallel data into high-speed outbound serial data, wherein the PISO module iswherein the noise sensitive circuit is part of the SIPO module; andwherein the second noise sensitive circuit is part of the PISO module.10. The integrated circuit of claim 9 comprises: a third low impedance guard ring comprising an implant region formed in the substrate to at least partially surround a first noise generating circuit of the SIPO module, wherein the third low impedance guard ring is operably coupled to a third ground and wherein impedance of the third low impedance guard ring is substantially less than impedance of the blocking ring; anda fourth low impedance guard ring comprising an implant region formed in the substrate to at least partially surround a second noise generating circuit of the PISO module, wherein the fourth low impedance guard ring is operably coupled to a fourth ground and wherein impedance of the fourth low impedance guard ring is substantially less than impedance of the second blocking ring.11. The integrated circuit of claim 10, wherein each of the first and second noise generating circuits comprises at least one of: a clock circuit; anddigital circuitry.12. The integrated circuit of claim 10, wherein the blocking ring comprises at least one of: a blocking ring surrounding a p-well; anda blocking ring surrounding an n-well.13. The integrated circuit of claim 10, wherein each of the first and second noise sensitive circuits comprises at least one of: a transistor, a trace, a capacitor, and a resistor.14. The integrated circuit of claim 1, wherein the integrated circuit is a field programmable gate array (FPGA), the integrated circuit further comprises: programmable logic fabric fabricated on the substrate; anda multi-gigabit transceiver (MGT) for transmitting and receiving high-speed data, wherein the MGT is fabricated on the substrate;wherein the MGT includes the noise sensitive circuit.15. The integrated circuit of claim 14 further comprises: a second blocking ring formed in the substrate to at least partially surround noise generating circuitry of the programmable logic fabric; anda second low impedance guard ring comprising an implant region formed in the substrate to at least be partially surrounded by the second blocking ring, wherein the second low impedance ring is operably coupled to a second circuit ground and wherein an impedance of the second low impedance guard ring is substantially less than the impedance of the second blocking ring.16. The integrated circuit of claim 15, wherein the noise generating circuitry comprises at least one of: a clock circuit; anddigital circuitry.17. The integrated circuit of claim 14, wherein the noise sensitive circuit comprises at least one of: a trace, a capacitor, a transistor, and a resistor.18. The integrated circuit of claim 14 comprises: the noise sensitive circuit operably coupled to a second circuit ground.19. The integrated circuit of claim 14, wherein the blocking ring comprises at least one of: a blocking ring surrounding a p-well; anda blocking ring surrounding an n-well.20. 
The integrated circuit of claim 14 further comprises: a digital clock manager (DCM) for generating at least one clock signal, wherein the DCM is fabricated on the substrate;a second blocking ring formed in the substrate to at least partially surround noise generating circuitry of at least one of the programmable logic fabric and the DCM; anda second low impedance guard ring comprising an implant region formed in the substrate to at least be partially surrounded by the second blocking ring, wherein the second low impedance ring is operably coupled to a second circuit ground and wherein an impedance of the second low impedance guard ring is substantially less than the impedance of the second blocking ring. |
BACKGROUND OF THE INVENTION1. Technical Field of the InventionThis invention relates generally to integrated circuits and more particularly to reducing substrate noise coupling.2. Description of Related ArtFIG. 1 is a cross-sectional view of a prior art N-channel transistor fabricated on a substrate. As shown, the N-channel transistor includes two N-doped implants to produce a source and drain. The N-channel transistor further includes a gate and a P-doped implant that functions as a guard ring. To activate the N-channel transistor, a voltage VDS is provided to the drain and a voltage VGS is provided to the gate.In many applications, the N-channel transistor may be used in noise sensitive circuitry such as amplifiers, buffers, analog-to-digital converters, et cetera. As is often the case, an integrated circuit has millions of transistors on its substrate, some of which are used in digital circuitry that produces noise, which is coupled into the substrate.In the illustration of FIG. 1, a portion of a transistor (e.g., a gate and drain) that may be used in digital circuitry is shown on the substrate. The transistor within the digital circuitry is switched on and off using, for example, 1-volt VGS voltage, which produces a varying voltage between the supply voltage and AC or DC ground at the transistor's drain. Such a varying voltage produces an AC voltage gradient with respect to the guard ring of the transistor used in the noise sensitive circuit. The gradient is represented by the thin dashed lines.The substrate may include different regions as shown. For example, one region may be a P-doped region that has a relatively low resistivity (for example, 0.1 OHMS-centimeter) and a lightly doped P-region which has a higher resistivity (e.g., 20 OHMS-centimeter). Due to the voltage gradient and the impedance of the P-doped region, AC noise voltage couples from various terminals (e.g., drain and/or gate) of the transistor in the noise generating circuit to various terminals (e.g., drain and/or gate) of transistor of the noise sensitive circuit.In particular, the substrate coupled noise causes the voltage of the drain and/or gate of the transistor of the noise sensitive circuit to vary, which alters its operating point. As such, the substrate coupled noise modulates the signals being processed by the transistor as is desired function within the noise sensitive circuit causing adverse affects on the overall performance of the noise sensitive circuit.Therefore, a need exists for isolating substrate coupled noise within integrated circuits.BRIEF SUMMARY OF THE INVENTIONThe substrate coupled noise isolation within integrated circuit of the present invention substantially meets these needs and others. In one embodiment, an integrated circuit includes a substrate, a noise sensitive circuit, and a first low impedance guard ring. The substrate includes a well-doped blocking ring that at least partially surrounds the noise sensitive circuit. The noise sensitive circuit is fabricated on the substrate. 
The first low impedance guard ring is fabricated on the substrate to at least partially surround the well-doped blocking ring, wherein the first low impedance guard ring is operably coupled to a first circuit ground, wherein impedance of the first low impedance guard ring is substantially less than impedance of the well-doped blocking ring.In another embodiment, a serializer/deserializer (SERDES) module includes a Serial-Input-Parallel-Output (SIPO) module, a Parallel-Input-Serial-Output (PISO) module, a first well-doped blocking ring, a second well-doped blocking ring, a first low impedance guard ring, and a second low impedance guard ring. The SIPO module converts inbound high-speed serial data into inbound parallel data, wherein the SIPO module includes a first noise sensitive circuit and wherein the SIPO module is fabricated on a substrate of an integrated circuit. The PISO module converts outbound parallel data into high-speed outbound serial data, wherein the PISO module includes a second noise sensitive circuit and wherein the PISO module is fabricated on the substrate of the integrated circuit. The first well-doped blocking ring is fabricated on the substrate to at least partially surround the first noise sensitive circuit. The second well-doped blocking ring is fabricated on the substrate to at least partially surround the second noise sensitive circuit. The first low impedance guard ring is fabricated on the substrate to at least partially surround the first well-doped blocking ring, wherein the first low impedance guard ring is operably coupled to a first circuit ground and wherein impedance of the first low impedance guard ring is substantially less than impedance of the first well-doped blocking ring. The second low impedance guard ring is fabricated on the substrate to at least partially surround the second well-doped blocking ring, wherein the second low impedance guard ring is operably coupled to a second circuit ground and wherein impedance of the second low impedance guard ring is substantially less than impedance of the second well-doped blocking ring.In yet another embodiment, a field programmable gate array (FPGA) includes programmable logic fabric, a multi-gigabit transceiver (MGT), a first well-doped blocking ring, and a first low impedance guard ring. The programmable logic fabric is fabricated on a substrate of an integrated circuit. In some embodiments, the FPGA may include a digital clock manager (DCM) that generates at least one clock signal, wherein the DCM is fabricated on the substrate. The MGT transmits and receives high-speed data, wherein the MGT is fabricated on the substrate and includes noise sensitive circuitry. The first well-doped blocking ring is fabricated on the substrate to at least partially surround the noise sensitive circuitry. The first low impedance guard ring is fabricated on the substrate to at least partially surround the first well-doped blocking ring, wherein the first low impedance ring is operably coupled to a first circuit ground and wherein an impedance of the first low impedance guard ring is substantially less than the impedance of the first well-doped guard ring.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a cross-sectional diagram of a prior art N-channel transistor;FIG. 2 is a schematic block diagram of a programmable logic device in accordance with the present invention;FIG. 3 is a schematic block diagram of a multi-gigabit transceiver in accordance with the present invention;FIG. 
4 is a diagram of an integrated circuit in accordance with the present invention;FIG. 5 is a diagram of another embodiment of an integrated circuit in accordance with the present invention;FIG. 6 is a diagram of yet another integrated circuit in accordance with the present invention;FIG. 7 is a diagram of a further integrated circuit in accordance with the present invention;FIG. 8 is a cross-sectional diagram of the integrated circuit of FIG. 5;FIG. 9 is a cross-sectional diagram of the integrated circuit of FIG. 7;FIG. 10 is a cross-sectional diagram of the integrated circuit of FIG. 4; andFIG. 11 is a cross-sectional diagram of the integrated circuit of FIG. 6.DETAILED DESCRIPTION OF THE INVENTIONFIG. 2 is a schematic block diagram of a programmable logic device 10 that includes programmable logic fabric 12, an input/output section 14, and memory 16. The programmable logic fabric 12 may include one or more processing cores and programmable logic circuitry. Such programmable logic circuitry may include programmable logic arrays (PLAs), programmable array logic (PAL) devices, erasable programmable logic devices (EPLDs) and/or programmable gate arrays (PGAs). Memory 16 may be block random access memory (BRAM). Input/output section 14 may include a plurality of digital clock managers (DCMs) and a plurality of multi-gigabit transceivers (MGTs). An alternative embodiment of a programmable logic device may be found in U.S. patent application Ser. No. 10/683,944 by Young, which is incorporated herein in its entirety.The digital clock managers provide various clock signals to the programmable logic fabric 12 and may further provide clock signals to the multi-gigabit transceivers. The multi-gigabit transceivers provide digital interfaces for the programmable logic fabric 12 to exchange data with components external to the programmable logic device 10. In general, the multi-gigabit transceivers provide serial-to-parallel conversion of received serial data and provide parallel-to-serial conversion for outgoing data. The MGTs may include signal detection circuitry to detect the presence of the received serial data and to enable the receiver section within the MGT. Further, the digital clock managers may provide clock signals to memory, or other input/output modules, for double data rate and quad data rate accesses.FIG. 3 is a schematic block diagram of a multi-gigabit transceiver 20 that includes a serializer/de-serializer (SERDES) module 22, a physical coding sub-layer (PCS) module 26 and an interface 32. The SERDES module 22 includes a parallel-in-serial-out module 24 and a serial-in-parallel-out module 26. The parallel-in-serial-out module 24 may include one or more noise sensitive circuits 34. Similarly, serial-in-parallel-out module 26 may include one or more noise sensitive circuits 36. The physical coding sub-layer 26 includes a transmit PCS module 28 and a receive PCS module 30.The interface 32 provides coupling between the programmable logic fabric 12 and the PCS module 26. For transmitting data, the interface 32 provides transmit data words 38 (e.g., bytes of information formatted in accordance with a particular protocol) from the programmable logic device 12 to the transmit PCS module 28. 
In general, the transmit PCS module 28 converts the transmit data words 38 (e.g., the bytes of information) into transmit parallel data 40 (e.g., parallel bits of information).The parallel-in-serial-out module 24 converts the transmit parallel data 40 into transmit serial data 42 (e.g., a serial bit stream). Note that the noise sensitive circuit 34 may be incorporated in high speed analog circuits of the parallel-in-serial-out module 34 including, but not limited to, amplifiers, analog-to-digital converters, buffers, VCO, charge pumps, analog latches, and analog XOR gates, small signal circuits, et cetera.For received data, the serial-in-parallel-out module 26 converts receive serial data 44 into receive parallel data 46. The receive PCS module 30 converts the received parallel data 46 into received data words 48. The interface 32 provides the received data words 48 to the programmable logic fabric 12.Protection of the noise sensitive circuits 34 and 36 from substrate coupled noise will be described in greater detail with reference to FIGS. 4-9. The circuitry that generates the noise, which is typically digital circuitry, may be included anywhere within the MGT 20 and/or the field programmable gate array. Such digital circuitry includes, but is not limited, logic gates, digital clocks, digital filters, large signal analog circuits, et cetera.As one of average skill in the art will appreciate, the MGT 20 may be implemented on an integrated circuit as a stand-alone device or may be implemented on an integrated circuit as part of other devices such as the programmable logic device 10.FIG. 4 is a diagram of an integrated circuit 50 that may support the field programmable gate array, an MGT, a SERDES module and/or any other circuitry that may be fabricated on an integrated circuit. The fabrication process for producing integrated circuit 50 may be done using conventional integrated circuit CMOS fabrication process or other types of integrated circuit fabrication technologies.Integrated circuit 50 includes a substrate 60, the noise sensitive circuits 34 and 36, a noise generating circuit 62, a 1st well-doped blocking ring 52, a 1st low impedance guard ring 54, a 2nd well-doped blocking ring 56, and a 2nd low impedance guard ring 58. As shown, each of the circuits 34, 36 and 62 have their own ground circuit connections. For example, the noise sensitive circuit 34 may be coupled to one analog ground, the noise sensitive circuit 36 may be coupled to a second analog ground, while the noise generating circuit 62 may be coupled to a separate ground.As shown, the 1st well-doped blocking ring 52 surrounds the noise sensitive circuit 34, which may include one or more of resistors, traces, transistors, and/or capacitors. The well-doped blocking ring may be a P-well blocking ring or an N-well blocking ring as will be further described with reference to FIGS. 8 and 9, while the cross-sectional view of FIG. 4 taken at lines 10-10 is shown in FIG. 10. The low impedance guard ring 54 encircles the well-doped blocking ring 52 and is coupled to a separate ground connection through a low impedance path. The ground path impedance of the low impedance guard ring 54 is substantially less than impedance of the well-doped blocking ring 52. For example, the low impedance of the ground path of the guard ring 54 may have one-half or less of the impedance of the well-doped blocking ring between the noise sensitive and noise generating circuits, where impedance is a function of resistivity, width, and perimeter. 
With such an implementation, substrate noise that is produced by the noise generating circuit 62 is shunted to ground via the low impedance guard ring 54 before reaching the noise sensitive circuit 34. As such, the noise sensitive circuit 34 is more immune to the substrate noise produced by the noise generating circuit 62. A similar situation occurs for noise sensitive circuit 36.As one of average skill in the art will appreciate, the well-doped blocking ring 52 and/or the low impedance guard ring 54 may only partially encircle the noise sensitive circuit 34 to achieve a level of isolation with respect to substrate noise.FIG. 5 is a diagram of another integrated circuit 50 that includes the noise sensitive circuit 34, noise sensitive circuit 36, noise generating circuit 62, well-doped blocking ring 52, well-doped blocking ring 56 and low impedance guard ring 58. In this integrated circuit embodiment, the low impedance guard ring 58 encircles the noise generating circuit 62. As such, substrate noise generated by the noise generating circuit 62 is shunted to ground via the low impedance guard ring 58 before reaching the well-doped blocking rings 56 and 52 which further attenuates substrate noise thereby providing substrate noise immunity to the noise sensitive circuits 34 and 36.FIG. 6 is a diagram of another embodiment of an integrated circuit 50 that includes the noise sensitive circuit 34, a well-doped blocking ring 52, a low impedance guard ring 64 and a low impedance guard ring 54. As shown, each of the circuits 34 and 62 and the guard rings 64 and 54 has separate grounds. In this embodiment, a low impedance guard ring 64 encircles the noise sensitive circuit 34 and is within the well-doped blocking ring 52. Note that the rings 52, 54 and/or 64 may be partial rings, thus not fully encircling the noise sensitive circuit. For example, when the noise sensitive circuit 34 is at an edge of the substrate 60, the rings 52, 54, and/or 64 may only partially surround the circuit 34 or 36. A cross-sectional view of FIG. 6 taken at lines 11-11 is shown in FIG. 11.FIG. 7 is a diagram of yet another embodiment of integrated circuit 50 that includes the noise sensitive circuit 34, noise generating circuit 62, well-doped blocking ring 56, low impedance guard ring 64, and low impedance guard ring 58. In this embodiment, the low impedance guard ring 58 encircles the noise generating circuit 62, the low impedance guard ring 64 encircles the noise sensitive circuit 34, and the well-doped blocking ring 56 encircles the low impedance guard ring 64. As shown, the noise sensitive circuit 34, noise generating circuit 62 and guard rings 58 and 64 each include their own ground connection. As one of average skill in the art will appreciate, the rings 56, 58 and 64 may be partial rings thus, only partially surrounding the respective circuits 34 and 62. As one of average skill in the art will appreciate, another well-doped blocking ring may be included and encircling the low impedance guard ring 58.FIG. 8 is a cross-sectional diagram of the integrated circuit of FIG. 5. In this example, the noise sensitive circuit is represented by an N-channel transistor as is the noise generating circuit. In this instance, transistor of the noise sensitive circuit 36 includes two N+-doped implants (e.g., drain and source), a gate, and a P+-doped guard ring. 
The surrounding substrate includes a P-doped region that has a relatively low resistivity, for example, 0.1 to 0.2 OHMS-centimeter.The transistor of the noise generating circuit 62 includes two N+-doped implants (e.g., drain and source), a gate and a P+-doped encircling guard ring 58. The transistor is fabricated in a P-doped region of the substrate that has a relatively low resistivity (0.1 OHMS-centimeter). The well-doped blocking ring 56 is fabricated utilizing a lightly P−-doped region, which has a relatively high resistivity (e.g., 20 OHMS-centimeter). By having a high impedance substrate region (i.e., the well-doped blocking region 56) surrounding the low impedance guard ring 58, noise generated by the transistor of noise generating circuit 62 will be primarily shunted to ground via the low impedance guard ring 58 and substantially contained within the corresponding P-doped region. As such, very little substrate noise will be coupled to the transistor of the noise sensitive circuit 36.FIG. 9 is a cross-sectional diagram of the integrated circuit of FIG. 7. This diagram differs from FIG. 8 in that the transistor of the noise sensitive circuit 34 is encircled by its own low impedance guard ring 64. In this embodiment, any substrate noise that is not shunted to ground via the low impedance guard ring 58 and that is coupled through the well-doped blocking ring will be shunted to ground via the low impedance guard ring 64.As one of average skill in the art will appreciate, the concepts provided with respect to FIGS. 2-9 may be equally applicable for P-channel transistors as well as for other integrated circuit fabrication processes.As one of average skill in the art will appreciate, the term “substantially” or “approximately”, as may be used herein, provides an industry-accepted tolerance to its corresponding term and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to twenty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As one of average skill in the art will further appreciate, the term “operably coupled”, as may be used herein, includes direct coupling and indirect coupling via another component, element, circuit, or module where, for indirect coupling, the intervening component, element, circuit, or module does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As one of average skill in the art will also appreciate, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two elements in the same manner as “operably coupled”. As one of average skill in the art will further appreciate, the term “compares favorably”, as may be used herein, indicates that a comparison between two or more elements, items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.The preceding discussion has presented a technique for isolating substrate noise thereby improving overall performance of integrated circuits. 
As one of average skill in the art will appreciate, other embodiments may be derived from the teaching of the present invention without deviating from the scope of the claims. |
A network device for determining an optimal sampling phase for source synchronous data received on a data communications channel. The network device includes a transmitter clock domain for providing a data pattern along with a synchronous free-running clock. The network device also includes a plurality of phases of a core clock. The network devices further includes means, in a core clock domain, for sampling a data pattern generated by the received clock with the plurality of phases to determine the optimal phase for sampling the data received from the external device. |
A network device (100) for determining an optimal sampling phase for source synchronous data (204) sent from an external device (201), the network device (100) comprising:receiving means for receiving from a transmitting device, in a transmitter clock domain (203), an external device clock (202) and data (204) with a fixed phase relationship;a phase shift generator (206) for phase shifting the external device clock (202); characterized by a divide-by-two circuit (208) for receiving the phase shifted external device clock (207) and for generating an alternating 1/0 signal pattern (302), which alternates every clock cycle;means (210, 212) for sampling the data signal (304, 306) in the transmitter clock domain (203) using said phase shifted external device clock, and for aligning (214, 216) the sampled data with said alternating signal 1/0 pattern (302);means for generating a plurality of phases (222a, 222b, 222c, 222d) of a core clock;means for sampling the alternating 1/0 signal pattern (302) in a core clock domain (205) with the plurality of phases (222a, 222b, 222c, 222d);means (224) for selecting the optimal sampling phase (308) for sampling the alternating 1/0 signal pattern (302); andmeans for sampling the aligned data (314, 316) with the selected optimal sampling phase (308).The network device (100) according to claim 1, wherein the transmitter clock domain further comprises means for sampling the data (204) with the output of the phase shift generator, wherein the data (204) is sampled using edges of a phase shifted external device clock (207) outputted by the phase shift generator.The network device (100) according to claim 2, wherein the transmitter clock domain further comprises means for aligning data (204) sampled at the rising and falling edges of the phase shifted external device clock (207) outputted by the phase shift generator with the locally generated signal pattern (302).A method for determining an optimal sampling phase for source synchronous data (204) sent from an external device (201), the method comprising the steps of:receiving from a transmitting device, in a transmitter clock domain (203), external device clock (202) and data (204) with a fixed phase relationship;phase shifting the external device clock (202, 206),generating an alternating 1/0 signal pattern (302), which alternates every clock cycle, out of the phase shifted external device clock using a divide-by-two-circuit,sampling the data signal (304, 306) in the transmitter clock domain (203) using said phase shifted external device clock, and aligning (214, 216) the sampled data with said alternating signal pattern (302);generating a plurality of phases (222a, 222b, 222c, 222d) of a core clock;sampling the alternating 1/0 signal pattern (302) in a core clock domain (205) with the plurality of phases (222a, 222b, 222c, 222d);selecting the optimal sampling phase (308) for sampling the alternating 1/0 signal pattern (302); andsampling the aligned data (314, 316) with the selected optimal sampling phase (308).The method according to claim 4, wherein the step of creating comprises transmitting the external device clock (202) to a phase shift generator and transmitting an output from the phase shift generator to a circuit which creates the locally generated signal pattern (302).The method according to claim 5, further comprising the step of sampling the data (204) with the output of the phase shift generator.The method according to claim 6, further comprising the step of aligning data (204) sampled using edges 
of the output of the phase shift generator with the locally generated signal pattern (302). |
BACKGROUND OF THE INVENTIONField of the InventionThe present invention relates to a network device in a data communications network and more particularly to a method of obtaining an optimal sampling of data obtained from an external source synchronous communication channel. Description of the Related Art A data network may include one or more network devices, such as a Ethernet switching chip, each of which includes several modules that are used to process information that is transmitted through the device. Specifically, as data enters the device from multiple ports, it is forwarded to an ingress module where switching and other processing are performed on the data. Thereafter, data is transmitted to one or more destination ports through one or more units including a Memory Management Unit (MMU). The MMU provides access to one or more off-chip source synchronous memory devices, for example, an external Double Data Rate (DDR) memory. The network device typically generates a source synchronous clock that is provided with data during a write operation on the source synchronous memory device. The memory device then uses the clock to capture the data and perform the write operation. However, when the network device is performing a read operation from the memory device, the delay for data and clock from the memory device is indeterministic based on at least the trace lengths and process corner associated with the memory device. For example, if there is a fast process or slow process corner device, the delay from the memory device will vary. As such, the round trip delays for a read operation can vary greatly from chip-to-chip or board-to-board.When a read operation is performed by the source synchronous memory device, the memory device returns data and clock. However, the clock phase from the source synchronous memory device can vary relative to the clock within the network device because the phases may shift. As is known, when the phases of the clock and data line up with each other, bit errors may occur and the network device cannot adequately sample data returned from the memory device.Therefore, to obtain the least amount of error, a mechanism must be provided to sample the received data at a time when the data is most stable. Some source synchronous interfaces and some memory devices provide free running clocks. Current network devices typically sample the data multiple times to find out where the edges exist in relation to the internal clock in the network device. However, when there are no memory operations being performed by the source synchronous memory device, the received data is not changing. Hence, there are no edges/transitions for determining the optimal phase of the clock. Furthermore, even if memory operations are occurring, if the same data value is being continuously read, there will still be no transitions for determining the optimal phase of the clock.To overcome the problems presented by source synchronous memory devices with free running clocks, some network devices use a first-in-first-out (FIFO) buffer to absorb difference between the memory controller clock in the network device and the clock generated by the source synchronous memory device. However, the use of the FIFO to absorb the differences between the clocks increases gate count which in turn increases circuit area. 
Use of a FIFO to realign clock phases also increases latency for received data.US 6,509,762 describes a method and apparatus for capturing data read from a memory device that is aligned with respect to a clock strobe signal originating from the memory device, which has constraints with respect to a local clock signal supplied to the memory device. The apparatus includes a circuit for capturing the data read from the memory device relative to the clock strobe signal to produce captured read data, a circuit for latching the captured read data relative to a sample clock signal, and a circuit for measuring a phase difference between the sample clock signal and the clock strobe signal and adjusting a phase of the sample clock signal as a function of the phase difference.According to the invention, there are provided a network device for determining an optimal sampling phase for source synchronous data as defined by independent claim 1, and a method for determining an optimal sampling phase for source synchronous data as defined by independent claim 4.Further advantageous features of the invention are defined by the dependent subclaims.Advantageously, the transmitter clock domain comprises means for transmitting the clock to a phase shift generator and for transmitting an output from the phase shift generator to a circuit which creates the data pattern.Advantageously, the transmitter clock domain further comprises means for sampling the data with the output of the phase shift generator, wherein the data is sampled using edges of a clock outputted by the phase shift generator.Advantageously, the transmitter clock domain further comprises means for aligning data sampled at the rising and falling edges of the clock outputted by the phase shift generator with the locally generated data pattern.Advantageously, the transmitter clock domain comprises a flip-flop cell that is used in a divide-by-two operation on the clock and in sampling the data generated by the memory device.Advantageously, the sampling means comprises means for sampling the locally generated data pattern multiple times with the plurality of phases to determine the optimal sampling phase for sampling the received data.Advantageously, the memory clock domain further comprises means for providing the locally generated data pattern with a deterministic rate of periodic transitions.Advantageously, at least one of the plurality of phases includes an offset from the core clockAdvantageously, the sampling means includes means for selecting one of the plurality of phases that provides sampling points that are farthest from the edges of the received data.Advantageously, the step of creating comprises transmitting the clock to a phase shift generator and transmitting an output from the phase shift generator to a circuit which creates the locally generated data pattern.Advantageously, the method further comprises the step of sampling the data with the output of the phase shift generator.Advantageously, the method further comprises the step of aligning data sampled using edges of the output of the phase shift generator with the locally generated data pattern.Advantageously, the step of sampling comprises the step of sampling the locally generated data pattern multiple times with the plurality of phases to determine the optimal sampling phase for sampling the received data.Advantageously, the method further comprises the step of providing the locally generated data pattern with a deterministic rate of periodic transitions.Advantageously, the step 
of sampling comprises the step of providing at least one of the plurality of phases with an offset from the core clockAdvantageously, the step of sampling comprises the step selecting one of the plurality of phases that provides sampling points that are farthest from the edges of the received data.BRIEF DESCRIPTION OF THE DRAWINGSThe accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention that together with the description serve to explain the principles of the invention, wherein:Figure 1 illustrates a network device in which an embodiment of the present invention may be implemented;Figure 2a illustrates how memory read data is sampled by the network device;Figure 2b aligned memory clock and read data;Figure 3 illustrates sampling phases generated by the network device using multiple quadrature phases; andFigure 4 illustrates the steps in providing data for sampling from a memory clock domain to a network device clock domain.DETAILED DESCRIPTION OF PREFERRED EMBODIMENTSReference will now be made to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.Figure 1 illustrates a network device, such as a switching chip, in which an embodiment the present invention may be implemented. Device 100 includes an ingress module 102, a MMU 104, and an egress module 106. Ingress module 102 is used for performing switching functionality on an incoming packet. The primary function of MMU 104 is to efficiently manage cell buffering and packet pointer resources in a predictable manner even under severe congestion scenarios. Egress module 106 is used for performing packet modification and transmitting the packet to an appropriate destination port.Device 100 may also include one internal fabric high speed port, for example a HiGig port, 108, one or more external Ethernet ports 109a-109x, and a CPU port 110. High speed port 108 is used to interconnect various network devices in a system and thus form an internal switching fabric for transporting packets between external source ports and one or more external destination ports. As such, high speed port 108 is not externally visible outside of a system that includes multiple interconnected network devices. CPU port 110 is used to send and receive packets to and from external switching/routing control entities or CPUs. According to an embodiment of the invention, CPU port 110 may be considered as one of external Ethernet ports 109a-109x. Device 100 interfaces with external/off-chip CPUs through a CPU processing module 111, such as a CMIC, which interfaces with a PCI bus that connects device 100 to an external CPU.Network traffic enters and exits device 100 through external Ethernet ports 109a-109x. Specifically, traffic in device 100 is routed from an external Ethernet source port to one or more unique destination Ethernet ports. In one embodiment of the invention, device 100 supports twelve physical Ethernet ports 109, each of which can operate in 10/100/1000 Mbps speed and one high speed port 108 which operates in either 10 Gbps or 12 Gbps speed.In an embodiment of the invention, device 100 is built around a shared memory architecture, wherein MMU 104 provides access to one or more off-chip source synchronous memory devices, for example, an external Double Data Rate (DDR) memory device 201. 
In an embodiment of the invention, MMU 104 includes 4 DDR interfaces. During a write operation to device 201, network device 100 typically generates a source synchronous clock that is provided with data to the source synchronous memory device. Memory device 201 then uses the clock to capture the data and perform the write operation. However, when network device 100 is performing a read operation from memory device 201, the phase of the received clock and data is indeterministic and thus an optimal sampling phase must be derived.Figure 2a illustrates how memory read data is sampled by device 100 and timing is transferred from a clock domain 203 of the external memory to an internal clock domain 205 of device 100. As shown in figure 2 , during a read operation in memory clock domain 203, memory device 201 generates a clock 202 and data 204 which is aligned as shown in figure 2b . This figure shows double data rate (DDR) data but the data could also be single data rate (SDR). However, the aligned clock 202 and data 204 do not provide an optimal sampling phase because clock edges do not occur when the data is most stable. Therefore, clock 202 is transmitted to a 90 degree phase shift generator 206, with offset control, which generates a 90 degree phase offset clock 207. Shift generator 206 may be a standard DLL or PLL generator. Clock 207 is then used to sample data 204, wherein clock 207 samples data 204 at the rising edge of clock 207 at flop 210 and samples data 204 at the falling edge of clock 207 at flop 212. Thereafter flops 214 and 216 are used to line up the data sampled at the rising and falling edges of the clock 207. Clock 207 is also transmitted to a divide-by-two circuit 208 which creates an alternating 1/0 data pattern that alternates every clock cycle. According to an embodiment of the invention, by using the same flip-flop cell in the divide-by-two operation as is used for the initial read data sample, the inventive system allows for better matching of delays and better determination of the optimal sampling phase. In an embodiment of the inventive system, memory 201 is not required to perform an operation in order for device 100 to obtain the needed transitions that are sampled to determine an optimal phase for sampling data. The sampled results are then synchronized back into main clock domain 205 and are then fed into the state machine to decide which quadrature phase should be used to sample data from memory clock domain 203.In an embodiment of the invention, along with the rise and fall data transmitted from memory device 201, device 100 also obtains the alternating 1/0 data pattern generated by circuit 208, wherein the alternating data pattern is in line with the aligned rise and fall data from flops 214 and 216. Device 100 then uses phases 222a-222d to multiply sample the alternating 1/0 data pattern multiple times to determine the optimal sampling phase. Thereafter, in core clock domain 205, device 100 provides multiple quadrature phases 222a-222d of a core clock. Phase 222a has a 0 degree offset from the core clock, phase 222b has a 270 degree offset from the core clock, phase 222c has a 180 degree offset from the core clock and phase 222d has a 90 degree offset from the core clock. According to one embodiment of the invention, device 100 generates four phases 222a-222d of the core clock. 
However, as is known to those of ordinary skill in the art, device 100 may generate more than four phases for better resolution.In an embodiment of the inventive system, during sampling, device 100 ignores data 204 returned from memory device 201. Device 100 only samples the alternate 1/0 data pattern from clock 202, wherein the 1/0 data pattern provides a transition in every cycle. Since device 100 samples the alternating 1/0 data pattern, memory 201 is not required to perform an operation in order for device 100 to obtain the needed transitions that are sampled to determine an optimal phase for sampling data. As such, the inventive system eliminates the drifts that occur between phases when a transition does not occur every cycle, thereby causing the phase to be off. By producing a transition every cycle, the inventive system enables device 100 to constantly re-correct in order to determine the location of the optimal sampling phase.Sampling of the alternating data pattern provides an advantage over directly sampling of the received clock or data in that it enables better phase match with the delays data from flops 214 and 216 to provide the most optimal sampling phase. The process corner delay variations of the alternating data pattern match the process corner delay variation of the data from flops 214 and 216. As is known to those skilled in the art, the clock returned from memory 201 typically includes jitter that blurs the edges. As such when a sample is obtained from near the edge, the data pattern may sometimes be a zero or a one, which is a non-optimal point for sampling data. Therefore, according to an embodiment of the invention, device 100 selects the optimal sampling phase that will produce the fewest sampling error, that is, a sampling phase that is farthest away from the edges.As mentioned above, device 100 operates without the need for any memory operations. As such, when device 100 is started, as long as a free running clock in memory 201 is executing, device 100 can determine the optimal sampling phase. Device 100 therefore relies only on the free running read strobe clock from external memory 210 and may run without a training sequence and remains locked even in the absence of memory operations. Since there is a transition every cycle, device 100 can realign every cycle, is insensitive to data patterns, and can tolerate infinite sequences of ones and zeros. Device 100 can also respond quickly to changes in phase of memory read strobe clocks since the sampled data has a guaranteed transition on every rising clock edge.Figure 3 illustrates sampling phases generated by device 100 using phases 222a-222d. According to the inventive system, as illustrated in figure 3 , the 90 degree shifted clock 207 was used to create an alternating 1/0 data pattern 302 which is then double-flop sampled with multiple 90 degree shifted quadrature phases 222a-222d. The sample clock which lands in the middle of the eye of the alternate 1/0 pattern is the used to sample all of the read data from the memory. Therefore, based on the illustrations of figure 3 , clock phase 222a will be selected as the optimal sampling phase because that phase provides points that are farthest away from the edges of the clock. Since an embodiment of the inventive system uses the same flip-flop cell that is used for generating the alternate 1/0 pattern for sampling the read data from the memory, the phase of the alternate 1/0 pattern is virtually identical to the phase of the sampled rise and fall data 304 and 306. 
Therefore, the optimal clock phase 222a, as shown as 308, needed to sample the alternate 1/0 pattern will be the same as that needed to sample rise and fall data 314 and 316 at the output of flops 214 and 216.Figure 4 illustrates the steps implemented in transferring timing from a memory clock domain to a core clock domain in order to determine an optimal sampling phase. In Step 4010, during a read operation in memory clock domain 203, memory device 201 generates clock 202 and data 204. In Step 4020, clock 202 is then transmitted to 90 degree phase shift generator 206 which generates 90 degree phase offset clock 207. It should be noted that while the phase shift generator 206 in one embodiment of the invention is a 90 degree phase shift generator, a 90 degree phase shift generator is optional and other phase shift generators may be implemented in the present invention. In Step 4030, clock 207 is used to sample data at the rising and falling edges of clock 207. In Step 4040, the data sampled at the rising and falling edges of the clock 207 are lined up. In Step 4050, clock 207 is also transmitted to divide-by-two circuit 208 which creates an alternating 1/0 data pattern that alternates every clock cycle. In Step 4060, in core clock domain 205, device 100 provides multiple quadrature phases 222a-222d for sampling the alternating 1/0 pattern. In Step 4070, device 100 samples the alternating 1/0 data pattern multiple times with clocks 222a-222d to determine which of the quadrature phases is optimal for resampling the received data.According to an embodiment, device 100 includes an algorithm for determine which quadrature clock 222a-222d to use in sampling data. The algorithm relies on comparing samples (voting) from clocks 222a-222d of the sampled values from the alternating 1/0 pattern to determine where the edges of the received data are located.The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the scope of the invention. |
The invention provides mobile NAND parity information techniques. Disclosed in some examples are techniques for handling parity data of a non-volatile memory device with limited cache memory. In certain examples, user data can be programmed into the non-volatile memory of the non-volatile memory device in data stripes, and parity information can be calculated for each individual data stripe withina limited capacity cache of the non-volatile memory device. The individual parity information can be swapped between a swap block of the non-volatile memory and the limited capacity cache as additional data stripes are programmed. |
1. A NAND memory device, comprising:Random access memory RAM buffer;An array of NAND memory cells organized into pages, data stripes, and parity information areas, wherein the data strips include user data and parity data for the user data, and wherein the parity information area Containing parity information for the data stripe; andA controller configured to:programming first user data to a first portion of a first plurality of data stripes of the NAND memory cell array;Copy current parity information of a second plurality of data stripes from the RAM buffer to the parity information area;Copying stored parity information for the first plurality of data stripes from the parity information area to the RAM buffer to replace the current parity for the second plurality of data stripes test; andNew parity information for the first plurality of data stripes is determined using the stored parity information and the first user data.2. The NAND memory device of claim 1, wherein each data stripe of the first plurality of data stripes spans a plurality of pages of the NAND memory cell array.3. The NAND memory device of claim 2, wherein each of the plurality of pages of the NAND memory for each data stripe is associated with a word line of the NAND memory device; andwherein pages of a first data stripe are separated from each other page in the first data stripe by at least a plurality of word lines within a plane of the NAND memory array.4. The NAND memory device of claim 1, wherein the parity information area is larger in size than the RAM buffer.5. The NAND memory device of claim 1, wherein the RAM buffer includes static RAM SRAM.6. A method for processing parity data, comprising:Programming a first number of data stripes of the NAND memory device with first data;loading a first number of parity information for the first number of data stripes from a NAND memory of the NAND memory device to a random access memory RAM buffer of the NAND memory device;Refresh the first number of parity information using the first data;Programming a second number of data stripes of the NAND memory with second data;Copy the first number of parity information to the parity information area of the NAND memory;Loading a second number of parity information of the second number of data stripes from the parity information area into the RAM buffer to replace the first number of parity information;Refreshing the second number of parity information using the second data; andEach of the first number of data stripes and the second number of data stripes includes user data and parity data of the user data.7. The method of claim 6, each data stripe of the first number of data stripes spans a plurality of pages of an array of NAND memory cells of the NAND memory device.8. The method of claim 7, wherein each page of the plurality of pages of each data stripe is associated with a word line of the NAND memory device; andwherein pages of a first data stripe are separated by a plurality of word lines from each other page in the first data stripe within a plane of the NAND memory array.9. The method of claim 7, each data stripe of the second number of data stripes spanning the plurality of pages of the NAND memory cell array.10. The method of claim 6, wherein the size of the RAM buffer is less than a combined size of the first number of parity information and the second number of parity information.11. The method of claim 10, wherein the size of the RAM buffer is smaller than a parity information area of the NAND memory.12. 
The method of claim 6, wherein said loading a first number of parity information for said first number of data stripes from a NAND memory of said NAND memory device to said NAND memory device. A random access memory RAM buffer includes loading a first number of parity information for the first number of data stripes from a NAND memory of the NAND memory device to a static RAM SRAM buffer of the NAND memory device.13. A method for processing parity data, comprising:Programming user data into a plurality of data stripes across a plurality of pages of NAND memory of a NAND memory device of a mobile electronic device, wherein each of the plurality of data stripes includes a portion of the user data and parity data for said portion of said user data;Update first parity information for a first plurality of data stripes of the plurality of data stripes using a random access memory RAM buffer of the memory device; andThe first parity information of the first plurality of data stripes is exchanged between a NAND memory block as a swap block and the RAM buffer.14. The method of claim 13, comprising filling the swap block locations with parity information for the plurality of data stripes.15. The method of claim 14, wherein the RAM buffer is sized to hold a fraction of the parity information held by the swap block location.16. The method of claim 13, comprising:Programming second user data into a second plurality of data stripes across the plurality of data stripes of the plurality of pages of the NAND memory device of the mobile electronic device, wherein the Each data stripe of the second plurality of data stripes includes a portion of the second user data and second parity data of the portion of the second user data; andThe RAM buffer is used to update second parity information for the second plurality of data stripes.17. The method of claim 16, wherein updating the second parity information includes:Retrieving second parity information from the swap block location of the NAND memory;storing the second parity information in the RAM buffer; andA logical operation is performed using the second parity information and the second user data to provide updated second parity information.18. A machine-readable medium comprising instructions that, when executed by a machine, cause the machine to perform operations including:Programming user data into a plurality of data stripes across a plurality of pages of NAND memory of a NAND memory device of a mobile electronic device, wherein each of the plurality of data stripes includes a portion of the user data and parity data for said portion of said user data;Update first parity information for a first plurality of data stripes of the plurality of data stripes using a random access memory RAM buffer of the memory device; andThe first parity information of the first plurality of data stripes is exchanged between a NAND memory block as a swap block and the RAM buffer.19. The machine-readable medium of claim 18, wherein the operations further comprise:programming second user data into a second plurality of data stripes across the plurality of data stripes of the plurality of pages of the NAND memory device of the mobile electronic device; andThe RAM buffer is used to update second parity information for the second plurality of data stripes.20. 
The machine-readable medium of claim 19, wherein updating the second parity information includes:Retrieving second parity information from the swap block location of the NAND memory;storing the second parity information in the RAM buffer; andA logical operation is performed using the second parity information and the second user data to provide updated second parity information. |
Mobile NAND Parity Information TechnologyTechnical fieldThe present invention relates to mobile NAND parity information technology.Background techniqueMemory devices are typically provided as internal semiconductor integrated circuits in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory.Volatile memory requires power to maintain its data and includes random access memory (RAM), dynamic random access memory (DRAM), or synchronous dynamic random access memory (SDRAM), among other memories.Non-volatile memory can retain stored data when no power is applied, and includes flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), static RAM (SRAM), and erasable programmable memory. ROM (EPROM), resistive variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), magnetoresistive random access memory (MRAM) or 3D XpointTM memory and other memories.Flash memory is used as non-volatile memory for a wide range of electronic applications. Flash memory devices typically include one or more groups of transistors, floating gates, or charge trapping memory cells, which allow for high memory density, high reliability, and low power consumption.Two common types of flash memory array architectures include NAND and NOR architectures, each named after the basic memory cell configuration and the logical form in which they are arranged. The memory cells of a memory array are typically arranged in a matrix. In an example, the gate of each floating gate memory cell in a row of the array is coupled to an access line (eg, a word line). In a NOR architecture, the drain of each memory cell in a column of the array is coupled to a data line (eg, a bit line). In a NAND architecture, the drains of each memory cell in a string of the array are coupled together in series (source to drain) between a source line and a bit line.NOR and NAND architecture semiconductor memory arrays are accessed through decoders that activate specific memory cells by selecting word lines coupled to gates of specific memory cells. In a NOR architecture semiconductor memory array, once activated, a selected memory cell places its data value on a bit line, causing different currents to flow depending on the state in which it programmed the particular cell. In NAND architecture semiconductor memory arrays, high bias voltages are applied to drain-side select gate (SGD) lines. The word lines coupled to the gates of the unselected memory cells of each group are driven at a designated pass voltage (e.g., Vpass) to operate the unselected memory cells of each group as pass transistors (e.g., with no Delivers current in a manner constrained by its stored data value). Current then flows from the source line to the bit line through each series-coupled group, bounded only by the selected memory cell of each group, thereby placing the current encoded data value of the selected memory cell in place on-line.Each flash memory cell in a NOR or NAND architecture semiconductor memory array can be individually or collectively programmed to one or several programmed states. 
For example, a single-level cell (SLC) may represent one of two programmed states (eg, 1 or 0), thereby representing one data bit.However, flash memory cells can also represent one of more than two programmed states, allowing the fabrication of higher density memories without increasing the number of memory cells because each cell can represent more than one binary digit (e.g. , one or more bits). Such cells may be called multi-state memory cells, multi-digit cells, or multi-level cells (MLC). In some examples, MLC may refer to a memory cell that can store two bits of data (eg, one of four programmed states) per cell, and a three-level cell (TLC) may refer to a memory cell that can store three bits per cell. A memory cell of data bits (eg, one of eight programmed states), and a four-level cell (QLC) can store four data bits per cell. MLC as used herein in its broader context may refer to any memory cell that can store more than one bit of data per cell (ie, that can represent more than two programmed states).Traditional memory arrays are two-dimensional (2D) structures arranged on the surface of a semiconductor substrate. To increase the memory capacity of a given region and to reduce cost, the size of individual memory cells is reduced. However, there are technical limitations in reducing the size of individual memory cells and therefore the memory density of 2D memory arrays. In response, three-dimensional (3D) memory structures, such as 3D NAND architecture semiconductor memory devices, have been developed to further increase memory density and reduce memory costs.Such 3D NAND devices typically include one or more source side select gates (SGS) coupled in series (e.g., drain to source) proximate the source and one or more drain line select gates proximate the bit line. memory cell string between poles (SGD). In an example, the SGS or SGD may include one or more field effect transistor (FET) or metal oxide semiconductor (MOS) structural devices, etc. In some examples, the strings will extend vertically through multiple vertically spaced layers containing corresponding word lines. Semiconductor structures (eg, polysilicon structures) may extend adjacent a string of memory cells to form channels for the memory cells of the string. In the case of vertical strings, the polysilicon structure may be in the form of vertically extending pillars. In some examples, the string may be "folded" and thus arranged relative to the U-shaped post. In other examples, multiple vertical structures can be stacked on top of each other to form a stacked array of memory cell strings.Memory arrays or devices may be combined together to form the storage capacity of a memory system, such as solid state drives (SSD), universal flash storage (UFSTM) devices, multimedia card (MMC) solid state storage devices, embedded MMC devices (eMMCTM), etc. SSDs are particularly useful as primary storage devices for computers, offering advantages over traditional hard drives with moving parts with respect to, for example, performance, size, weight, strength, operating temperature range, and power consumption. For example, SSDs may have reduced seek times, latency, or other delays associated with disk drives (eg, electromechanical, etc.). 
SSDs use non-volatile memory cells, such as flash memory cells, to avoid internal battery supply requirements, allowing the drive to be more versatile and compact.Contents of the inventionIn one aspect, the present invention provides a NAND memory device including: a random access memory (RAM) buffer; an array of NAND memory cells organized into pages, data stripes of user data and parity information areas, wherein the parity information region includes parity information associated with the data stripe of user information; and a controller configured to: program first user data to a first user data of the NAND memory cell array a first portion of a plurality of data stripes; copying current parity information associated with a second plurality of data stripes from the RAM buffer to the parity information area; copying the first plurality of data strips copying stored parity information for a data stripe from the parity information area to the RAM buffer to replace the current parity information associated with the second plurality of data stripes; and using The stored parity information and the first user data determine new parity information for the first plurality of data stripes.In another aspect, the present invention provides a method comprising: programming a first number of data stripes of a NAND memory device with first data; and converting a first number of parity data associated with the first number of data stripes. Loading parity information from a NAND memory of the NAND memory device into a random access memory (RAM) buffer of the NAND memory device; refreshing the first number of parity information using the first data; using a third program a second number of data strips of the NAND memory with two data; copy the first number of parity check information to the parity check information area of the NAND memory; and program the second number of data strips with the second number of data strips Loading the RAM buffer with an associated second number of parity information from the parity information area to replace the first number of parity information; and using the second data to refresh the The second number of parity information.In yet another aspect, the present invention provides a method comprising: programming user data into a plurality of data stripes across a plurality of pages of NAND memory of a NAND memory device of a mobile electronic device; using the memory device updating a random access memory (RAM) buffer with first parity information associated with a first plurality of data stripes of the plurality of data stripes; and swapping block locations in the NAND memory with the The first parity information associated with the first plurality of data stripes is exchanged between RAM buffers.In yet another aspect, the present invention provides a machine-readable medium including instructions that when executed by a machine cause the machine to perform operations including programming user data to a device across a mobile electronic device. 
A first plurality of data stripes of a plurality of pages of NAND memory of a NAND memory device; updating a first plurality of data stripes with the plurality of data stripes using a random access memory (RAM) buffer of the memory device associated first parity information; and exchanging the first parity information associated with the first plurality of data stripes between a swap block location of the NAND memory and the RAM buffer .Description of the drawingsIn the drawings, which are not necessarily to scale, similar reference symbols may describe similar components in the different figures. Similar component symbols with different letter suffixes identify different instances of similar components. The drawings illustrate the various embodiments discussed in this document generally by way of example and not by way of limitation.Figure 1 illustrates an example of an environment including a memory device.2-3 illustrate schematic diagrams of examples of 3D NAND architecture semiconductor memory arrays.Figure 4 illustrates an example block diagram of a memory module.Figure 5 generally illustrates an example data and parity information placement scheme in accordance with an example of the present subject matter.Figure 6A generally illustrates an example swap block.Figure 6B illustrates the logical placement of parity pages in volatile memory in a memory controller or other component of a NAND in accordance with some examples of the present invention.7 illustrates an example method 700 of using non-volatile swap blocks to help maintain parity information for multiple data stripes when the aggregated parity information for an open data stripe is sized to cache or RAM available on a memory device. flow chart.8 illustrates a block diagram of an example machine 800 upon which any one or more of the techniques (eg, methods) discussed herein may be performed.Detailed waysTechniques are disclosed in some examples that organize data written to a mobile device's memory device, such as a NAND memory device, in order to prevent certain types of failures. For example, mobile devices typically do not contain large amounts of RAM that can be dedicated to maintaining parity information when programming the memory device. Therefore, the present invention addresses the example technique of maintaining data stripe parity information by using the swap area of NAND memory in conjunction with a smaller RAM buffer. Such techniques balance mobile device performance with minimal RAM utilization to provide robust techniques to allow data recovery in the event an error occurs during data programming of a NAND memory device.Electronic devices such as mobile electronic devices (e.g., smartphones, tablet computers, etc.), electronic devices used in automotive applications (e.g., automotive sensors, control units, driver assistance systems, passenger safety or comfort systems, etc.) and Internet connections Household appliances or devices (eg, Internet of Things (IoT) devices, etc.) 
have different storage requirements depending especially on the type of electronic device, usage environment, performance expectations, etc.Electronic devices can be broken down into several major components: a processor (e.g., a central processing unit (CPU) or other main processor); a memory (e.g., one or more volatile or non-volatile random access memories (RAM) Memory devices such as dynamic RAM (DRAM), static RAM (SRAM), mobile or low power double data rate synchronous DRAM (DDR SDRAM), etc.); and storage devices such as non-volatile memory (NVM) devices such as Flash memory, read-only memory (ROM), SSD, MMC or other memory card structures or assemblies, etc.). In some examples, an electronic device may include a user interface (e.g., a display, a touch screen, a keyboard, one or more buttons, etc.), a graphics processing unit (GPU), power management circuitry, a baseband processor, or one or more transceivers circuit etc.FIG. 1 illustrates an example of an environment 100 including a host 105 and a memory device 110 configured to communicate via a communication interface 111 . Host device 105 or memory device 110 may be included in multiple products 150, such as Internet of Things (IoT) devices (eg, refrigerators or other home appliances, sensors, engines or actuators, mobile communications devices, automobiles, drones, etc. ) to support the processing, communication or control of product 150.Memory device 110 includes a memory controller 115 and a memory array 120, which includes, for example, several individual memory dies (eg, a three-dimensional (3D) NAND die stack). In 3D architecture semiconductor memory technology, vertical structures are stacked, thereby increasing the number of layers, physical pages, and thus increasing the density of memory devices (eg, storage devices). In an example, memory device 110 may be a discrete memory or storage device component of host device 105 . In other examples, memory device 110 may be part of an integrated circuit (eg, a system on a chip (SOC), etc.), stacked, or otherwise included within one or more other components of host device 105 .One or more communication interfaces 111 may be used to transfer data between the memory device 110 and one or more other components of the host device 105, such as a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, Universal Serial Bus (USB) interface, Universal Flash Storage (UFS) interface, eMMCTM interface, or one or more other connectors or interfaces. Host device 105 may include a host system, an electronic device, a processor, a memory card reader, or one or more other electronic devices external to memory device 110 . In some examples, host 105 may be a machine having some or all of the components discussed with reference to machine 800 of FIG. 8 .Memory controller 115 may receive instructions from host 105 and may communicate with the memory array, such as to transfer data to (eg, write or erase) one or more of the memory cells, planes, sub-blocks, blocks, or pages of the memory array. transmit data to or from the one or more. Memory controller 115 may, among other things, include circuitry or firmware that includes one or more components or integrated circuits. For example, memory controller 115 may include one or more memory control units, circuits, or components configured to control access across memory array 120 and provide a translation layer between host 105 and memory device 110 . 
Memory controller 115 may include one or more input/output (I/O) circuits, lines, or interfaces to transfer data to and from memory array 120 . Memory controller 115 may include memory manager 125 and array controller 135.Memory manager 125 may include, inter alia, circuitry or firmware, such as several components or integrated circuits associated with various memory management functions. For the purposes of this description, instance memory operations and management functions will be described in the context of NAND memory. Those skilled in the art will recognize that other forms of non-volatile memory may have similar memory operation or management functions. Such NAND management functions include wear leveling (eg, garbage collection or reclamation), error detection or correction, block retirement, or one or more other memory management functions. Memory manager 125 may parse or format host commands (e.g., commands received from the host) into device commands (e.g., commands associated with operation of a memory array, etc.) or generate commands for array controller 135 or a memory device. Device commands for one or more other components of 110 (e.g., to implement various memory management functions).Memory manager 125 may include a set of management tables 130 configured to maintain various information associated with one or more components of memory device 110 (e.g., with a memory array or one or more devices coupled to memory controller 115 Various information associated with each memory unit). For example, management table 130 may include block age, block erase count, error history, or one or more error counts (e.g., write operation errors) for one or more blocks of memory cells coupled to memory controller 115 count, read bit error count, read operation error count, erase error count, etc.). In some examples, a bit error may be referred to as an uncorrectable bit error if the number of detected errors for one or more of the error counts is above a threshold. Management table 130 may, among other things, maintain a count of correctable or uncorrectable bit errors.Array controller 135 may, among other things, be configured to control and write data to, read data from, or erase one or more memory cells of memory device 110 coupled to memory controller 115 The one or more memory cells are associated with a circuit or component for memory operation. Memory operations may be based, for example, on host commands received from host 105 or generated internally by memory manager 125 (eg, associated with wear leveling, error detection or correction, etc.).Array controller 135 may include an error correction code (ECC) component 140 , which may, among other things, include an ECC engine or be configured to detect or correct writing of data to one or more memory cells of memory device 110 coupled to memory controller 115 or other circuitry associated with errors reading data from the one or more memory cells. Memory controller 115 may be configured to actively detect and recover from error occurrences associated with various operations or storage of data (e.g., bit errors, operation errors, etc.) while maintaining control between host 105 and the memory device. The integrity of data transferred between 110 or maintaining the integrity of stored data (e.g., using redundant RAID storage devices, etc.), and failing memory resources (e.g., memory cells, memory arrays, etc.) can be removed (e.g., retired) , pages, blocks, etc.) 
to prevent future errors.In some examples, a memory array may include several NAND dies and one or more functions of the memory controller 115 of a particular NAND die may be implemented on an on-die controller on that particular die. Other organizations and renderings of control functionality may also be utilized, such as controllers for each die, plane, superblock, block, page, and the like.Memory array 120 may include a number of memory cells arranged in, for example, several devices, semiconductor dies, planes, sub-blocks, blocks, or pages. As an example, a 48GB TLC NAND memory device may contain 18,592 data bytes per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device. As another example, a 32GB MLC memory device (which stores two data bits per cell (i.e., 4 programmable states)) may contain 18,592 data bytes per page (16,384+2208 bytes), 1024 pages per block , 548 blocks per plane and 4 planes per device, but uses half the write time and twice the program/erase (P/E) cycles of a corresponding TLC memory device. Other examples may include other numbers or arrangements. In some examples, a memory device, or a portion thereof, can selectively operate in SLC mode or in a desired MLC mode (eg, TLC, QLC, etc.).In operation, data is typically written to or read from NAND memory device 110 in pages and erased in blocks. However, one or more memory operations (eg, read, write, erase, etc.) may be performed on larger or smaller groups of memory cells as desired. The data transfer size of the NAND memory device 110 is typically referred to as a page, while the data transfer size of the host is typically referred to as a sector.Although a page of data may contain several bytes of user data (e.g., a data payload containing several sectors of data) and its corresponding metadata, the size of the page typically refers only to the number of bytes used to store the user data. number. As an example, a page of data having a page size of 4KB may contain 4KB of user data (e.g., assuming a sector size of 512B, 8 sectors) as well as data corresponding to the user data (e.g., integrity data (e.g., error detection or correction) The number of bytes (eg, 32B, 54B, 224B, etc.) of metadata for code data), address data (eg, logical address data, etc.), or other metadata associated with user data.Different types of memory cells or memory arrays 120 may provide different page sizes, or may require different amounts of metadata associated therewith. For example, different memory device types may have different bit error rates, which may result in different amounts of metadata necessary to ensure the integrity of the page's data (e.g., a memory device with a higher bit error rate may require less than a memory device with a lower bit error rate of a memory device (more error correction code data bytes). As an example, a multi-level cell (MLC) NAND flash device may have a higher bit error rate than a corresponding single-level cell (SLC) NAND flash device. Thus, MLC devices may require more metadata bytes for error data than corresponding SLC devices.In some examples, memory array 120 may include non-volatile memory for storing user data. The non-volatile memory may include a swap block 160 for handling parity information associated with user data when the non-volatile memory is programmed. 
Parity data may be associated with the memory controller's smaller volatile buffers or other components of the NAND memory device to help recover user data lost or corrupted during programming of the non-volatile memory.2 illustrates an example schematic diagram of a 3D NAND architecture semiconductor memory array 200. The 3D NAND architecture semiconductor memory array 200 includes several memory cell strings (eg, first to third A0 memory strings 205A0 to 207A0, first to third An memory strings). 205An to 207An, first to third B0 memory strings 205B0 to 207B0, first to third Bn memory strings 205Bn to 207Bn, etc.), which are organized into blocks (e.g., block A 201A, block B 201B, etc.) and subblocks ( For example, sub-block A0201A0, sub-block An201An, sub-block B0201B0, sub-block Bn201Bn, etc.). Memory array 200 represents one portion of a larger number of similar structures that may commonly be found in blocks, devices, or other units of memory devices.Each memory cell string includes a source line (SRC) 235 or a source side select gate (SGS) stacked in the Z direction (source to drain) (eg, first to third A0SGS 231A0 to 233A0, The first to third AnSGS 231An to 233An, the first to third B0SGS 231B0 to 233B0, the first to third BnSGS 231Bn to 233Bn, etc.) and the drain side select gate (SGD) (for example, the first to third A0SGD 226A0 to 228A0, first to third AnSGD 226An to 228An, first to third B0SGD 226B0 to 228B0, first to third BnSGD 226Bn to 228Bn, etc.) charge storage transistors (eg, floating gate transistors, charge trapping structure, etc.). Each memory cell string in the 3D memory array may be arranged as data lines (eg, bit lines (BL) BL0 to BL2 220 to 222) in the X direction and as physical pages in the Y direction.Within a physical page, each level represents a row of memory cells, and each string of memory cells represents a column. A subblock may contain one or more physical pages. A block may contain several sub-blocks (or physical pages) (eg, 128, 256, 384, etc.). Although illustrated herein as having two blocks, each block having two sub-blocks, each sub-block having a single physical page, each physical page having three strings of memory cells and each string having 8 layers of memory cells, in other examples , memory array 200 may include more or fewer blocks, sub-blocks, physical pages, strings of memory cells, memory cells, or layers. For example, each memory cell string may include more or fewer layers (eg, 16, 32, 64, 128, etc.) as desired, as well as one or more additional layers of semiconductor material above or below the charge storage transistors (eg, Select gates, data lines, etc.). As an example, a 48GB TLC NAND memory device may contain 18,592 data bytes per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device.Each memory cell in memory array 200 includes a memory cell coupled to (e.g., electrically or otherwise operatively connected to) an access line (e.g., word lines (WL) WL00 through WL70 210A through 217A, WL01 through WL71 210B through 217B, etc.) Control gates (CG) that the access lines commonly couple as necessary across a particular layer or a portion of a layer. Specific layers in a 3D memory array, and therefore specific memory cells in a string, can be accessed or controlled using corresponding access lines. Groups of select gates can be accessed using various select lines. 
For example, the first to third A0SGD 226A0 to 228A0 can be accessed using the A0SGD line SGDA0225A0, the first to third AnSGD 226An to 228An can be accessed using the AnSGD line SGDAn225An, and the first to third B0SGD 226B0 to 228B0 can be accessed using the B0SGD Line SGDB0 225B0 is accessed, and the first to third BnSGD 226Bn to 228Bn are accessible using the BnSGD line SGDBn 225Bn. The first to third A0SGS 231A0 to 233A0 and the first to third AnSGS 231An to 233An can be accessed using the gate selection line SGS0230A, and the first to third B0SGS 231B0 to 233B0 and the first to third BnSGS 231Bn to 233Bn can be accessed Gate select line SGS1230B access.In an example, memory array 200 may include several layers of semiconductor material (eg, polysilicon, etc.) configured to couple a control gate (CG) of each memory cell or a select gate (or CG or CG) of a corresponding layer of the array. part of the selection gate). Specific strings of memory cells in the array may be accessed, selected, or controlled using combinations of bit lines (BLs) and select gates, and the like, and specific memory cells at one or more levels in a specific string may be accessed, selected, or controlled using one or more memory cells. Access, select, or control lines (e.g., word lines).3 illustrates an example schematic diagram of a portion of a NAND architecture semiconductor memory array 300, which includes arrays arranged in strings (eg, first through third strings 305 through 307) and layers (eg, illustrated as respective word lines ( A plurality of memory cells 302 and sense amplifiers in a two-dimensional array of WL) WL0 to WL7 310 to 317, drain side select gate (SGD) line 325, source side select gate (SGS) line 330, etc.) or Device 360. For example, memory array 300 may illustrate an example schematic diagram of a portion of a physical page of memory cells of a 3D NAND architecture semiconductor memory device, such as illustrated in FIG. 2 .Each memory cell string is coupled to a source line (SRC) using a corresponding source side select gate (SGS) (eg, first through third SGS 331 through 333), and using a corresponding drain side select gate (SGD) (eg, first through third SGDs 326 through 328) are coupled to corresponding data lines (eg, first through third bit lines (BL) BL0 through BL2 320 through 322). Although the example of FIG. 3 is illustrated as having eight layers (eg, using word lines (WL) WL0 through WL7 310 through 317) and three data lines (BL0 through BL2 326 through 328), other examples may include layers with Strings of memory cells with more or fewer layers or data lines.In a NAND architecture semiconductor memory array, such as example memory array 300, the state of a selected memory cell 302 may be accessed by sensing changes in current or voltage associated with a particular data line containing the selected memory cell. Memory array 300 may be accessed using one or more drivers (eg, via control circuitry, one or more processors, digital logic, etc.). In an example, one or more drivers may access specific potentials by driving specific potentials to one or more data lines (eg, bit lines BL0 through BL2), depending on the type of operation desired to be performed on a specific memory cell or group of memory cells. A line (eg, word lines WL0 through WL7) or a select gate activates the particular memory cell or group of memory cells.To program or write data to a memory cell, a programming voltage (Vpgm) (e.g., one or more programming pulses, etc.) 
may be applied to a selected word line (e.g., WL4), and thus, to a selected word line coupled to A control gate of each memory cell of the word line (eg, first through third control gates (CG) 341 through 343 of the memory cell coupled to WL4). Programming pulses may start, for example, at or near 15V, and in some examples, may increase in magnitude during each application of a programming pulse. When a programming voltage is applied to a selected word line, a potential, such as a ground potential (eg, Vss), may be applied to a data line (eg, a bit line) and the substrate of the memory cell that is the target of programming (and, therefore, to channel between source and drain), causing charge transfer from the channel (e.g., direct injection or Fowler-Nordheim (FN) tunneling, etc.) to the floating gate of the target memory cell .In contrast, a pass voltage (Vpass) may be applied to one or more word lines with memory cells that are not targeted for programming, or a suppress voltage (eg, Vcc) may be applied to data with memory cells that are not targeted for programming. Lines (eg, bit lines), for example, to inhibit charge transfer from the channel to the floating gates of such non-target memory cells. The pass voltage may vary depending on, for example, the proximity of the applied pass voltage to the word line being targeted for programming. The suppression voltage may include a supply voltage (Vcc) relative to ground potential (eg, Vss), such as a voltage from an external source or supplier (eg, battery, AC to DC converter, etc.).As an example, if a programming voltage (eg, 15V or more) is applied to a particular wordline, such as WL4, then a pass voltage of 10V may be applied to one or more other wordlines, such as WL3, WL5, etc., to inhibit non-target memory Programming of cells or retention of values stored on such memory cells that are not targeted for programming. As the distance between the applied programming voltage and the non-target memory cell increases, the pass voltage required to inhibit programming of the non-target memory cell may decrease. For example, in the case where a programming voltage of 15V is applied to WL4, a pass voltage of 10V can be applied to WL3 and WL5, a pass voltage of 8V can be applied to WL2 and WL6, a pass voltage of 7V can be applied to WL1 and WL7, etc. In other examples, the pass voltage or number of word lines, etc. may be higher or lower or more or less.Sense amplifier 360 coupled to one or more of the data lines (eg, first, second, or third bit lines (BL0 to BL2) 320 - 322 ) may detect the voltage or current by sensing the voltage or current on the particular data line. The state of each memory cell in the corresponding data line.Between application of one or more programming pulses (eg, Vpgm), a verify operation may be performed to determine whether the selected memory cell has reached its expected programmed state. If a selected memory cell has reached its expected programmed state, further programming may be inhibited. If the selected memory cell has not yet reached its expected programmed state, additional programming pulses may be applied. 
If the selected memory cell has not reached its expected programmed state after a certain number of programming pulses (eg, a maximum number), the selected memory cell or the string, block, or page associated with the selected memory cell may Marked as defective.To erase a memory cell or a group of memory cells (eg, erasure is typically performed in blocks or sub-blocks), an erase voltage (Vers) (eg, typically Vpgm) may be applied to the liner of the memory cell that is the target of erasure. bottom (and thus applied to the channel between source and drain) (e.g., using one or more bit lines, select gates, etc.) while the word line of the target memory cell is maintained at a potential, such as ground potential ( For example, Vss), causing charge transfer (eg, direct injection or Fowler-Nordheim (FN) tunneling, etc.) from the floating gate of the target memory cell to the channel.4 illustrates an example block diagram of a memory device 400 that includes a memory array 402 having a plurality of memory cells 404 and one or more circuits that provide communication with the memory array 402 or perform one or more memory operations on the memory array 402 or components. Memory device 400 may include a row decoder 412, a column decoder 414, a sense amplifier 420, a page buffer 422, a selector 424, input/output (I/O) circuitry 426, and a memory control unit 430.Memory cells 404 of memory array 402 may be arranged in blocks (eg, first block 402A and second block 402B). Each block can contain sub-blocks. For example, the first block 402A may include a first sub-block 402A0 and a second sub-block 402An, and the second block 402B may include a first sub-block 402B0 and a second sub-block 402Bn. Each sub-block may contain several physical pages, and each page may contain several memory cells 404 . Although illustrated herein as having two blocks, each block having two sub-blocks and each sub-block having a number of memory cells 404, in other examples, the memory array 402 may include more or fewer blocks, sub-blocks, memory cells wait. In other examples, memory cells 404 may be arranged in several rows, columns, pages, sub-blocks, blocks, etc., using, for example, access lines 406, first data lines 410, or one or more select gates, sources, etc. Line access.Memory control unit 430 may respond to one or more signals or instructions received on control line 432, including, for example, one or more clock signals or control signals indicating desired operations (e.g., write, read, erase, etc.) ) or address signals (A0 through AX) received on one or more address lines 416 control the memory operations of memory device 400 . One or more devices external to memory device 400 may control the value of the control signal on control line 432 or the address signal on address line 416 . Examples of devices external to memory device 400 may include, but are not limited to, a host, a memory controller, a processor, or one or more circuits or components not illustrated in FIG. 4 .Memory device 400 may use access line 406 and first data line 410 to transfer (eg, write or erase) data to or from one or more of memory cells 404 (eg, read). fetch) data. 
Row decoder 412 and column decoder 414 may receive and decode address signals (A0 through AX) from address lines 416 , may determine which of memory cells 404 are to be accessed, and may provide signals into access lines 406 One or more of the plurality of word lines (eg, one or more of the plurality of word lines (WL0 to WLm)) or the first data line 410 (eg, one or more of the plurality of bit lines (BL0 to BLn)), e.g. Described above.Memory device 400 may include sensing circuitry, such as sense amplifier 420 , configured to use first data line 410 to determine a data value (eg, read) on memory cell 404 or to determine data to be written to memory cell 404 value. For example, in a selected string of memory cells 404, one or more of the sense amplifiers 420 may read the selected string in response to a read current flowing in the memory array 402 through the selected string to the data line 410. logic level in memory cell 404.One or more devices external to memory device 400 may communicate with memory device 400 using I/O lines (DQ0 to DQN) 408, address lines 416 (A0 to AX), or control lines 432. Input/output (I/O) circuitry 426 may use I/O lines 408 to transfer data values into or out of memory device 400 , such as into or out of page buffer 422 or memory, based on control lines 432 and address lines 416 , for example. Array 402. Page buffer 422 may store data received from one or more devices external to memory device 400 before the data is programmed into the relevant portion of memory array 402 , or may store data read from memory array 402 before the data is transferred to One or more devices external to memory device 400 previously stored the data.Column decoder 414 may receive address signals (A0 through AX) and decode the address signals into one or more column select signals (CSEL1 through CSELn). A selector 424 (eg, a select circuit) may receive column select signals (CSEL1 through CSELn) and select data in page buffer 422 representing data values to be read from or programmed into memory cell 404 . Selected data may be transferred between page buffer 422 and I/O circuitry 426 using second data line 418 .Memory control unit 430 may receive positive and negative supply signals, such as supply voltage (Vcc) 434 and negative supply (Vss) 436 (e.g., ground potential). In some examples, memory control unit 430 may include a regulator 428 that internally provides a positive or negative supply signal.As used herein, a page line is a logical construct that identifies a group of pages that includes pages at the same location in each plane of a group of planes. So, for example, the first page in planes 0 through 3 is identified by page line 0. A page consists of memory cells belonging to the same word line (WL). A block is a group of pages - that is, all NAND strings that share the same group of word lines (a NAND string is a group of NAND cells connected in series). In some NAND configurations, a block is the smallest erasable unit. A page is the smallest addressable unit used for reading and writing. A plane is a group of physical blocks on a single NAND die that is configured to operate such that the physical blocks from each of the multiple planes can be erased in parallel (i.e., can be erased substantially simultaneously during a given time interval or erase physical blocks on top of each other), but only a single physical block in any individual plane may be erased at any one time. 
Multiple planes can exist per NAND die.ECC and other technologies have significantly improved the reliability of NAND devices. However, there are certain situations where additional protection against data loss is desirable. The data stripe may include user data and parity data (eg, a combination of user data and parity data). The parity data of the data stripe may include error protection data, which may be used to protect user data stored in the memory from defects and/or errors that may occur during operation of the memory. For example, parity information for a data stripe may insulate user data stored in a memory from defects and/or errors that may occur during operation of the memory, and may therefore provide protection against memory failures. Defects and/or errors that parity information can prevent include electrical shorts that can occur between different components of a memory and/or shorts that can occur at the interface between a memory group and the corresponding driver associated therewith.In some examples, techniques are disclosed that allow the storage and manipulation of parity information in a very limited amount of random access memory (RAM), as may be found, for example, in mobile devices. In some examples, parity information may be calculated and stored until programming is complete. In some instances, due to a lack of RAM, parity information is exchanged between the memory device's RAM and the memory device's non-volatile memory until programming is complete. This parity information can be used to recover from corruption of a portion of the data stripe, as discussed above.Figure 5 generally illustrates an example data and parity information placement scheme within a data block of a memory according to an example of the present subject matter. The placement scheme shows a non-volatile memory device 500 that provides 12 pages per word line (WL) and 4 planes per logical unit (LUN). The example is based on having 3 WL separations between data in a common data stripe. The WL separation criteria and 12 pages per WL layout state that data is stored on at least 36 data strips. Assuming at least 128 pages are available for each data stripe, (127 pages for user data, and 1 page for parity of the stripe), if 2 LUNs are used, then 16 different WLs can be assigned to each data strip. In some examples, parity information may be stored within location 550 of the data block once programming of all data stripes is turned off.If out-of-sequence programming of pages is not allowed, then data stripe 1 (D1) cannot be closed until the 540th page associated with WL46 is programmed. Therefore, during programming, the data parity information for each of the 36 data stripes may be stored and updated in a memory block separate from the data block. When programming data blocks, conventional solutions use RAM space to store and update data stripe parity. In some examples, such as for mobile electronic devices, only a limited number of pages of RAM space are allocated for parity information manipulation during programming, and may be unchallenged to maintain all parity information during programming.In certain examples of the present subject matter, a dedicated swap block of non-volatile memory may be used as a parity information area or placeholder until the data stripe may be closed. Swap blocks allocate separate blocks of memory separate from data blocks.Figure 6A generally illustrates an example exchange block 660. 
6B illustrates the logical temporal placement of parity pages of a volatile memory or RAM buffer 651 of a NAND memory controller or other component in accordance with some examples of the present invention. If the user data pages within a data block of memory are not allowed to be programmed out of sequence and the data placement scheme does not allow the user data pages to be stored in immediately contiguous pages, then the data stripe parity information may be stored in the RAM buffer 651 Repeat the exchange with exchange block 660.In some instances, whenever the memory controller writes data to a new page belonging to an open data stripe, at least the parity information for that data stripe may be read from the parity swap block 660 into the RAM buffer. 651 and new parity information may be determined or calculated. In some instances, a logical function (eg, XOR) can be used to generate new parity information for the open data stripe. If new data is written to a second data stripe that is not associated with the parity information in RAM buffer 651, then the parity information in RAM buffer 651 may be copied to parity swap block 660 with at least The parity information associated with the second data stripe may be read from the parity swap block 660 into the RAM buffer 651, and so on, until all data stripes are closed. In some examples, as the data stripe is closed, the parity information in the parity exchange block 660 associated with the closed data stripe may be discarded. In some instances, swap block information is discarded when all data stripes are closed.The example data placement in the non-volatile memory of the memory device illustrated in Figure 5 is only one example of data placement, and it is understood that other data placement schemes are possible without departing from the scope of the present subject matter. In some examples, non-volatile memory device 500 may receive the first data item from the host device. This first data item can be divided into several parts. For the purposes of this description, an example will be used in which the received data item is divided into 127 parts. It should be understood that a received data item may be divided into a smaller number of parts or a larger number of parts without departing from the scope of the present subject matter. The first portion of the first data item, which may also be referred to as the first portion of the first data stripe (D11), may be programmed at a first location within the first data stripe (D1) in the NAND, and the second portion ( D12) is programmed in the second position, the third section (D13) is programmed in the second position, and so on. Each portion of user data is annotated as a DNM, where N is a positive integer value identifying a data item or data stripe within the NAND memory, and M is a positive integer value identifying a data item or data stripe portion. The location of each data stripe portion may be selected based on a specific data placement scheme.The example placement scheme of Figure 5 places the first data item or portion of the data stripe (D1) and portions of the other data items such that each sequential portion of the first stripe is stored on a different plane with respect to each other but can in memory cells on the same WL. If a portion of the first data stripe is on WLi, where i is an integer value, no word line within the WL separation of WLi is associated with any other portion of the first data stripe. 
Regarding the example of Figure 5, portions of the first data stripe (D1) are on WL4. No other part of the first data stripe (D1) is located on the two WLs (WL2, WL3, WL5, WL6) immediately adjacent to WL4. Therefore, the data placement scheme has 3 WL separation, which may be helpful for some data loss scenarios (such as WL short circuit). In some examples, non-volatile memory device 500 may include parity locations 550 for storing parity information for each data stripe once programming is complete. In this example, parity locations 550 may occupy 36 pages corresponding to 36 data stripes allocated to hold user data. Depending on the number of portions of each data stripe, WL separation, and configuration of the non-volatile memory system, the number of data stripes may be larger or smaller, and swap block 660 may use more or fewer pages to The non-volatile memory system stores data stripe parity information when programmed.Parity pages can be calculated from the data stripe portion of each data stripe. For example, for a data stripe with only four parts, the parity page may be the XOR of the data in the first part, the second part, the third part, and the fourth part. for example:where is the XOR operator.Parity information may be calculated and may be temporarily stored in volatile memory (eg, RAM) and then periodically swapped to non-volatile storage in a separate NAND memory block, parity swap block 660 , shown in Figure 6A. In some examples, the swap block 660 location may be an SLC, MLC, TLC, or QLC block. In some instances, SLC blocks can provide faster read and write operations and can also be beneficial because the endurance capabilities of SLC blocks can be significantly higher than other forms of blocks (eg, MLC, TLC, QLC) . In some instances, swap block 660 does not result in garbage collection because the parity information in swap block 660 is invalid once the data stripe is closed.Figure 6B illustrates the logical placement of parity pages of volatile memory in a memory controller or other component of a NAND in accordance with some examples of the present invention. The parity page shown in Figure 5 is the parity page calculated for the data stripe (DN) in Figure 5 once all data stripes are closed. As stripes of data are programmed into the NAND in Figure 5, parity information can be calculated and stored in volatile memory (eg, random access memory (RAM) 651). At the first time T0, parts of data strips D1 to D8 can be written to WL1 of LUN1 and planes 0 to 3. At the same time, parity information for these data strips: 602 to 620 may be calculated and stored in a volatile storage device (eg, RAM buffer 661), as shown in Figure 6B.At time T1, portions of data stripes D9 through D16 may be written to WL1 and the corresponding parity (P9 through P16) may be calculated in RAM 651, as shown in Figure 6B. In some examples, parity information 602-620 may be overwritten with parity information 626-644. In some examples, parity information 602-620 may be written to NAND swap block 660 before it is overwritten, such as to a reliable SLC block. Similarly, at time T2, parity information for data stripes D17 through D25 may be calculated and exchanged with the existing value in parity exchange block 660. 
Parity information stored in the memory device's RAM 661 or swap block 660 may be used to overwrite user data that is lost or corrupted during programming operations of the memory device's non-volatile memory.7 illustrates an example method 700 of using non-volatile swap blocks to help maintain parity information for multiple data stripes when the aggregate parity information for an open data stripe is larger than the cache or RAM available on the memory device. flow chart. At 702, a memory controller of the non-volatile memory device may store or program user data into NAND memory of the non-volatile memory device. At 704, as the user data is stored in the data stripe within the NAND memory, a cache or RAM memory of the memory device or a memory controller of the memory device may be used to update or calculate first parity information for the user data. When new user data is ready to be stored or programmed into a different data stripe, the first parity information in the cache of the memory device may be exchanged with the second parity information in the swap block of the NAND memory. As new user data is programmed into a different data stripe than the first user data, exchanging parity information allows the first parity information to be saved in the NAND memory and the second parity information to be available for updates. Using a parity swap block may allow a memory device with limited RAM to provide parity information for multiple data stripes independent of word line separation of the memory device's data placement scheme and independent of the number of open data stripes.8 illustrates a block diagram of an example machine 800 upon which any one or more of the techniques (eg, methods) discussed herein may be performed. In alternative embodiments, machine 800 may operate as a stand-alone device or may be connected (eg, networked) to other machines. In a networked deployment, machine 800 may operate as a server machine, a client machine, or both in a server-client network environment. In an example, machine 800 may function as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. Machine 800 may be a personal computer (PC), tablet PC, set-top box (STB), personal digital assistant (PDA), mobile phone, network equipment, IoT device, automotive system, or capable of executing instructions that specify actions to be taken by that machine ( any machine (sequential or otherwise). Furthermore, while merely describing a single machine, the term "machine" shall also be taken to include executing a set (or sets) of instructions, individually or collectively, to perform any one or more of the methodologies discussed herein (e.g., cloud computing). , Software as a Service (SaaS), other computer cluster configurations) any collection of machines.As described herein, examples may include or be operable by logic, components, devices, packages, or mechanisms. A circuit is a collection of circuits (eg, a set of circuits) implemented in a tangible entity that includes hardware (eg, simple circuits, gates, logic, etc.). Circuit membership can be flexible over time and potential hardware variability. A circuit contains members that, when operated, individually or in combination, perform specific tasks. In examples, the hardware of a circuit may be invariably designed to perform a specific operation (eg, hardwired). In examples, the hardware of a circuit may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) 
that include physically modified (e.g., magnetically, electrically, invariant mass particles) A computer-readable medium that encodes instructions for a specific operation. When connecting physical components, the basic electrical properties of a hardware component change from an insulator to a conductor, or vice versa, for example. Instructions enable participating hardware (eg, execution units or load mechanisms) to create members of a circuit in the hardware via variable connections to perform portions of a specific task while in operation. Thus, the computer-readable medium is communicatively coupled to other components of the circuitry during operation of the device. In an example, any of the physical components may be used in more than one member of more than one circuit. For example, in operation, the execution unit may be used in a first circuit of the first circuitry at one point in time and reused at a different time by a second circuit in the first circuitry or by a third circuit in the second circuitry. Three circuits are used again.Machine (e.g., computer system) 800 (e.g., host device 105, memory device 110, etc.) may include a hardware processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processing core, or any combination thereof , such as memory controller 115, etc.), main memory 804, and static memory 806, some or all of which may communicate with each other via an interlink (eg, bus) 808. The machine 800 may further include a display unit 810, an alphanumeric input device 812 (eg, a keyboard), and a user interface (UI) navigation device 814 (eg, a mouse). In an example, display unit 810, input device 812, and UI navigation device 814 may be touch screen displays. Machine 800 may additionally include a storage device (e.g., a drive unit), a signal generation device 818 (e.g., a speaker), a network interface device 820, and one or more sensors 816, such as a global positioning system (GPS) sensor, a compass, an accelerometer, or Other sensors. Machine 800 may include an output controller 828, such as a serial (e.g., Universal Serial Bus (USB)) device that communicates with or controls one or more peripheral devices (e.g., printer, card reader, etc.). )), parallel connection, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection.Storage devices may include machine-readable media 822 having stored thereon one or more sets of data structures or instructions 824 that embody or are utilized by any one or more of the techniques or functions described herein (e.g., software). Instructions 824 may also reside entirely, or at least partially, within main memory 804 , within static memory 806 , or within hardware processor 802 during their execution by machine 800 . In an example, one or any combination of hardware processor 802, main memory 804, static memory 806, or storage may constitute machine-readable medium 822.Although machine-readable medium 822 is illustrated as a single medium, the term "machine-readable medium" may include a single medium or multiple media configured to store one or more instructions 824 (e.g., a centralized or distributed database or related connected cache and server).The term "machine-readable medium" may include or capable of storing, encoding, or carrying instructions for execution by machine 800 and causing machine 800 to perform any one or more of the techniques of this disclosure. Any media that uses or has data structures associated with such instructions. 
Non-limiting examples of machine-readable media may include solid-state memory and optical and magnetic media. In examples, large-scale machine-readable media includes machine-readable media having multiple particles with constant (eg, rest) mass. Therefore, large-scale machine-readable media do not propagate signals temporarily. Specific examples of large-scale machine-readable media may include: non-volatile memory, such as semiconductor memory devices (eg, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices. Flash memory devices; magnetic disks, such as internal hard disks and removable hard disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.Instructions 824 (eg, software, programs, operating system (OS), etc.) or other data are stored on storage device 821 and may be accessed by memory 804 for use by processor 802 . Memory 804 (eg, DRAM) is typically faster, but volatile, and is therefore a different type of storage device than storage device 821 (eg, SDD) that is suitable for long-term storage (included in the "off" condition). Instructions 824 or data used by a user or machine 800 are typically loaded into memory 804 for use by processor 802 . When memory 804 is full, virtual space from storage 821 can be allocated to supplement memory 804; however, because storage 821 is typically slower than memory 804, and write speeds are typically at least two times slower than read speeds, virtual memory The use of can greatly degrade the user experience due to storage latency (compared to memory 804 (eg, DRAM)). Additionally, using storage device 821 for virtual memory can significantly reduce the useful life of storage device 821.In contrast to virtual memory, virtual memory compression (eg, kernel feature "ZRAM") uses portions of memory as compressed block storage to avoid paging to storage 821. Paging occurs in compressed blocks until necessary to write such data to storage 821. Virtual memory compression increases the available size of memory 804 while reducing wear on storage 821 .Storage devices optimized for use in mobile electronic devices, or removable storage devices, have traditionally included MMC solid-state storage devices (eg, Micro Secure Digital (microSDTM) cards, etc.). An MMC device includes several parallel interfaces (eg, an 8-bit parallel interface) with a host device, and generally components that are removable from and separate from the host device. In contrast, eMMC™ devices, which attach to the circuit board and are considered components of the host device, have read speeds that rival Serial ATA™ (Serial AT (Advanced Technology) Attachment, or SATA)-based SSD devices. However, demands on mobile device performance continue to increase, such as to fully implement virtual or augmented reality devices, take advantage of increased network speeds, and so on. In response to this demand, storage devices have moved from parallel communication interfaces to serial communication interfaces. 
Universal Flash Storage (UFS) devices (including controller and firmware) use a low-voltage differential signaling (LVDS) serial interface to communicate with the host device using a dedicated read/write path, further improving read/write speeds .Instructions 824 may further utilize a transmission medium through the communication network 826 via the network interface device 820 utilizing several transport protocols (e.g., Frame Relay, Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), HyperX Transmit or receive any of the text transfer protocols (HTTP, etc.). Example communications networks may include local area networks (LANs), wide area networks (WANs), packet data networks (e.g., the Internet), mobile phone networks (e.g., cellular networks), plain old telephone (POTS) networks, and wireless data networks (e.g., known as For the Institute of Electrical and Electronics Engineers (IEEE) 802.11 series of standards, known as the IEEE 802.16 series of standards), IEEE 802.15.4 series of standards, peer-to-peer (P2P) networks, and other networks. In an example, network interface device 820 may include one or more physical sockets (eg, Ethernet, coaxial, or telephone sockets) or one or more antennas connected to communication network 826 . In examples, network interface device 820 may include multiple antennas that communicate wirelessly using single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) technologies. The term "communication media" shall be considered to include any intangible medium capable of storing, encoding, or carrying instructions for execution by machine 800, and including digital or analog communication signals or other intangible media that facilitate communication of such software.The foregoing detailed description contains references to the accompanying drawings, which form a part hereof. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as "examples." Such examples may contain elements in addition to those shown or described. However, the inventors also intend that examples of only those elements shown or described be provided therein. Furthermore, the inventors also contemplate the use of any combination or arrangement of those elements shown or described with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein. Example (or one or more aspects thereof).In this filing, the term "a/an" is used, as commonly found in patent filings, to include one or more independently of any other examples or uses of "at least one" or "one or more." In this document, the term "or" is used to refer to non-exclusive or such that "A or B" may include "A but not B", "B but not A" and "A and B" unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain English equivalents of the corresponding terms "including" and "wherein." Furthermore, in the appended claims, the terms "comprises" and "comprises" are used in an open-ended manner, that is, a system, device, article or process containing elements in addition to the elements listed after the term in the claim are still considered to fall within the scope of that claim. Furthermore, in the appended claims, the terms "first", "second", "third", etc. 
are used merely as labels and are not intended to impose numerical requirements on their subject matter.In various examples, the components, controllers, processors, units, engines, or tables described herein may comprise, inter alia, physical circuitry or firmware stored on a physical device. As used herein, "processor" means any type of computing circuit, such as (but not limited to) a microprocessor, microcontroller, graphics processor, digital signal processor (DSP), or any other type of processor or process Circuits, including groups of processors or multi-core devices.The terms "wafer" and "substrate" are used herein to generally refer to any structure on which an integrated circuit is formed, and also to refer to such structures during the various stages of integrated circuit fabrication. The following detailed description is therefore not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.Various embodiments in accordance with the present invention and described herein include memories that utilize vertically structured memory cells (eg, strings of NAND memory cells). As used herein, directional adjectives will be employed relative to the surface of the substrate on which the memory cells are formed (i.e., vertical structures will be considered to extend away from the substrate surface, and the bottom end of the vertical structure will be considered to be the end closest to the substrate surface. , and the top end of the vertical structure will be considered the end farthest from the substrate surface).As used herein, operating a memory unit includes reading from, writing to, or erasing a memory unit. The operation of placing a memory cell in a desired state is referred to herein as "programming" and may include both writing to the memory cell or erasing from the memory cell (eg, the memory cell may be programmed to an erased state).In accordance with one or more embodiments of the invention, a memory controller (e.g., processor, controller, firmware, etc.) located within or external to a memory device is capable of determining (e.g., selecting, setting, adjusting, calculating, changing, clearing , transfer, adapt, derive, define, utilize, modify, apply, etc.) a number of wear cycles or wear states (e.g., record wear cycles, count operations of a memory device as they occur, track their origin operation of the memory device, evaluation of memory device characteristics corresponding to wear states, etc.).According to one or more embodiments of the invention, a memory access device may be configured to provide wear cycle information to the memory device with each memory operation. Memory device control circuitry (eg, control logic) may be programmed to compensate for changes in memory device performance corresponding to wear-out cycle information. The memory device can receive wear cycle information and determine one or more operating parameters (eg, values, characteristics) responsive to the wear cycle information.Examples of methods described herein may be at least partially machine or computer implemented. Some examples may include computer-readable media or machine-readable media encoded with instructions operable to configure an electronic device to perform the methods described in the examples above. Implementations of such methods may include code, such as microcode, assembly language code, high-level language code, or the like. 
This code may include computer-readable instructions for performing various methods. The code may form part of a computer program product. Additionally, the code may be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, such as during execution or other times. Examples of these tangible computer-readable media may include, but are not limited to, hard drives, removable disks, removable optical disks (e.g., optical disks and digital video disks), magnetic tape, memory cards or sticks, random access memory (RAM) , read-only memory (ROM), solid-state drive (SSD), universal flash storage (UFS) devices, embedded MMC (eMMC) devices, and the like.The above description is intended to be illustrative rather than restrictive. For example, the above examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, for example, by one of ordinary skill in the art upon review of the above description. The description is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Furthermore, in the above detailed description, various features may be grouped together to simplify the invention. This should not be construed as an expectation that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in all features of less than the specific embodiments disclosed. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and with the intent that such embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.ExampleExample 1 is a NAND memory device including: a random access memory (RAM) buffer; an array of NAND memory cells organized into pages, data stripes of user data, and parity information areas, wherein the parity a parity information area including parity information associated with the data stripe of user information; and a controller configured to: program first user data to a first plurality of data of the NAND memory cell array a first portion of a stripe; copying current parity information associated with a second plurality of data stripes from the RAM buffer to the parity information area; copying the first portion of the first plurality of data stripes copying stored parity information from the parity information area to the RAM buffer to replace the current parity information associated with the second plurality of data stripes; and using the stored parity information The parity information and the first user data determine new parity information for the first plurality of data stripes.In Example 2, the subject matter of Example 1, wherein each data stripe of the plurality of first data stripes spans a plurality of pages of the NAND memory cell array.In Example 3, the subject matter of Example 2, wherein each of the plurality of pages of the NAND memory for each data stripe is associated with a word line of the NAND memory device; and wherein Within the plane of the NAND memory array, pages of a first data stripe are separated by at least a plurality of word lines from each other page in the first data stripe.In Example 4, the subject matter of any one of Examples 1-3, wherein the parity information area is larger in size than the RAM 
buffer.In Example 5, the subject matter of any one of Examples 1-4, wherein the RAM buffer includes static RAM (SRAM).Example 6 is a method comprising: programming a first number of data stripes of a NAND memory device with first data; converting a first number of parity information associated with the first number of data stripes from Loading NAND memory of the NAND memory device into a random access memory (RAM) buffer of the NAND memory device; refreshing the first number of parity information using the first data; programming the parity information with the second data a second number of data stripes of the NAND memory; copying the first number of parity information to a parity information area of the NAND memory; copying the parity information associated with the second number of data strips Loading a second number of parity information from the parity information area into the RAM buffer to replace the first number of parity information; and refreshing the second number of parity information using the second data Parity information.In Example 7, in accordance with the subject matter of Example 6, each of the first number of data stripes spans a plurality of pages of an array of NAND memory cells of the NAND memory device.In Example 8, the subject matter of Example 7, wherein each of the plurality of pages of each data stripe is associated with a word line of the NAND memory device; and wherein in the NAND memory Within the plane of the array, the pages of a first data stripe are separated by a plurality of word lines from each other page in the first data stripe.In Example 9, according to the subject matter of any of Examples 7-8, each of the second number of data stripes spans the plurality of pages of the NAND memory cell array. .In Example 10, the subject matter of any one of Examples 6-9, wherein the size of the RAM buffer is less than the first number of parity information and the second number of parity information. 
The combined size of.In Example 11, the subject matter of Example 10, wherein the size of the RAM buffer is less than a parity information area of the NAND memory.In Example 12, the subject matter of any one of Examples 6-11, wherein the first number of parity information associated with the first number of data stripes is obtained from the NAND memory NAND memory of a device loads a random access memory (RAM) buffer of the NAND memory device including a first number of parity information associated with the first number of data stripes from the NAND memory device The NAND memory is loaded into the static RAM (SRAM) buffer of the NAND memory device.Example 13 is a method comprising: programming user data into a plurality of data stripes across a plurality of pages of NAND memory of a NAND memory device of a mobile electronic device; using a random access memory (RAM) of the memory device ) buffer updating first parity information associated with a first plurality of data stripes of the plurality of data stripes; and exchanging an AND between a swap block location of the NAND memory and the RAM buffer The first parity information associated with the first plurality of data stripes.In Example 14, the subject matter of Example 13 includes populating the swap block location with parity information for the plurality of data stripes.In Example 15, the subject matter of Example 14, wherein the RAM buffer is sized to hold a fraction of the parity information held by the swap block location.In Example 16, the subject matter of any one of Examples 13-15, comprising programming second user data into the NAND memory across the NAND memory device of the mobile electronic device. in a second plurality of data stripes of the plurality of data stripes of a plurality of pages; and using the RAM buffer to update second parity information associated with the second plurality of data stripes.In Example 17, the subject matter of Example 16, wherein updating second parity information includes: retrieving second parity information from the swap block location of the NAND memory; storing parity information in the RAM buffer; and performing a logical operation using the second parity information and the second user data to provide updated second parity information.Example 18 is a machine-readable medium comprising instructions that, when executed by a machine, cause the machine to perform operations including programming user data into a plurality of NAND memories across a NAND memory device of a mobile electronic device. updating a first parity associated with a first plurality of data stripes of the plurality of data stripes using a random access memory (RAM) buffer of the memory device information; and exchanging the first parity information associated with the first plurality of data stripes between a swap block location of the NAND memory and the RAM buffer.In Example 19, the subject matter of Example 18, wherein the operations further comprise programming second user data into the plurality of NAND memories across the NAND memory devices of the mobile electronic device. 
in a second plurality of data stripes of the page; and using the RAM buffer to update second parity information associated with the second plurality of data stripes.In Example 20, the subject matter of Example 19, wherein updating second parity information includes: retrieving second parity information from the swap block location of the NAND memory; storing parity information in the RAM buffer; and performing a logical operation using the second parity information and the second user data to provide updated second parity information.Example 21 is at least one machine-readable medium containing instructions that, when executed by a processor circuit, cause the processing circuit to perform operations to implement any of Examples 1-20.Example 22 is an apparatus including means implementing any of Examples 1-20.Example 23 is a system implementing any of Examples 1-20.Example 24 is a method of implementing any of Examples 1-20. |
An optical link for achieving electrical isolation between a controller and a memory device is disclosed. The optical link increases the noise immunity of electrical interconnections, and allows the memory device to be placed at greater distance from the processor than is conventional without power-consuming I/O buffers. |
CLAIMS 1. A memory system comprising: a memory controller; at least one memory device; and an optical path connected between said memory controller and said at least one memory device for optically passing data between said controller and said at least one memory device. 2. The memory system of claim 1, wherein said controller transmits data to said at least one memory device through said optical path. 3. The memory system of claim 1, wherein said controller receives data from said at least one memory device through said optical path. 4. The memory system of claim 1, wherein said data includes at least one of read and write data. 5. The memory system of claim 1, wherein said data includes address data transmitted from said controller to said at least one memory device. 6. The memory system of claim 1, wherein said data includes command data transmitted from said controller to said at least one memory device. <Desc/Clms Page number 13> 7. The memory system of claim 1, wherein said data includes a clock signal. 8. The memory system of claim 1, wherein said data includes control data. 9. The memory system of claim 1, wherein said optical path comprises a plurality of multiplexed optical channels, said data being transmitted over said multiplexed optical channels. 10. The memory system of claim 1, further comprising an electro-optical converter for converting an electrical signal output from said controller to an optical signal for transmission on said optical path. 11. The memory system of claim 10, wherein said converter is wavelength-adjustable. 12. The memory system of claim 10, further comprising an electro-optical converter for converting an optical signal on said optical path to an electrical signal and transmitting said electrical signal to said controller. 13. The memory system of claim 1 further comprising: <Desc/Clms Page number 14> an electro-optical converter for converting an electrical signal output from said at least one memory device to an optical signal for transmission on said optical path. 14. The memory system of claim 1 further comprising: an electro-optical converter for converting an optical signal on said optical path to an electrical signal an transmitting said electrical signal to said at least one memory device. 15. The memory system of claim 9, further comprising: a multiplexer associated with said controller for multiplexing said optical channels, and a demultiplexer associated with said at least one memory device for demultiplexing said multiplexed optical channels. 16. The memory system of claim 9, further comprising: a multiplexer associated with said at least one memory device for multiplexing optical channels and providing multiplexed optical channels to said optical path; and a demultiplexer associated with said memory controller for demultiplexing said multiplexed optical channels. 17. The memory system of claim 9, further comprising: an optical multiplexer and demultiplexer located on each side of said optical path. <Desc/Clms Page number 15> 18. The memory system of claim 17, wherein said data includes at least read and write data. 19. The memory system of claim 17, wherein said data includes command data. 20. The memory system of claim 17, wherein said data includes address data. 21. The memory system of claim 17, said data includes a clock signal. 22. The memory system of claim 17, wherein said data includes control data. 23. 
The memory system of claim 17, further comprising: electrical paths connected between said controller and said at least one memory device for passing data between said controller and memory device. 24. The memory system of claim 1, wherein said at least one memory device is located on a memory module. 25. The memory system of claim 24, further comprising: <Desc/Clms Page number 16> an optical coupler at said memory module, having a connector for connecting with said optical path. 26. The memory system of claim 11, further comprising: a wavelength sensing mechanism connected to said controller, for providing wavelength information to said controller with respect to an optical signal on said optical path. 27. The memory system of claim 26, wherein said wavelength sensing mechanism is located at a controller side of said optical path. 28. The memory system of claim 26, wherein said controller provides wavelength adjustment information to said converter. 29. The memory system of claim 1, wherein said optical path comprises a single optical path between said controller and at least one memory device for passing at least read/write data present on a plurality of electrical paths between said controller and at least one memory device. 30. The memory system of claim 29 wherein said single optical path further passes command data between said controller and at least one memory device. <Desc/Clms Page number 17> 31. The memory system of claim 29 wherein said single optical path further passes address data between said controller and at least one memory device. 32. The memory system of claim 29 wherein said single optical path further passes a clock signal between said controller and at least one memory device. 33. The memory system of claim 1 wherein said data includes read/write data which originates on a plurality of electrical paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical path. 34. The memory system of claim 1 wherein said data includes command data which originates on a plurality of electrical paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical path. 35. The memory system of claim 1 wherein said data includes address data which originates on a plurality of electrical paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical path. 36. The memory system of claim 1 wherein said data includes clock signal data which originates on an electrical path, said optical path comprising a discrete optical guide respectively associated with said electrical path. <Desc/Clms Page number 18> 37. The memory system of claim 1 wherein said data includes clock signal data which originates on a plurality of electrical signal paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical signal paths. 38. The memory system of claim 1 wherein said data includes control signal data which originates on an electrical signal path, said optical path comprising a discrete optical guide associated with said electrical signal path. 39. The memory system of claim 1, wherein said controller, at least one memory device, and optical path are all integrated on the same die. 40. 
The memory system of claim 1, further comprising: a processor, for communicating with said at least one memory device, wherein said controller, at least one memory device, processor, and optical path are all integrated on the same die. 41. The memory system of claim 1, further comprising: a processor, for communicating with said at least one memory device, wherein said, processor and said at least one memory device are provided on separate dies and communicate via said optical path. <Desc/Clms Page number 19> 42. The memory system of claim 41, wherein said separate dies are provided in a common package. 43. The memory system of claim 41, wherein said separate dies are separately packaged and said optical path interconnects said packages. 44. The memory system of claim 24, wherein said memory module comprises an electro-optical converter for connecting optical data from said optical path to electrical signals for said at least one memory device. 45. A computer system, comprising: a processor; a memory system connected to said processor, said memory system comprising: a memory controller; at least one memory device; and an optical path connected between said memory controller and said at least one memory device for optically passing data between said controller and said at least one memory device. 46. A computer system of claim 45, wherein said controller transmits data to said at least one memory device through said optical path. <Desc/Clms Page number 20> 47. A computer system of claim 45, wherein said controller receives data from said at least one memory device through said optical path. 48. A computer system of claim 45, wherein said data includes at least one of read and write data. 49. A computer system of claim 45, wherein said data includes address data transmitted from said controller to said at least one memory device. 50. A computer system of claim 45, wherein said data includes command data transmitted from said controller to said at least one memory device. 51. A computer system of claim 45, wherein said data includes a clock signal. 52. A computer system of claim 45, wherein said data includes control data. 53. A computer system of claim 45, wherein said optical path comprises a plurality of multiplexed optical channels, said data being transmitted over said multiplexed optical channels. <Desc/Clms Page number 21> 54. A computer system of claim 45, further comprising an electro-optical converter for converting an electrical signal output from said controller to an optical signal for transmission on said optical path. 55. A computer system of claim 54, wherein said converter is wavelength-adjustable. 56. A computer system of claim 54, further comprising an electro-optical converter for converting an optical signal on said optical path to an electrical signal and transmitting said electrical signal to said controller. 57. A computer system of claim 45, comprising an electro-optical converter for converting an electrical signal output from said at least one memory device to an optical signal for transmission on said optical path. 58. A computer system of claim 45, comprising an electro-optical converter for converting an optical signal on said optical path to an electrical signal an transmitting said electrical signal to said at least one memory device. 59. 
A computer system of claim 52, comprising a multiplexer associated with said controller for multiplexing said optical channels, and a demultiplexer associated with said at least one memory device for demultiplexing said multiplexed optical channels. <Desc/Clms Page number 22> 60. A computer system of claim 52, comprising a multiplexer associated with said at least one memory device for multiplexing optical channels and providing multiplexed optical channels to said optical path; and a demultiplexer associated with said memory controller for demultiplexing said multiplexed optical channels. 61. A computer system of claim 52, comprising an optical multiplexer and demultiplexer located on each side of said optical path. 62. A computer system of claim 61, wherein said data includes at least read and write data. 63. A computer system of claim 61, wherein said data includes command data. 64. A computer system of claim 61, wherein said data includes address data 65. A computer system of claim 61, wherein said data includes a clock signal. 66. A computer system of claim 61, wherein said data includes control data. 67. A computer system of claim 61, further comprising: <Desc/Clms Page number 23> electrical paths connected between said controller and said at least one memory device for passing data between said controller and memory device. 68. A computer system of claim 45, wherein said at least one memory device is located on a memory module. 69. A computer system of claim 68, further comprising: an optical coupler at said memory module, having a connector for connecting with said optical path. 70. A computer system of claim 55, further comprising: a wavelength sensing mechanism connected to said controller, for providing wavelength information to said controller with respect to an optical signal on said optical path. 71. A computer system of claim 70, wherein said wavelength sensing mechanism is located at a controller side of said optical path. 72. A computer system of claim 70, wherein said controller provides wavelength adjustment information to said converter. <Desc/Clms Page number 24> 73. The computer system of claim 45, wherein said optical path comprises a single optical path between said controller and at least one memory device for passing at least read/write data present on a plurality of electrical paths between said controller and at least one memory device. 74. The computer system of claim 45, wherein said single optical path further passes command data between said controller and at least one memory device. 75. The computer system of claim 45, wherein said single optical path further passes address data between said controller and at least one memory device. 76. The computer system of claim 45, wherein said single optical path further passes a clock signal between said controller and at least one memory device. 77. The computer system of claim 45, wherein said data includes read/write data which originates on a plurality of electrical paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical path. 78. The computer system of claim 45, wherein said data includes command data which originates on a plurality of electrical paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical path. <Desc/Clms Page number 25> 79. 
The computer system of claim 45, wherein said data includes address data which originates on a plurality of electrical paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical path. 80. The computer system of claim 45, wherein said data includes clock signal data which originates on an electrical path, said optical path comprising a discrete optical guide respectively associated with said electrical path. 81. The computer system of claim 45, wherein said data includes clock signal data which originates on a plurality of electrical signal paths, said optical path comprising a plurality of discrete optical guides respectively associated with said electrical signal paths. 82. The computer system of claim 45, wherein said data includes control signal data which originates on an electrical signal path, said optical path comprising a discrete optical guide associated with said electrical signal path. 83. The computer system of claim 45, wherein said controller, at least one memory device, and optical path are all integrated on the same die. 84. The computer system of claim 45, wherein said processor, controller, at least one memory device and optical path are all integrated on the same die. <Desc/Clms Page number 26> 85. The computer system of claim 45, wherein said processor and at least one memory device are provided on separate dies and communicate via said optical path. 86. The computer system of claim 85, wherein said separate dies are provided in a common package. 87. The computer system of claim 85, wherein said separate dies are separately packaged and said optical path interconnects said packages. 88. The computer system of claim 68, wherein said memory module comprises an electro-optical converter for connecting optical data from said optical path to electrical signals for said at least one memory device. 89. An electro-optical converter for a memory system comprising: at least one input for receiving an electrical data signal from a memory controller; at least one device for converting said data signal to an optical signal; and at least one optical output for transmitting said optical signal into an optical path. 90. The electro-optical converter of claim 89, further comprising: said at least one device for converting being wavelength-adjustable. <Desc/Clms Page number 27> 91. The electro-optical converter of claim 89, wherein said optical output further comprises either a light emitting diode or injection laser diode. 92. An electro-optical converter for a memory system comprising: at least one input for receiving an electrical data signal from at least one memory device; at least one device for converting said data signal to an optical signal; and at least one optical output for transmitting said optical signal into an optical path. 93. The electro-optical converter of claim 92, further comprising: said at least one device for converting being wavelength-adjustable. 94. The electro-optical converter of claim 92, wherein said optical output further comprises either a light emitting diode or injection laser diode. 95. An electro-optical converter for a memory system comprising: at least one input for receiving a optical data signal from an optical path; at least one electro-optical converter for converting said received data signal to an electrical signal; and <Desc/Clms Page number 28> at least one electrical output for transmitting said output signal to an electrical path of a memory controller. 96. 
The electro-optical converter of claim 95, further comprising: said at least one electro-optical converter being wavelength-adjustable. 97. The electro-optical converter of claim 95, wherein said optical output further comprises either a photodiode. 98. An electro-optical converter for a memory system comprising: at least one input for receiving a optical data signal from an optical path; at least one electro-optical converter for converting said received data signal to an electrical signal; and at least one electrical output for transmitting said output signal to an electrical path of a memory device. 99. The electro-optical converter of claim 98, further comprising: said at least one electro-optical converter being wavelength-adjustable. <Desc/Clms Page number 29> 100. The electro-optical converter of claim 98, wherein said optical output further comprises a photodiode. 101. A method of operating a memory system comprising: receiving electrical signals from a memory controller; converting said received electrical signals into optical signals; and transmitting said optical signals over an optical path to a memory device. 102. The method of claim 101, further comprising: said controller receiving data from said at least one memory device through said optical path. 103. The method of claim 102, wherein said data includes at least one of read and write data. 104. The method of claim 102, wherein said data includes address data transmitted from said controller to said at least one memory device. 105. The method of claim 102, wherein said data includes command data transmitted from said controller to said at least one memory device. <Desc/Clms Page number 30> 106. The method of claim 102, wherein said data includes a clock signal. 107. The method of claim 102, wherein said data includes control data. 108. The method of claim 102, wherein said optical path comprises a plurality of multiplexed optical channels, said data being transmitted over said multiplexed optical channels. 109. The method of claim 102, further comprising: converting an electrical signal output from said controller to an optical signal for transmission on said optical path. 110. The method of claim 109, wherein said conversion step further comprises: adjusting the wavelength of said optical path. 111. The method of claim 108, further comprising: multiplexing said optical channels, and demultiplexing said multiplexed optical channels. 112. The method of claim 108, further comprising: <Desc/Clms Page number 31> multiplexing optical channels and providing multiplexed optical channels to said optical path; and demultiplexing said multiplexed optical channels. 113. The method of claim 108, further comprising: an optical multiplexer and demultiplexer located on each side of said optical path. 114. The method of claim 101, wherein said at least one memory device is located on a memory module. 115. The method of claim 114, further comprising: an optical coupler at said memory module, having a connector for connecting with said optical path. 116. The method of claim 101, further comprising: providing wavelength information to said controller with respect to an optical signal on said optical path. 117. The method of claim 116, wherein said controller provides wavelength adjustment information to said converter. <Desc/Clms Page number 32> 118. 
The method of claim 101, further comprising: combining a plurality of electrical paths between said controller and at least one memory device into a single optical path between said controller and at least one memory device for passing at least read/write data present on a 119. The method of claim 118 wherein said single optical path further passes command data between said controller and at least one memory device. 120. The method of claim 118 further comprising: passing address data between said controller and at least one memory device along said single optical path. 121. The method of claim 101, further comprising: integrating said controller, at least one memory device, and optical path all on the same die. 122. The method of claim 121, further comprising: integrating a processor for communicating with said at least one memory device with said controller, at least one memory device, and optical path all within the same die. 123. The method of claim 101, further comprising: <Desc/Clms Page number 33> providing a processor for communicating with said at least one memory device on separate dies; and communicating between said processor and at least one memory device via said optical path. 124. The method of claim 123, further comprising: providing said separate dies in a common package. 125. The method of claim 123, further comprising: separately packaging said separate dies; and interconnecting said packages via said optical path. 126. A method of operating a memory system comprising receiving electrical signals from at least one memory device; converting said received electrical signals into optical signals; and transmitting said optical signal over an optical path to a memory controller. 127. The method of claim 126, further comprising: said controller receiving data from said at least one memory device through said optical path. <Desc/Clms Page number 34> 128. The method of claim 127, wherein said data includes at least one of read and write data. 129. The method of claim 127, wherein said data includes address data transmitted from said controller to said at least one memory device. 130. The method of claim 127, wherein said data includes command data transmitted from said controller to said at least one memory device. 131. The method of claim 127, wherein said data includes a clock signal. 132. The method of claim 127, wherein said data includes control data. 133. The method of claim 127, wherein said optical path comprises a plurality of multiplexed optical channels, said data being transmitted over said multiplexed optical channels. 134. The method of claim 126, further comprising: converting an electrical signal output from said controller to an optical signal for transmission on said optical path. <Desc/Clms Page number 35> 135. The method of claim 134, wherein said conversion step further comprises: adjusting the wavelength of said optical path. 136. The method of claim 133, further comprising: multiplexing said optical channels, and demultiplexing said multiplexed optical channels. 137. The method of claim 133, further comprising: multiplexing optical channels and providing multiplexed optical channels to said optical path; and demultiplexing said multiplexed optical channels. 138. The method of claim 133, further comprising: an optical multiplexer and demultiplexer located on each side of said optical path. 139. The method of claim 126, wherein said at least one memory device is located on a memory module. 140. 
The method of claim 139, further comprising: <Desc/Clms Page number 36> an optical coupler at said memory module, having a connector for connecting with said optical path. 141. The method of claim 126, further comprising: providing wavelength information to said controller with respect to an optical signal on said optical path. 142. The method of claim 141, wherein said controller provides wavelength adjustment information to said converter. 143. The method of claim 126, further comprising: combining a plurality of electrical paths between said controller and at least one memory device into a single optical path between said controller and at least one memory device for passing at least read/write data present on a 144. The method of claim 143 wherein said single optical path further passes command data between said controller and at least one memory device. 145. The method of claim 143 further comprising: passing address data between said controller and at least one memory device along said single optical path. <Desc/Clms Page number 37> 146. The method of claim 126, further comprising: integrating said controller, at least one memory device, and optical path all on the same die. 147. The method of claim 146, further comprising: integrating a processor for communicating with said at least one memory device with said controller, at least one memory device, and optical path all within the same die. 148. The method of claim 126, further comprising: providing a processor for communicating with said at least one memory device on separate dies; and communicating between said processor and at least one memory device via said optical path. 149. The method of claim 148, further comprising: providing said separate dies in a common package. 150. The method of claim 148, further comprising: separately packaging said separate dies; and <Desc/Clms Page number 38> interconnecting said packages via said optical path. 151. The memory system of claim 9, wherein said plurality of multiplexed optical channels use Time Division Multiplexing (TDM). 152. The memory system of claim 9, wherein said plurality of multiplexed optical channels use Wave Division Multiplexing (WDM). 153. The memory system of claim 9, wherein said plurality of multiplexed optical channels use Frequency Division Multiplexing (WDM). 154. The memory system of claim 1, wherein said optical path optically passes compressed data. 155. The computer system of claim 53, wherein said plurality of multiplexed optical channels use Time Division Multiplexing (TDM). 156. The computer system of claim 53, wherein said plurality of multiplexed optical channels use Wave Division Multiplexing (WDM). <Desc/Clms Page number 39> 157. The computer system of claim 53, wherein said plurality of multiplexed optical channels use Frequency Division Multiplexing (WDM). 158. The computer system of claim 45, wherein said optical path optically passes compressed data. 159. The method of claim 108, wherein said plurality of multiplexed optical channels use Time Division Multiplexing (TDM). 160. The method of claim 108, wherein said plurality of multiplexed optical channels use Wave Division Multiplexing (WDM). 161. The method of claim 108, wherein said plurality of multiplexed optical channels use Frequency Division Multiplexing (WDM). 162. The method of claim 101, wherein said step of transmitting further comprises transmitting compressed data. |
<Desc/Clms Page number 1> AN OPTICAL INTERCONNECT IN HIGH-SPEED MEMORY SYSTEMS FIELD OF THE INVENTION [0001] The present invention relates to communicating at high speed data signals to and from memory storage devices such as DRAM memory devices. BACKGROUND OF THE INVENTION [0002] As computer processor and DRAM (Dynamic Random Access Memory) memory speeds increase, their bus speeds increase also. This increased speed also increases signal noise at connection points where a memory controller and DRAM memory devices connect to a bus. In addition, the connections of the bus also have associated electrical properties such as capacitance and inductance which, while causing minimal problems at low data speeds, causes increasingly significant problems at high speed. Consequently, at high speed, conventional bus arrangements can introduce signal distortion, noise, delays and other unwanted spurious signal phenomenon. ] Current memory devices commonly operate at hundreds of megahertz, but it is anticipated that computer bus speeds, which tend to run slightly slower than microprocessor speeds, will soon extend beyond lGHz. At such high frequencies, the minutest amount of signal aberration caused by the electrical properties of the electrical bus may cause severe and unexpected consequences. Additionally, the distance between components on a bus must be kept short, to minimize signal distortions and help insure that data and control signals reach their destination very quickly. ] Accordingly, a memory bus structure which reduces or eliminates signal distortion, noise, and other problems and permits reliable high speed (e. g. greater than 1 GHz) operation is desired. <Desc/Clms Page number 2> BRIEF SUMMARY OF THE INVENTION [0005] In one aspect the invention provides a memory apparatus and method of its operation which utilizes an optical path connected between a memory controller or processor and at least one memory device for passing data between the controller or processor and memory device at high throughput speed. BRIEF DESCRIPTION OF THE DRAWINGS [0006] The foregoing and other features and advantages of the invention will become more apparent from the detailed description of the exemplary embodiments of the invention given below with reference to the accompanying drawings in which : Fig. 1 shows a generic overview of the present invention ; Fig. 2 shows one exemplary embodiment of the invention; Fig. 3 shows a transistor-level view of the transmitter and receiver used in an exemplary embodiment of the invention; Fig. 4 shows a second exemplary embodiment of the invention; Fig. 5 shows a third exemplary embodiment of the invention; Fig. 6 shows a fourth exemplary embodiment of the invention; Fig. 7 shows a fifth embodiment of the invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS <Desc/Clms Page number 3>] The present invention uses one or more optical links between a processor and/or a memory controller and a DRAM memory device. The optical link includes, but is not limited, to optical fiber and optical waveguide links as described below in connection with various exemplary embodiments of the invention. Fig. 1 shows a high level block diagram of the present invention. A processor 100 is connected to a memory controller 104 which in turn is connected to a memory module 113 containing one or more memory devices 112 using one or more optical links 108. 
The memory controller 104 and modules 113 have optical couplers which enable them to connect to the optical links 108 to maintain optical continuity. The modules 113 have optical plug-in connectors to the optical links 108, but also have standard (non-optical) Dual Inline Memory Module (DIMM) connectors 109 for supplying power and other low-frequency signals. ] In the context of the invention, the processor 100, controller 104, and memory devices 112 can be located either on the same die or located on separate dies. In some cases, processor 100 can also serve as the memory controller 104 in which case a separate memory controller 104 can be omitted. [0009] Fig. 2 shows a first exemplary embodiment of the invention in which a single common optical link 108a transmits a plurality of data streams between a memory controller 104 and memory modules 113 using paired optical transmitters and receivers on opposite sides of link 108a pre-set to communicate at a respective wavelength. Fig. 2 shows the use of separate data (DQ), command (CMD), address (ADD), and clock (CLK) paths between controller 104 and each memory module 113 as is typical in a computer bus structure. It is also possible to send control and address data over the same data paths as is also well known in the art. For brevity, only the data (DQ) optical path will be discussed in detail, it being understood that the optical paths for other data and clock information sent <Desc/Clms Page number 4> by the controller will be handled the same except for the direction of data/clock pulse flow. It should also be understood that while the data (DQ) paths are bidirectional, the command/address and clock paths are unidirectional in that the dataflow is from controller 104 to the modules 113 and associated memory devices 112. ] As shown in Fig. 2, each data DQ path of the memory controller 104 is coupled to a respective optical transmitting/receiving device To/R... Tjg/R, each collectively identified by the label 201. Each transmitting/receiving device converts an electrical signal received from a DQ path of memory controller 114 and converts the electrical signal to an optical signal for transmission on optical link 108a to a memory module 113 over optical link 108a. Each transmitter/receiver 201 is also capable of receiving an optical signal from a module 113 and converting it to an electrical signal and sending it to controller 104 on a respective data (DQ) path. ] In addition to the transmitter/receivers 201 provided on the controller side, respective transmitters 203 are also provided for converting each of the electrical signals on the command, address and clock signal paths to optical signals over link 108a and transmitting these optical signals to modules 113. The transmitter/receivers 201 and transmitters 203 may form part of an electrical/optical converter 205. ] The Fig. 2 embodiment uses a single optical link 108a constructed as an optical fiber or optical waveguide between controller 104 and the memory modules 113. In this way, many datapins of controller 104 communicate over a single optical link 108a. 
In order to keep the optical signals from the different data (DQ), command (CMD), address (ADDRESS), and clock (CLK) paths from interfering with each other, wave division multiplexing is employed so that the optical signals from each of the transmitter/receiver devices 201 and transmitter devices 203 have a respective optical <Desc/Clms Page number 5> carrier wavelength (frequency) which is modulated by data sent on the various signal paths from controller 104 to converter 205. Likewise, the optical receiver portion of each transmitter/receiver 201 operates at a respective optical wavelength. ] As further shown in Fig. 2, the various optical signals from transmitter/receivers 201 and transmitters 203 are optically combined in a multiplexing portion of a wavelength division multiplexer/demultiplexer 207 for transmission over the common optical link 108a to memory modules 113. [0014] Each module 113 also contains a wave division multiplexer/demultiplexer 209 which receives the optically multiplexed signals on optical link 108a and wavelength demultiplexes them in a demutiplexer portion and passes the demuliplexed signals to respective transmitter/receivers 211, which electrically connect to the data (DQ) paths of the memory devices 112. In addition, the demultiplexed optical signals for the command (CMD), address (ADD) (or combined command/address) and clock (CLK) signal paths are passed on to receivers 213 which convert optical signals to electrical signals which are electrically coupled to the electrical command (CMD), address (ADD) and clock (CLK) signal paths of the memory devices 112. ] Data read from memory devices 112 is transmitted on the data (DQ) paths of the memory devices 112 to respective transmitter/receivers 211 where the electrical data is converted to an optical signal at a respective wavelength and sent to multiplexer/demultiplexer 209 where the data on the respective DQ optical paths is combined in the wave division multiplexer of multiplexer/demultiplexer 209. This data is then sent over optical link 108a to multiplexer/demultiplexer 207 where it is demultiplexed and passed to respective transmitter/receivers 201 where the DQ optical data is connected to electrical DQ data which is sent to respective DQ data paths of <Desc/Clms Page number 6> controller 104. Fig. 2 illustates the optical coupling of two memory modules 113 to memory controller 104 through the electro-optical converter 205 provided at the memory controller 104 side of optical link 108 and an electro-optical converter and 219 provided on the memory modules 113; however, it should be understood that any number of memory modules 113, containing any number of memory devices 112, may be optically coupled to controller 104 over optical link 108a. ] Fig. 3 shows a simplified optical transmitter 116 and optical receiver 120 which may be used in the electro/optical transmitter/receivers 201,211 and in the electro/optical transmitters 203 and receivers 213. A LED (Light Emitting Diode) or ILD (Injection Laser Diode) light emitter 124 in transmitter 116 provides a light output signal to an optical path 241 at a redefined wavelength, in response to an applied electrical signal at the gate of a transistor 126. At the receiver 120 side, a photodiode 128 couples light pulses received from an optical path 241 to the gate of an n-channel transistor 134. A p- channel biasing transistor 138 sources current to the n-channel transistor 134. 
A resistor 135 is positioned between the gate of transistor 134, as well as the drain of transistor 138. The transistors 134 and 138 and resistor 135 form an inverting amplifier 137. The output 139 of the inverting amplifier 137 is an electrical signal. [0017] Although Fig. 3 illustrates the light transmitter 116 and receiver 120 as discrete components, these devices are actually integrated devices which may be integrated together with multiplexer/demultiplexer 207 on a converter 205 chip or integrated on the same chip as the memory controller 104. At the module 113, the transmitter 116 and receiver 120 are preferably integrated on the same chip which contains the multiplexer/demultiplexer 209. It is also possible to integrate the transmitter 116 and <Desc/Clms Page number 7> receiver 120 on the module side within the actual memory devices 112 in which case each memory device 112 would contain its own converter circuit 219 shown in Fig. 3. [0018] Although a silicon substrate may be used for integrating the LED or ILD light emitter 124 and/or photodiode 128, the more preferred substrate material for such devices, particularly for LED or ILD 124 is gallium arsenide, as known in the art. Finally, it should be understood that while Fig. 3 illustrates a unidirectional data path, in actuality the data (DQ) paths in a memory system are bi-directional and that an optical transmitter 116 and receiver 120 are therefore understood to be employed at each path end of a bidirectional optical link 108a, as shown by transmitter/receivers 201 and 211. ] As noted, the Fig. 2 arrangement relies on wavelength division multiplexing of the different signal paths which exist between memory controllers 104 and the individual memory devices 112. Thus, each transmitter/receiver 201, transmitter 203 and receiver 235 as well as multiplexer/demultiplexers 207,209 must operate at specified optical wavelengths. These wavelengths can be controlled using known filter circuits. However, it is often difficult to ensure that a manufacturer's device operates precisely at a predetermined wavelength. To this end, it is also known to adjust operating conditions of an electro/optical device to ensure that it operates at a predetermined wavelength. ] Fig. 4 shows a modification of a portion of the system of Fig. 2, where transmitting devices 201 and receiving devices 203 are shown as being wavelength- adjustable. For clarity, only the DQO pin is shown, while DQ1-DQ15 are implied, similar to the representation in Fig. 2. During fabrication, the thicknesses and purities of the materials deposited as well as other factors make it difficult to fabricate a transmitter 203 and the transmitter portion of receiver/transmitters 201 and 211 to transmit at a precise predefined wavelength. Accordingly, the light emitters are wavelength adjustable. <Desc/Clms Page number 8> Wavelength detectors 233 are used to sense the nominal wavelength of an optically transmitted signal from each of the transmitters of devices 201 and 203 and data representing the sensed wavelength is fed back to controller 104 which determines if a transmitter is transmitting at its assigned wavelength and, if not, a wavelength adjuster 231 is operated by controller 104 which sends data to an addressed wavelength adjuster 231 for adjusting the wavelength over the command (CMD) signal path. Separate control signal paths can also be used for this purpose. 
The wavelength of optical signals sent by the data transmitters 211 in the modules 113 can also be sensed by the wavelength detector 233 and adjustment data can be sent to addressed wavelength adjuster 235 on the module 113 which adjusts the wavelength of the transmitter portion of transmitter/receiver 211. The adjustments can be accomplished during initialization of the memory system for operation. [0021] Fig. 5 shows another embodiment of the invention, which utilizes an optical link 108b for each data path on an optical bus 111. In this embodiment there is a one-to- one replacement of an electrical bus line which normally interconnects memory controller 104 with a memory module 113 with an optical link 108b. For simplicity, Fig. 5 only shows four such optical links (two DQ, one CMD of a CLK path). The individual optical links 108b connect with transmitter/receivers 211 or receivers 213 on the memory modules which convert the optical signals to electrical signals for use by memory devices 112 and electrical signals to optical signals for data read from the memory devices 112. As seen, there are several different techniques of optical data transmission which can be used on the optical link 108 in the present invention. These techniques can include but are not limited to Time Division Multiplexing (TDM). Using TDM, data from multiple pins can be used to occupy a single optical channel. Also, TDM can be used in <Desc/Clms Page number 9> conjunction with other optical data transmission schemes to reduce the number of optical channels (either fiber or wavelength) needed within an optical system. Two more examples of such techniques are Wavelength Division Multiplexing (WDM) and Frequency Division Multiplexing (FDM). Additionally, data compression techniques can be used. Such techniques have in common that they reduce the volume of data transmitted, the number of optical channels needed, or both. An embodiment of the present invention using WDM is shown in Fig. 2. WDM enables the simultaneous transmission of multiple data channels on the same physical optical link, by utilizing several different wavelengths on that optical link at the same time. An optical multiplexer (mux) portion of the multiplexer/demultiplexer 207, 209 combines different wavelength bands from individual optical sources into a multiple wavelength light beam for simultaneous transmission through a common optical link. At the receiving end of the optical link, an optical demultiplexer (demux) portion of a multiplexer/demultiplexer 209 demultiplexes or spatially disburses collimated multiple wavelength light from the optical link into separate wavelength bands, each of which can be directed to an individual optical receiver. Although Fig. 2 shows combination of multiplexer/demultiplexer devices 207,209 it should be apparent that separate multiplexers and demultiplexers can be used as well to perform the required multiplexing and demultiplexing functions. Another optical transmission technique, as shown in Fig. 5, uses a separate optical link for each data path. [0024] It should also be noted that although all data paths (e. g. , write/read data (DQ), command (CMD), address (ADD), clock (CLK) between the memory controller 104 and modules 113 are shown as utilizing optical transmission, it is also possible to use optical transmission only on the high speed data paths, e. g. the write/read data (CD) and <Desc/Clms Page number 10> clock (CLK) paths and utilize conventional electrical bus lines for slower speed data paths, e. g. 
command (CMB), address (ADD). The present invention can use any modulation format in the optical link to optimize either Signal to Noise Ration (SNR) or bandwidth utilization. This could include conventional digital modulation techniques such as FM or Non Return To Zero (NRTZ). ] The processor 100, controller 104, and memory devices 112 are typically located on separate dies with the memory devices being mounted on modules 113 which connect with the optical link 108a or 108b. However, it is also possible to integrate the processor and memory devices on the same die, with the processor incorporating the functions of the memory controller or with the memory controller also being integrated on the processor die. In the case where they are located on the same die, an integrated optical waveguide can be used to link them. Fig. 6, for example, shows an exemplary confined square pipe waveguide 212. Positioned on die 200, the waveguide 202 connects a processor with an integrated memory controller with DRAM 112. The waveguide 200 has a first metal layer 208 on top, a second metal layer 210 on the bottom, end plates 212 connecting the top and bottom layers, and an optically transmissive insulator 214 in middle through which light pulses carrying data are transmitted. The two metal layers (208,210) act as waveguides confining the light pulses. The insulator 214 could be made of Si02 which is commonly used in chip formation. Furthermore, in those configurations where the processor 204 and memory devices 206 are not on the same wafer or die and the module 113 and controller 104 are omitted, the waveguide 202 could also be implemented in freespace (air or vacuum). ] Fig. 7 shows an optical link 108c in the form of a flexible optical fiber. Using such a fiber, a processor 100 and memory devices 112 can be integrated on separate dies <Desc/Clms Page number 11> residing in separate planes and packaged separately or together, with the processor 100 and memory devices 112 being interconnected by the flexible optical fiber 108c. This allows easier fabrication of the bus lines as well as non-planar stacking of processor 100 and DRAM devices 112 in separate or common packaging. ] All of the above embodiments have in common that they achieve electrical isolation between the memory device 112 and the controller 104. They also make the optical link 108a, 108b, and 108c interconnections immune to noise, including at high frequency. Because the link is operated at high frequency, the clock signal for latching in data is sent with the data. Because fiber optic links do not affect pulse shape as do conventional electrical links, the memory devices 112 can be placed a greater distance from the controller 104 than is conventional. An additional advantage of the invention is that fiber optic links have lower power dissipation than conventional electrical links. This is because fiber optic links do not require I/O buffers, which consume power and also slow the propagation rate at which data is transferred. ] While the invention has been described and illustrated with reference to specific exemplary embodiments, it should be understood that many modifications and substitutions can be made without departing from the spirit and scope of the invention. Accordingly, the invention is not to be considered as limited by the foregoing description but is only limited by the scope of the appended claims. |
Systems and methods for persistent operations include a host and a memory system. The memory system, upon receiving a Persistent Write command and associated write data from the host, performs a Persistent Write of the write data to a non-volatile memory in the memory system based on the Persistent Write command. The memory system may also a receive a write identification (WID) associated with the Persistent Write command from the host and provide, upon successful completion of the Persistent Write, a Persistent Write completion indication along with the associated WID to the host.Des systèmes et des procédés pour des opérations persistantes comprennent un hôte et un système de mémoire. Le système de mémoire, lors de la réception d'une commande d'écriture persistante et de données d'écriture associées provenant de l'hôte, effectue une écriture persistante des données d'écriture dans une mémoire non volatile dans le système de mémoire sur la base de la commande d'écriture persistante. Le système de mémoire peut également recevoir une identification d'écriture (WID) associée à la commande d'écriture persistante provenant de l'hôte et fournir, lors de l'achèvement réussi de l'écriture persistante, une indication d'achèvement d'écriture persistante conjointement avec l'ID associé à l'hôte. |
A method of performing persistent operations, the method comprising: receiving, at a memory system, a Persistent Write command and associated write data from a host; and performing a Persistent Write of the write data to a non-volatile memory in the memory system based on the Persistent Write command.The method of claim 1, further comprising receiving a write identification (WID) associated with the Persistent Write command from the host.The method of claim 2, further comprising providing, from the memory system upon successful completion of the Persistent Write, a Persistent Write completion indication along with the associated WID to the host.The method of claim 3, comprising providing two or more Persistent Write completion indications to the host in a different order from an order in which corresponding two or more Persistent Write commands were received from the host.The method of claim 3, further comprising receiving, from the host a request to send status for one or more Persistent Writes along with associated WIDs.The method of claim 5, further comprising providing a status packet to the host, the status packet comprising WIDs for Persistent Write commands whose execution has been completed.The method of claim 2 wherein the WID comprises a multi-bit identification of a Persistent Write and a valid bit.The method of claim 2, further comprising receiving a group of two or more Persistent Write commands with a common WID, with the last Persistent Write command of the group having a Persist bit set to 1 and the remaining Persistent Write 20 commands having respective Persist bits set to 0 and providing a Persistent Write completion indication for the last Persistent Write command.The method of claim 1, further comprising receiving a FLUSH command from the host, wherein the FLUSH command indicates that all prior writes buffered in volatile media are to be pushed to non-volatile or persistent memory.The method of claim 9, further comprising providing a FLUSH completion indication upon completion of execution of the FLUSH command to the host.The method of claim 1, further comprising receiving one or more Persistent Write commands, maintaining statuses of the one or more Persistent Write commands completed in a completed bitmap and statuses of the one or more Persistent Write commands pending in a pending bitmap, and upon request for status from the host, providing the completed bitmap if there is no uncorrectable error or the pending bitmap if there is an uncorrectable error.The method of claim 1, wherein the memory system is a non-volatile dual in-line memory module configured to support Persistent Writes (NVDIMM-P).A method of performing persistent operations, the method comprising: providing, from a host to a memory system, a Persistent Write command and associated write data, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non-volatile memory.The method of claim 13, further comprising providing a write identification (WID) associated with the Persistent Write command to the memory system from the host.The method of claim 14, further comprising receiving at the host, a Persistent Write completion indication along with the associated WID from the memory system upon successful completion of the Persistent Write. 
21The method of claim 15, comprising receiving from the memory system, two or more Persistent Write completion indications in a different order from an order in which corresponding two or more Persistent Write commands were sent from the host to the memory system.The method of claim 14, further comprising, sending, from the host to the memory system, a request to send status for one or more Persistent Writes along with associated WIDs.The method of claim 17, further comprising receiving a status packet by the host from the memory system, the status packet comprising WIDs for Persistent Write commands whose execution has been completed.The method of claim 14, wherein the WID comprises a multi-bit identification of a Persistent Write and a valid bit.The method of claim 14, further comprising sending from the host to the memory system, a group of two or more Persistent Write commands with a common WID, with the last Persistent Write command of the group having a Persist bit set to 1 and the remaining Persistent Writes having respective Persist bits set to 0 and receiving from the memory system, a Persistent Write completion indication for the last Persistent Write.The method of claim 13, further comprising sending a FLUSH command from the host to the memory system, wherein the FLUSH command indicates that all prior writes buffered in volatile media are to be pushed to non-volatile or persistent memory by the memory system.The method of claim 21, further comprising receiving at the host, a FLUSH completion indication upon completion by the memory system of execution of the FLUSH command. 22The method of claim 13, wherein the memory system is a non-volatile dual in- line memory module configured to support Persistent Writes (NVDIMM-P).An apparatus comprising: a memory system configured to: receive a Persistent Write command and associated write data from a host; and perform a Persistent Write of the write data to a non-volatile memory in the memory system based on the Persistent Write command.The apparatus of claim 24, wherein the memory system is further configured to receive a write identification (WID) associated with the Persistent Write command from the host.The apparatus of claim 25, wherein the memory system is further configured to provide, upon successful completion of the Persistent Write, a Persistent Write completion indication along with the associated WID to the host.The apparatus of claim 26, wherein the memory system is further configured to provide two or more Persistent Write completion indications to the host in a different order from an order in which corresponding two or more Persistent Write commands were received from the host.The apparatus of claim 26, wherein the memory system is further configured to receive, from the host a request to send status for one or more Persistent Writes along with associated WIDs.The apparatus of claim 27, wherein the memory system is further configured to provide a status packet to the host, the status packet comprising WIDs for Persistent Write commands whose execution has been completed.The apparatus of claim 25, wherein the WID comprises a multi-bit identification of a Persistent Write and a valid bit. 
23The apparatus of claim 25, wherein the memory system is further configured to receive a group of two or more Persistent Write commands with a common WID, with the last Persistent Write command of the group having a Persist bit set to 1 and the remaining Persistent Write commands having respective Persist bits set to 0 and providing a Persistent Write completion indication for the last Persistent Write command.The apparatus of claim 24, wherein the memory system is further configured to receive a FLUSH command from the host, wherein the FLUSH command indicates that all prior writes buffered in volatile media are to be pushed to non-volatile or persistent memory.The apparatus of claim 32, wherein the memory system is further configured to provide a FLUSH completion indication upon completion of execution of the FLUSH command to the host.The apparatus of claim 24, wherein the memory system is further configured to receive one or more Persistent Write commands; maintain statuses of the one or more Persistent Write commands completed in a completed bitmap and statuses of the one or more Persistent Write commands pending in a pending bitmap; and upon request for status from the host, provide the completed bitmap if there is no uncorrectable error or the pending bitmap if there is an uncorrectable error.The apparatus of claim 24, wherein the memory system is a non-volatile dual in- line memory module configured to support Persistent Writes (NVDIMM-P).The apparatus of claim 24 integrated into a device selected from the group consisting of a set top box, a server, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, and a mobile phone.An apparatus comprising: 24 a host configured to provide a Persistent Write command and associated write data to a memory system, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non-volatile memory.The apparatus of claim 37, wherein the host is further configured to provide a write identification (WID) associated with the Persistent Write command to the memory system.The apparatus of claim 38, wherein the host is further configured to receive a Persistent Write completion indication along with the associated WID from the memory system upon successful completion of the Persistent Write.The apparatus of claim 39, wherein the host is further configured to receive from the memory system, two or more Persistent Write completion indications in a different order from an order in which corresponding two or more Persistent Write commands were sent to the memory system.The apparatus of claim 38, wherein the host is further configured to send to the memory system, a request to send status for one or more Persistent Writes along with associated WIDs.The apparatus of claim 41, wherein the host is further configured to receive a status packet from the memory system, the status packet comprising WIDs for Persistent Write commands whose execution has been completed.The apparatus of claim 42, wherein the WID comprises a multi-bit identification of a Persistent Write and a valid bit.The apparatus of claim 38, wherein the host is further configured to send t to the memory system, a group of two or more Persistent Write commands with a common WID, with the last Persistent Write command of the group having a Persist bit set to 1 and the remaining Persistent Writes having 
respective Persist bits set to 0 and receive 25 from the memory system, a Persistent Write completion indication for the last Persistent Write.The apparatus of claim 37, wherein the host is further configured to send a FLUSH command to the memory system, wherein the FLUSH command indicates that all prior writes buffered in volatile media are to be pushed to non-volatile or persistent memory by the memory system.The apparatus of claim 45, wherein the host is further configured to receive a FLUSH completion indication upon completion by the memory system of execution of the FLUSH command.The apparatus of claim 37, wherein the memory system is a non-volatile dual in- line memory module configured to support Persistent Writes (NVDIMM-P).The apparatus of claim 37 integrated into a device selected from the group consisting of a set top box, a server, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, and a mobile phone.An apparatus comprising: a means for storing data, comprising: means for receiving a Persistent Write command and associated write data from a host; and means for performing a Persistent Write of the write data to a non-volatile memory in the means for storing, based on the Persistent Write command.An apparatus comprising: a means for processing, comprising: means for providing a Persistent Write command and associated write data to a memory system; 26 wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non-volatile memory.A non-transitory computer-readable storage medium comprising code, which, when executed by a processor, causes the processor for performing persistent operations, the transitory computer-readable storage medium comprising code for receiving, at a memory system, a Persistent Write command and associated write data from a host; and code for performing a Persistent Write of the write data to a non-volatile memory in the memory system based on the Persistent Write command.A non-transitory computer-readable storage medium comprising code, which, when executed by a processor, causes the processor for performing persistent operations, the transitory computer-readable storage medium comprising code for providing, from a host to a memory system, a Persistent Write command and associated write data, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non-volatile memory. |
CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 1 PERSISTENT WRITES FOR NON-VOLATILE MEMORY Field of Disclosure [0001] Disclosed aspects are directed to memory systems. More particularly, exemplary aspects are directed to Persistent Write operations and protocols thereof for non-volatile memory. Background [0002] Storage class memory (SCM) generally refers to high capacity memory which may also have high performance. SCM may be used in applications such as servers or other processing systems wherein an operating set of data for a processor or central processing unit may be stored in the SCM, while the complete data set may be stored in a backing memory or hard disk drive (HDD). An important expectation of the SCM is persistence of writes, which means that information written to the SCM is not to be lost if, say, the server crashes or loses power. Conventional non-volatile memory, which may meet such expectations pertaining to persistence, may not, however, be able to meet the capacity and performance metrics that may be desired of SCM. Therefore, technologies such as Phase Change Memory (PCM), Spin-Transfer Torque Magnetic Random Access Memory (STT MRAM), Resistive RAM (ReRAM), etc., are becoming more popular in implementations of SCM. [0003] When using SCM, an application may use memory write operations to update corresponding persistent memory. For a write to the SCM to be persistent, the application requesting the write operation may expect explicit confirmation that the write operation has reached the persistent memory. By contrast, write operations to non-persistent memory (such as dynamic random access memory (DRAM) or other volatile memory) are conventionally considered to be completed or posted, from the perspective of the application once the write operation and associated data have been transferred to the memory and no explicit confirmation that the data has been written is required. Thus, for applications which use SCM with an expectation of persistence, high performance techniques which provide explicit confirmation of write operations to persistent memory are desirable, wherein the high performance techniques are also compatible with different data sizes in order to maximize efficiency. [0004] There are two types of conventional schemes for persistent memory operations. A first scheme assumes that the entire memory system (e.g., a dual in-line memory module CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 2 (DIMM) comprising a series of DRAM integrated circuits, as known in the art) is energy-backed. In this case, a write operation to an intermediate buffer on the receiving end of the DIMM may be sufficient to satisfy expectations of persistence. In one implementation, once a write operation across a channel interface between the application requesting the write operation and the DIMM is successfully completed, the write operation may be considered to be persistent. However, implementing such schemes may involve the use of energy storage devices such as super-capacitors or batteries which provide power/charge for flushing the intermediate buffers on the DIMM when a power-failure is detected. But such energy storage devices may not be available on all DIMMs, and further, even if available, they come at high costs. [0005] In a second scheme, all previous write operations may be flushed to persistent memory while the application waits for a completion status from the DIMM. However, this scheme may incur a significant performance cost. 
For example, in cases wherein the application may be requesting Persistent Writes of fine granularity to the DIMM but there may be other concurrent but independent write operations streaming to the DIMM, flushing all previous write operations to persistent memory pending a completion status may slow down not only the Persistent Write requests but also the concurrent write operations. [0006] Accordingly, there is a need in the art for high performance and high efficiency Persistent Write operations which support different granularities or sizes of the Persistent Writes, while avoiding the aforementioned drawbacks of conventional approaches. SUMMARY [0007] Exemplary aspects of the invention include systems and methods for persistent operations. A memory system, upon receiving a Persistent Write command and associated write data from a host, performs a Persistent Write of the write data to a non- volatile memory in the memory system based on the Persistent Write command. The memory system may also a receive a write identification (WID) associated with the Persistent Write command from the host and provide, upon successful completion of the Persistent Write, a Persistent Write completion indication along with the associated WID to the host. CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 3 [0008] For example, an exemplary aspect is directed to a method of performing persistent operations, the method comprising receiving, at a memory system, a Persistent Write command and associated write data from a host, and performing a Persistent Write of the write data to a non-volatile memory in the memory system based on the Persistent Write command. [0009] Another exemplary aspect is directed to a method of performing persistent operations, the method comprising providing, from a host to a memory system, a Persistent Write command and associated write data, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non- volatile memory. [0010] Another exemplary aspect is directed to an apparatus comprising a memory system configured to receive a Persistent Write command and associated write data from a host, and perform a Persistent Write of the write data to a non-volatile memory in the memory system based on the Persistent Write command. [0011] Another exemplary aspect is directed to an apparatus comprising a host configured to provide a Persistent Write command and associated write data to a memory system, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non-volatile memory. [0012] Another exemplary aspect is directed to an apparatus comprising a means for storing data, comprising means for receiving a Persistent Write command and associated write data from a host, and means for performing a Persistent Write of the write data to a non- volatile memory in the means for storing, based on the Persistent Write command. [0013] Another exemplary aspect is directed to an apparatus comprising a means for processing, comprising means for providing a Persistent Write command and associated write data to a memory system, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non- volatile memory. 
[0014] Another exemplary aspect is directed to a non-transitory computer- readable storage medium comprising code, which, when executed by a processor, causes the processor for performing persistent operations, the transitory computer-readable storage medium comprising code for receiving, at a memory system, a Persistent Write command and associated write data from a host, and code for performing a Persistent Write of the CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 4 write data to a non-volatile memory in the memory system based on the Persistent Write command. 100151 Another exemplary aspect is directed to a non-transitory computer- readable storage medium comprising code, which, when executed by a processor, causes the processor for performing persistent operations, the transitory computer-readable storage medium comprising code for providing, from a host to a memory system, a Persistent Write command and associated write data, wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non- volatile memory. BRIEF DESCRIPTION OF THE DRAWINGS [0016] The accompanying drawings are presented to aid in the description of aspects of the invention and are provided solely for illustration of the aspects and not limitation thereof [0017] FIG. 1 illustrates a processing system according to aspects of this disclosure [0018] FIGS. 2A-C illustrate transactions for handling Persistent Writes, according to various aspects of this disclosure. [0019] FIG. 3 illustrates an example encoding for a Persistent Write command according to this disclosure. [0020] FIGS. 4A-B illustrate sequences of events pertaining to exemplary methods of performing Persistent Writes, according to aspects of this disclosure. [0021] FIG. 5 depicts an exemplary computing device in which an aspect of the disclosure may be advantageously employed. DETAILED DESCRIPTION [0022] Aspects of the invention are disclosed in the following description and related drawings directed to specific aspects of the invention. Alternate aspects may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention. [0023] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the invention" does not require that all aspects of the invention include the discussed feature, advantage or mode of operation. [0024] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of aspects of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. 
It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof [0025] Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequence of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, "logic configured to" perform the described action. [0026] Exemplary aspects of this disclosure are directed to efficient and high performance Persistent Write operations for non-volatile memory such as non-volatile DIMM (or NVDIMM). Correspondingly, a persistent NVDIMM or NVDIMM-P is disclosed as one example memory system which supports Persistent Write operations according to exemplary aspects. A host device may be configured to provide exemplary requests/commands, e.g., for persistent operations, and corresponding data to an exemplary memory system, and the memory system may be configured to perform the requested Persistent Write operations and provide corresponding signaling to the host device as will be discussed in further detail in the following sections. CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 6 [0027] With reference now to FIG. 1, an exemplary processing system 100 is shown comprising host 120 and memory system 130. Host 120 can comprise one or more processing elements such as a central processing unit (CPU), digital signal processor (DSP), multimedia processor, system processor, graphics processing unit (GPU), modulator-demodulator (modem), applications processor, etc., even though they have not been explicitly illustrated. These processing elements may make requests for accessing memory system 130. A memory controller (not shown) may be present in host 120 to control these access requests. [0028] Memory system 130 may be a persistent memory, e.g., a NVDIMM-P according to this disclosure. Memory system 130 is shown to include input/output (I/O) block 132 and memory bank 134. Memory bank 134 may include Flash memory, DRAM, etc. [0029] Interconnect 110 is shown between host 120 and memory system 130, with data bus (DQ) 112, command and address bus (CA) 114, and response 116 separately identified. Host 120 may be able to provide commands and related addresses for memory access requests via CA 114 and send/receive data via DQ 112 (shown as a two-way bus). Response 116, although shown separately, may be configured as a part of CA 114 and may be implemented as a bidirectional bus in some cases. 
Response 116 may be used to provide information such as status of Persistent Writes in some example aspects. Various other buses/wires may also be present in interconnect 110 although these have not been separately identified. In some instances, memory system 130 may use separate buses for deterministic and non-deterministic responses, which will be explained further below. [0030] In an implementation wherein memory system 130 may be configured as an NVDIMM, with further support for a persistent NVDIMM (NVDIMM-P) configuration for at least some operations, host 120 may be able to provide one or more of the following exemplary commands to memory system 130, e.g., on CA 114: - READ command (e.g., with length encoding in multiples of 64B), along with a read identification (RID); - WRITE command (e.g., a conventional write command); - P-WRITE command (e.g., a Persistent Write command, along with a write identification (WID) for Persistent Writes, along with a persist bit that indicates when writes with a given WID need a Persistent Write complete (W PER) signal (e.g., to be provided on response 116) from memory system 130); CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 7 - ADRx command: Extended addressing; - SEND command (e.g., a command for memory system 130 to provide status of a read data request); - SEND Status command (e.g., a command for memory system 130 to provide error readout, WIDs, etc. related to persistent operations from memory system 130); - FLUSH command (to flush prior writes to be pushed to persistent memory) - NOP (no-operation); - Speculative Read command (e.g., used for reading cached memory); and - Other Caching commands, which may be implementation specific. [0031] As previously mentioned, separate buses may be provided in interconnect 110 for deterministic and non-deterministic responses from memory system 130 to host 120. Deterministic responses include metadata, error/parity information such as error control coding (ECC) pertaining to read data sent on DQ 112 to host 120, etc., which may be multiplexed on buses emanating from pins coupled to I/O 132, such as check bit pins. [0032] Among ECC bits, there may be media ECC specific to implementations of memory system 130 (e.g., as a NVDIMM) and channel specific ECC bits on DQ 112, for example, which may be standardized to enable cross-compatibility across various implementations. [0033] Metadata bits may include delayed RIDs for read requests sent out of program order (wherein, for in-order operations, the RID may be set to a "don't-care" status). Metadata bits may also include a write credit (WC), which refers to unused quota for write operations allocated to certain hosts or processing elements of host 120. Metadata bits may further include data poisoning bits for data from a user equipment as known in the art, and other user-defined bits. [0034] Non-deterministic responses according to this disclosure may pertain to persistent operations and may be sent through dedicated signaling such as response 116 from memory system 130 to host 120, and may indicate the following: - R RDY: a signal from memory system 130 to host 120 to indicate that read data is available; - Wr Per: a signal from memory system 130 to host 120 to indicate that a Persistent Write has completed; and CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 8 - ERROR: a signal from memory system 130 to host 120 to indicate error conditions such as CRC check, credit violation, media timeout, etc. 
[0035] For an implementation of memory system 130 as NVDIMM-P, the following protocol pins may be defined in I/O 132, for example. Using Pulse Width Modulation, the following pin and response signal configurations may be implemented. For example, in a double-data rate 5 (DDR5) implementation of NVDIMM-P, a single wire labeled as RSP n (one dedicated per sub-channel) may be used to provide the following signaling: 2 clock pulse low for R RDY, 4 clock pulse low for W PER and 6 clock pulse low for MESSAGE. Each low pulse may be followed by at least 2 clock high pulses. If a separate ERROR signal is needed then it may be defined as an 8 clock low pulse. [0036] For a DDR4 implementation: two pins may be used to address performance issues with a single pin (ODT1 and CKE1), wherein ODT1 represents 2 clock low pulse width for R RDY and 4 clock low for MESSAGE, and CKE1 represents 2 clock low pulse for W PER. Each low pulse may be followed by at least 2 clock high pulses, and if a separate ERROR signal is needed then it may be defined as a 6 clock low pulse on ODT1. [0037] In exemplary implementations of Persistent Writes, suitable combinations of hardware, software, firmware, etc. (e.g., applications, drivers, etc.) may be configured to enable notifications to be provided to host 120 from memory system 130 when one or more write requests from host 120 to memory system 130 achieve persistence. These notifications may be implementation specific, as explained below. [0038] When data to be written for a write operation reaches a power-fail protected buffer on a media controller (e.g., a power-fail protected memory of memory system 130), the write operation may be considered persistent during normal operations. However for certain infrequent cases or when media controller buffers are not power-fail protected, software will ensure that the writes are pushed all the way to NVM media [0039] For an implementation of memory system 130 as a NVDIMM-P, energy-backed DIMMs involve configurations wherein the aforementioned buffers are power-fail protected, which means that the NVDIMM-P Write command can be used even when persistence is required for the normal cases. Additionally, an NVDIMM-P Flush command, as defined herein, can be used to flush all writes in media controller buffers to the non-volatile memory. In the case of the Flush command, only writes that occurred prior to the Flush are guaranteed to be made persistent to non- volatile memory. CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 9 Software-implemented commands such as appropriate barrier operations may be used after the last write and before the Flush command is issued to ensure the correct order of the writes is maintained (e.g., when host 120 may be configured to send Persistent Write requests out of program order). [0040] Although non-energy-backed DIMMs may be less commonly used than the energy- backed DIMMs discussed above, the NVDIMM-P Persistent Write command may be used when persistence is required for the non-energy-backed DIMMs as well. A memory controller of host 120, as previously mentioned, may be configured to determine when to issue the Persistent Write command. In this case, memory system 130 is expected to provide explicit notification when the Persistent Write is completed, as will be discussed with reference to FIG. 2A. Further, an NVDIMM-P Flush command may also be used as before to flush all writes (even non-Persistent Writes) to the non-volatile memory. [0041] With reference now to FIG. 
2A, an example set of transactions is shown between host 120 and memory system 130 to illustrate aspects of the Persistent Write command. There are some features of the exemplary Persistent Write (Wr Per) command (or simply, "Persistent Write") which may be common to the above-described Read command from host 120. These include a common write identification (WID), which may be a multi-bit identifier to identify specific write instructions. An example set of WIDs 210 are shown, which may be up to 16-bits wide each, which includes one valid bit "Vld" (accordingly, up to 31 WIDs may be present in a 64-byte command packet sent on CA 114, for example). The Persistent Write command may also have a reserved field in the WID encoding for Flush command status returns which will be further explained in the following passages. [0042] In one aspect, host 120 may be configured to issue a Persistent Write only when host 120 has associated Persistent Write credits available. Persistent Write credits (similar to Read credits known in the art) may be determined during configuration and managed by host 120, and may reflect a number of outstanding Persistent Writes host 120 is allowed to issue. [0043] Once issued, host 120 may be configured to track outstanding Persistent Writes based on their respective WIDs 210. In FIG. 2A (with combined reference to FIG. 1), two Persistent Writes (P-Write 1 with a first address and WID, and P-Write 2 with a second address and WID) labeled 202a and 204a are shown, issued from host 120 to memory CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 system 130 on CA 114, along with respective data, write data 202b and write data 204b on DQ 112, for example. [0044] Memory system 130 is configured to issue a response "Wr Per" on response 116, for a particular Persistent Write, once all the data for that Persistent Write has been written to non-volatile memory in memory system 130. Wr Per 202c and Wr Per 204c are shown for respective Persistent Writes 202a and 204a. However, Wr Per 202c and Wr Per 204c are shown to be sent in a different order than Persistent Writes 202a and 204a were received by memory system 130 to illustrate that the responses need not be in program order or in the order in which Persistent Write requests are received from host 120. In an aspect, memory system 130 may assert the signal "Req" on response 116 along with the appropriate encoding for the message "Write Rdy" for the Wr Per responses. [0045] Further, host 120 may also be configured to issue a "Send-Status for WID" command designated with the reference numeral 206a, at any time, to determine status of its outstanding Persistent Writes. In response, memory system 130 may be configured to issue a status packet with WIDs of completed Persistent Writes, e.g., in a burst length of 8 or "BL8" transfer over DQ 112. [0046] As previously mentioned, up to 31 WIDs 210 may be packed in each 64B status packet, wherein for each WID 210 there may be 16-bits assigned for the 15-bit WID and the Valid bit, combined. Further, memory system 130 may also use the previously mentioned metadata field to return status for other writes. Host 120 may use the returned WIDs 210 in WID status packet 206b, for example, to terminate tracking of outstanding Persistent Writes. [0047] In some aspects, two or more Persistent Writes may be grouped. For example, a set of 64B Persistent Writes may be grouped for committing (or writing to non- volatile memory) in the case of non-energy backed DIMMs, for example. 
An example implementation may involve a block of Persistent Writes to be issued to memory system 130 from host 120, wherein memory system 130 may be configured to collect up to the block of Persistent Writes in a buffer and commit all of the block of Persistent Writes at once, which may lead to improved efficiency. It will be understood, however, that grouping Persistent Writes and committing them in a block is not required for energy- backed DIMMs wherein the buffers are power-fail protected. [0048] The following modifications may be made to the Persistent Write command to implement the group commits discussed above. Host 120 may pick a single WID (from CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 11 WIDs 210, for example) for a set of two or more writes. An additional bit termed as "Persist" may be added to the Persistent Write command when sent on CA 114, for example. The Persist bit may be used to determine when the entire group of Persistent Writes has been sent to memory system 130. [0049] For example, three 64B Persistent Writes may be grouped together as follows using WID = 5 in an illustrative example. A first Persistent Write (WID=5, Persist=0), second Persistent Write (WID=5, Persist=0), and third Persistent Write (WID=5, Persist=1) may be sent on CA 114. Memory system 130 may be configured to collect the Persistent Writes with WID=5 in a buffer while Persist bit is 0, and when the last Persistent Write arrives with Persist bit set to 1, initiate the processes of persistence committing. [0050] In one implementation, only a Persistent Write with a Persist bit set to 1 may be configured to get a Wr Per response from memory system 130 (e.g., only the third Persistent Write in the above example) for the group of Persistent Writes. This may reduce the traffic on response 116. [0051] In some aspects, Persistent Writes with different WIDs may be interleaved, e.g., on CA 114. Accordingly, grouping of Persistent Writes for persistent commit does not imply that the Persistent Writes in a group with the same WID are sent consecutively from host 120. [0052] In some aspects, to address race conditions which may arise in the Wr Per responses to Persistent Writes, a Write Group ID (WGID) status method may be used to group statuses of one or more Persistent Writes, using different bitmaps, such as a WGID- completed bitmap and WGID-pending bitmap, as will be explained with reference to FIG. 2B below. Considering the Persistent Writes with respective WIDs, memory system 130 may assert a respective Wr Per (referred to as "W PER" for this case) for each Persistent Write with Persist = 1 and for each Flush completion. Host 120 may use another command Send-W PER-Status after receiving one or more W PERs (wherein, host 120 may also maintain a count of the W PERs, referred to as W PER-Count). Memory system 130 may return WGID-Completed Status with completed bits only based on W PERs already asserted. In turn, host 120 may update a list for the WGID, or "WGID list" and decrement the W PER-Count based on number of completions. [0053] In some cases, an uncorrectable error (UE) may occur in the transactions, which will be discussed with reference to FIG. 2C. When there is a UE in the Send-W PER- Status, CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 12 host 120 may stop issuing new Persistent Writes/Flushes and Send-W PER-Status. Host 120 may send a status read command referred to as Xread-Status to memory system 130. 
Memory system 130 in turn collects all Persistent Writes prior to receiving the Xread-Status to return WGID-Pending Status to host 120 (the status packets cover W PER assertion before a RD RDY is received) and memory system 130 can continue issuing W PER during status reads. Host 120 may update the WGID List maintained by host 120 and decrement W PER-Count based on pending writes. Host 120 can then start to re-issue the Persistent Writes/Flushes. [0054] For energy-backed DIMM implementations of memory system 130, in a normal protocol, host 120 may issue commands Persistent Writes (with Persist = 0/1), and Flush, but memory system 130 will not assert W PER for each Persistent Write with Persist=1, but memory system 130 will assert W PER for the Flush command when the Flush completes. In the case of WGID implementations, the W PER handling by memory system 130 remains the same as the normal protocol only for Flushes. A WGID Completed Status bitmap provided by memory system 130 will have Flush WGID bits set when they complete. When there is a UE in Send-W PER-Status, the operation remains the same as the normal case, except that the WGID Pending Status is only applicable for Flushes. [0055] Credits for WGID implementations may be handled as follows. Separate Credits may be maintained for status writes or Xwrites and for Persistent Writes, wherein host 120 may determine how a pool of credits may be allocated by memory system 130. Incremental Credit Return may be provided by Read Metadata, wherein an encoding scheme to return Xwrite or Persistent Write credits may be used. X-Read-Status returns may be available for Xwrite and Persistent Write buffer slots based on credit allocation. [0056] In an implementation, e.g., which will be described with reference to FIGS. 2B-C, memory system 130 may complete Persistent Writes (referred to as PWRITEs herein) and Flushes in any order. To persist a specific PWRITE to media, host 120 may issue a PWRITE for a given WGID with Persist = 1 or issue a PWRITE with Persist = 0 followed by any of the Flush types. Memory system 130 may issue W PER for each completed PWRITE that has the Persist=1 in the command as well as every completed Flush. If multiple PWRITEs are grouped with a single WGID with Persist=1 only in the last PWRITE terminating the group, memory system 130 may issue W PER only when the entire group of PWRITEs complete. CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 13 [0057] Referring now to FIG. 2B, W PER handling will be described for a normal case. System 250 is shown with host 120 and memory system 130. If both Write-Credits and free WGID are available, then host 120 may issue one or more PWRITEs or FLUSH commands shown as 252a, 254a. Host 120 may track the issued PWRITEs or FLUSH commands 252a, 254a in a Host-WGID-Pending list (not shown, but may be maintained within host 120). [0058] Correspondingly, memory system 130 may accept and track the pending PWRITEs or FLUSH commands 252a, 254a in a DIMM-WGID-Pending list (not shown). Memory system 130 may execute the pending PWRITEs or FLUSH commands 252a, 254a and assert corresponding W PERs 254b and 252b (note, shown in reverse order of the received PWRITEs or FLUSH commands 252a, 254a) to host 120 after respective completion of each received command. [0059] Memory system 130 may collect the completed received commands PWRITEs or FLUSH 252a, 254a in WGID-Completed bitmap 260, to which various updates 260a, 260b, 260c, etc., are shown. 
Memory system 130 may also remove the completed PWRITEs or FLUSH commands 252a, 254a from the DIMM-WGID-Pending list. [0060] Host 120 may maintain a count of received W PER events, e.g., for receiving W PERs 254b, 252b, referred to as W PER-Count. Concurrently, host 120 may handle the received W PER events as follows: if the W PER-Count>0, then host 120 may issue a status request shown as Send-W PER Status 256a. After a predefined time, referred to as Tsend time, memory system 130 may send a snapshot of WGID-Completed bitmap 260 at that time instance (260b in this case) in the response shown as a WGID Status 256b to host 120. The snapshot may include completions for W PERs issued up to start of WGID Status 256b transfer to host 120. [0061] In some aspects, 1 completion at a minimum is logged in the snapshot. Memory system 130 clears bit positions in WGID-Completed bitmap 260 based on completions sent in WGID Status 256b, shown by the transition of WGID-Completed bitmap 260b to WGID-Completed bitmap 260c after the reset or clearing of the bit positions. [0062] Host 120 receives WGID-Status 256b and may extract information regarding the completed WGIDs. Correspondingly, host 120 may free up completed WGIDs from the Host-WGID-Pending list and decrement W PER-Count by the number of completions received in WGID-Completed bitmap 260 (e.g., decrement a count of 2 based on the two W PERs received as indicated by WGID-Completed bitmap 260b). Host 120 may CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 14 repeat the above process starting with monitoring W PER-Count and if the W PER- Count>0, then issuing another status request Send-W PER Status 256a to memory system 130. [0063] In exemplary implementations, host 120 and memory system 130 may continue to issue and execute new PWRITEs while W PER event processing is underway. Although the W PER-Count and pending lists such as HOST-WGID-Pending list, DIMM-WGID- Pending list, etc., have been discussed for an example implementation, alternative structures for achieving the above-described functionality may be used without deviating from the scope of this disclosure. [0064] Referring now to FIG. 2C, system 270 for handling channel Uncorrectable Error (UE) following SEND-W PER-Status from host 120 which results in loss of the completions sent in WGID Status 256b (explained in FIG. 2B above) from memory system 130 is shown. Further, it is noted that memory system 130 may have cleared the prior completions from WGID-Completed bitmap 260 in FIG. 2B. [0065] Accordingly, in a protocol for recovering from such errors in system 270, host 120 may initiate the recovery process by stopping issue of new PWRITE or FLUSH Commands (e.g., PWRITE-3 or FLUSH-3 272a is not issued, shown in dashed lines to indicate the timeline that they would have been issued had the error not occurred), while memory system 130 may continue to issue RD RDY and/or W PER events for completed reads or PWRITEs or FLUSH commands (e.g., W PER 254b is shown to be issued whereas 252b is not issued till after error recovery). Host 120 may also continue to issue SEND and update W PER-Count. 
[0066] After a pre-specified minimum time delay for a write enable signal, referred to as TWE Delay following the last PWRITE, host 120 issues XREAD-STATUS 274a to memory system 130, and memory system 130 may prepare a complete Status packet with a snapshot of WGID-Pending bitmap 280, which is another bitmap provided in addition to WGID-Completed bitmap 260 discussed above, wherein WGID-Pending bitmap 280 includes the status of all Pending PWRITEs/FLUSHes. Memory system 130 may assert RD RDY 276b, and host 120 may issue SEND 278a in response. [0067] Memory system 130 may then return the prepared Status packet 278b from which host 120 may extract and processes WGID-Pending bitmap 280 received in Status packet 278b. Host 120 may free appropriate WGIDs from its Host-WGID-Pending tracking list and decrement W PER-Count by the number of freed WGIDs. Host 120 may then CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 repeat the processes starting with issuing new PWRITE/FLUSH commands and process pending W PERs at this time as per previous page [0068] In some aspects, the Status Packet 278b is configured to indicate whether it has the WGID-Completed bitmap 260 or WGID-Pending Bitmap 280. W PER response status packets contain WGID-Completed Bitmap 260, while all other status packets contain WGID-Pending Bitmap 280. The TWE Delay time is configured to account for the time to get error notification from memory system 130 for the last PWRITE issued from host 120 and the wait time from UE detection before XREAD-STATUS 274a issued from host 120 may vary depending on when the last PWRITE was issued. [0069] With reference to FIG. 3, an example encoding for Persistent Writes, e.g., for a DDR5 implementation of memory system 130 is shown. The CA1 field is typically used to differentiate between lUI and 2UI commands in DDR5 technology and may be retained for NVDIMM-P implementations. CA 114, in some implementations may be configured at DDR speeds for DDR5 with only 7 pins, and in such cases, a separate command encoding may be used for Persistent Writes, e.g., as shown in FIG. 3. [0070] In FIG. 3, if Persist bit =1 this indicates that memory system 130 is to push all Persistent Writes associated with the respective WID to non-volatile memory. If there is a single 64B Persistent Write in a group, Persist bit may be set to 1. For Persistent Writes larger than 64B, all Persistent Writes may have the same WID, with the last Persistent Write having its Persist bit set to 1 while the remaining Persistent Writes have their Persist bits set to O. [0071] In addition to the above transactions, as introduced in the prior sections, another command may also be used in association with Persistent Writes, termed as the FLUSH command. The FLUSH command is configured to indicate to memory system 130 that all prior writes buffered (e.g., in non-persistent or volatile memory) are to be pushed to persistent memory, keeping in mind that future writes may not be similarly affected or pushed to persistent memory when using the FLUSH command. [0072] When execution of the FLUSH is completed, memory system 130 may once again assert Wr, Per, e.g., on response 116 to host 120, similar to the case of the Persistent Writes discussed above. 
[0073] Further, host 120 may also provide the command, Send-Status for WIDs (similar to Persistent Writes) to memory system 130 in the case of the FLUSH command, to which memory system 130 may respond with WID Status Packet with a unique reserved WID CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 16 to indicate completion of the FLUSH execution (e.g., WID with all bits set to 1 may be such a reserved WID used to indicate completion of FLUSH execution). [0074] In one implementation, only one outstanding FLUSH command from host 120 may be allowed. Thus, in this implementation, host 120 may have to wait for the FLUSH completion response from memory system 130 before sending another FLUSH command. In alternative implementations, FLUSH commands may be accompanied with corresponding FLUSH IDs (e.g., selected from reserved WID fields) and corresponding Response to Send-Status may cause memory system 130 to return FLUSH IDs whose FLUSH execution has been completed. [0075] It will be appreciated that aspects include various methods for performing the processes, functions and/or algorithms disclosed herein. For example, FIG. 4A illustrates an exemplary method 400 of performing persistent operations. [0076] Block 402 comprises receiving, at a memory system (e.g., memory system 130), a Persistent Write command (e.g., Persistent Write 202a) and associated write data (e.g., data 202b) from a host (e.g., host 120). [0077] Block 404 comprises performing a Persistent Write of the write data to a non-volatile memory (e.g., to a non-volatile memory in memory system 130) in the memory system based on the Persistent Write command. A write identification (WID) associated with the Persistent Write command may be received from the host and upon successful completion of the Persistent Write, a Persistent Write completion indication (Wr Per) along with the associated WID (e.g., Wr Per 202c) may be provided to the host. [0078] Similarly, FIG. 4B illustrates another exemplary method 450 of performing persistent operations. [0079] Block 452 comprises providing, from a host (e.g., host 120) to a memory system (e.g., memory system 130), a Persistent Write command (e.g., Persistent Write 202a) and associated write data (e.g., data 202b) wherein the Persistent Write command indicates to the memory system to perform a Persistent Write of the write data to a non- volatile memory. [0080] Block 454 comprises providing a write identification (WID) (e.g., WID 210) associated with the Persistent Write command to the memory system from the host. [0081] An example apparatus in which aspects of this disclosure may be utilized, will now be discussed in relation to FIG. 5. FIG. 5 shows a block diagram of computing device 500. Computing device 500 may correspond to an exemplary implementation of a CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 17 processing system 100 of FIG. 1, wherein processor 120' may be one of the processing elements of host 120. Processor 120' is exemplarily shown to be coupled to memory system 130 through interconnect 110, with further details of interconnect 110 omitted from this view for the sake of clarity. Processor 120', interconnect 110, and memory system 130 may be configured to perform methods 400-450 as discussed above. It will be understood that other memory configurations known in the art such as involving one or more levels of caches, although not shown, may be present in computing device 500. [0082] FIG. 
5 also shows display controller 526 that is coupled to processor 120' and to display 528. In some cases, computing device 500 may be used for wireless communication and FIG. 5 also shows optional blocks in dashed lines, such as coder/decoder (CODEC) 534 (e.g., an audio and/or voice CODEC) coupled to processor 120' and speaker 536 and microphone 538 can be coupled to CODEC 534; and wireless antenna 542 coupled to wireless controller 540 which is coupled to processor 120'. Where one or more of these optional blocks are present, in a particular aspect, processor 120', display controller 526, memory system 130, and wireless controller 540 are included in a system-in-package or system-on-chip device 522. [0083] Accordingly, a particular aspect, input device 530 and power supply 544 are coupled to the system-on-chip device 522. Moreover, in a particular aspect, as illustrated in FIG. 5, where one or more optional blocks are present, display 528, input device 530, speaker 536, microphone 538, wireless antenna 542, and power supply 544 are external to the system-on-chip device 522. However, each of display 528, input device 530, speaker 536, microphone 538, wireless antenna 542, and power supply 544 can be coupled to a component of the system-on-chip device 522, such as an interface or a controller. [0084] It should be noted that although FIG. 5 generally depicts a computing device, processor 120' and memory system 130, may also be integrated into a set top box, a server, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, a mobile phone, or other similar devices. [0085] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, CA 03073686 2020-02-21 WO 2019/055164 PCT/US2018/046590 18 electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof [0086] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. [0087] The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. 
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. [0088] Accordingly, an aspect of the invention can include a computer-readable media embodying a method of performing Persistent Writes. Accordingly, the invention is not limited to illustrated examples and any means for performing the functionality described herein are included in aspects of the invention. [0089] While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
In described examples, a level shifter (200) includes a signal generator (206) that generates differential signals on a first output and a second output. A first capacitor (C21) is coupled between the first output and a first node (N21), and a second capacitor (C22) is coupled between the second output and a second node (N22). A third capacitor (C23) is coupled between the first node (N21) and a first voltage potential (VT). The capacitance of the third capacitor is variable (C23). A fourth capacitor (C24) is coupled between the second node (N22) and the first voltage potential (VT). The capacitance of the fourth capacitor (C24) is variable. |
CLAIMSWhat is claimed is:1. A level shifter comprising:a signal generator generating differential signals on a first output and a second output; a first capacitor coupled between the first output and a first node;a second capacitor coupled between the second output and a second node;a third capacitor coupled between the first node and a first voltage potential, wherein the capacitance of the third capacitor is variable; anda fourth capacitor coupled between the second node and the first voltage potential, wherein the capacitance of the fourth capacitor is variable.2. The level shifter of claim 1, further comprising a voltage potential selectively coupled to the first node and selectively coupled to the second node.3. The level shifter of claim 2, further comprising a first differential amplifier having inputs coupled to the first node and the second node, the output of the first differential amplifier being coupled to the output of the level shifter, wherein the voltage potential is the common mode voltage of the first differential amplifier.4. The level shifter of claim 1, further comprising a second differential amplifier having a first input coupled to the first node and a second input coupled to the second node, wherein the capacitance values of at least one of the third capacitor and the fourth capacitor are set in response to the output of the second differential amplifier.5. The level shifter of claim 4, further comprising a processor coupled to the output of the second differential amplifier, the processor providing instructions to set the capacitance values of the third and fourth capacitor in response to a signal output by the second differential amplifier.6. The level shifter of claim 5, wherein the processor couples and decouples a voltage potential to the first node and the second node.7. The level shifter of claim 6, wherein the processor measures the common mode transient immunity between the first node and the second node after a cycle of coupling the second voltage potential to the first node and the second node followed by decoupling the second voltage potential from the first node and the second node.8. The level shifter of claim 1, further comprising at least one comparator coupled to the output of the level shifter.9. The level shifter of claim 8, wherein amplitudes of signals generated by the signal generator are variable and wherein threshold levels of the at least one comparator are set in response to the amplitudes of signals generated by the signal generator.10. A method of calibrating a level shifter, the method comprising:coupling a first node to a first voltage potential, the first node being coupled to a first capacitor coupled to a signal generator, a second capacitor coupled to a second voltage potential, and a first input to a first differential amplifier;coupling a second node to the first voltage potential, the second node being coupled to a third capacitor coupled to the signal generator, a fourth capacitor coupled to the second voltage potential, and a second input to the first differential amplifier;decoupling the first voltage from the first node and the second node;sweeping a voltage across the level shifter to generate a differential voltage between the first node and the second node;measuring the voltage difference between the first node and the second node; and adjusting the capacitance value of at least one of the second capacitor and the fourth capacitor in response to the measuring.11. 
The method of claim 10, wherein inputs of a second differential amplifier are coupled to the first node and the second node, and wherein the measuring comprises measuring the output voltage of the second differential amplifier.12. The method of claim 10, wherein the adjusting comprises adjusting the capacitance value of at least one of the second capacitor and the fourth capacitor to equalize the voltages between the first node and the second node.13. The method of claim 12, wherein equalizing the voltages on the first node and the second node comprises making the voltage on the first node within a predetermined value of the voltage on the second node.14. The method of claim 10, wherein the first voltage potential is a function of the common-mode rejection ratio of the first differential amplifier.15. The method of claim 10, further wherein adjusting the capacitance value of at least one of the second capacitor and the fourth capacitor in response to the measuring indicating that the voltage difference between the first node and the second node exceeds a predetermined value.16. A method of calibrating a level shifter, the method comprising:generating a signal, the signal being a control signal for the level shifter, the signal having an amplitude less that the control signal used during operation of the level shifter;transmitting the signal through a capacitive voltage divider;transmitting the signal into a comparator; andadjusting the threshold of the comparator to where the signal is detected above a noise margin.17. The method of claim 16, wherein generating a signal comprises generating a signal having half the amplitude of the control signal used during operation of the level shifter.18. The method of claim 16, wherein generating a signal comprises generating a differential signal.19. The method of claim 16 further comprising transmitting the signal through at least one differential amplifier coupled between the voltage divider and the comparator. |
LEVEL SHIFTER AND METHOD OF CALIBRATIONBACKGROUND[0001] Voltage translators or level shifters are devices that resolve mixed voltage incompatibility between different parts of a system that operate in multiple voltage domains. They are common in many complex electronic systems, especially when interfacing with legacy devices. With the advent of wide-bandgap semiconductors, the switching speeds of level shifters are increasing. However, conventional level shifters do not have the required high common-mode transient immunity (CMTI) with propagation times that are fast enough to handle these high switching speeds.SUMMARY[0002] In described examples, a level shifter includes a signal generator that generates differential signals on a first output and a second output. A first capacitor is coupled between the first output and a first node and a second capacitor is coupled between the second output and a second node. A third capacitor is coupled between the first node and a first voltage potential. The capacitance of the third capacitor is variable. A fourth capacitor is coupled between the second node and the first voltage potential. The capacitance of the fourth capacitor is variable. BRIEF DESCRIPTION OF THE DRAWINGS[0003] FIG. 1 is a schematic diagram of a portion of a switching power supply.[0004] FIG. 2 is a schematic diagram of an example of a level shifter of the power supply ofFIG. 1 that is tunable so as to increase common mode transient immunity.[0005] FIG. 3 is an example of a signal generated by the pulse generator of FIG. 2 in response to an input voltage.[0006] FIG. 4 is an example of a signal at the output of an amplifier of FIG. 2 in response to a pulse generated by the pulse generator of FIG. 2.[0007] FIG. 5 is a detailed schematic diagram of an example of the first differential amplifier of FIG. 2.[0008] FIG. 6 is a flow diagram describing a method of calibrating a level shifter, such as the level shifter of FIG. 2 DETAILED DESCRIPTION OF EXAMPLE EMB ODFMENT S[0009] Level shifters with high common-mode transient immunity (CMTI) and low propagation delay are described herein. The high CMTI enables the level shifters to operate at high switching frequencies in applications such as driving high voltage field-effect transistors (FETs). In some examples, the level shifters drive high-side signal translations for FET drivers of wide-bandgap power FETs in high voltage switching power supplies. Such wide-bandgap FETs can include gallium nitride and silicon carbide (GaN and SiC) power FETs. With the emergence of such wide-bandgap semiconductors, switching speeds of switching power supplies are increasing, which is creating greater demands on the gate drivers and level shifters within the switching power supplies. Traditional switching power supplies reduce switching losses by implementing wide-bandgap drivers having slew-rates that are higher than current level shifters can support without errors.[0010] FIG. 1 is a schematic diagram of a portion of a switching power supply 100. The power supply 100 includes a controller 104 that is coupled to a switching portion 106, whereby the controller 104 drives a FET Ql 1 and a FET Q12 in the switching portion 106. The FET Ql 1 is sometimes referred to as a high-side FET and the FET Q12 is sometimes referred to as a low-side FET. In some examples, the FETs Ql l and Q12 are wide-bandgap GaN FETs with drain/source breakdown voltages of approximately 600V. 
The FETs Ql l and Q 12 are examples of switches that may be implemented in the switching portion 106. Other switching devices may be implemented in the power supply 100. The power supply 100 enables a high voltage swing between a transmitter (not shown) and a receiver (not shown).[0011] The drain of FET Ql l is coupled to a voltage source VI 1, which is a high voltage source and in some examples the voltage source VI 1 has a voltage potential between zero and 600V. The source of FET Q12 is coupled to a voltage potential, which in the example of FIG. 1, is a ground.[0012] The controller 104 includes control circuitry 110 that may receive and output a plurality of signals and voltages to drive the switching portion 106. In the example controller 104, the control circuitry 110 receives a control signal at a node Ni l . In some examples, the controls signals include a pulse width modulated (PWM) signal, which controls or sets the timing of the switching portion 106. In other examples, the control circuitry 110 may have other inputs coupled thereto. The control circuitry 110 has an output 112 coupled to the input of a level shifter 120 and an output 124 coupled to the input of a driver 126 that drives the FET Q12.[0013] The level shifter 120 enables the controller 104 to operate the FET Ql l at a high voltage when the controller 104 itself is operated at a much lower voltage. The level shifter 120 has an output 130 that is coupled to a driver or amplifier 132, which controls the gate voltage of the FET Ql l . Likewise, the driver 126 controls the gate voltage of the FET Q12. The driver 126 operates at a voltage VDD, such as 5V, relative to a voltage VSS, which may be ground. The level shifter 120 and the driver 132 may operate at a small voltage, but their ground reference VHS may be much higher than the VSS potential. Accordingly, the voltage difference between the ground reference VHs and a supply voltage VHBmay be VDD or 5 V.[0014] When the FET Q12 turns off, the FET Ql l turns on and the voltage VHS rapidly slews up to the voltage VI 1. The output of the level shifter 120 also slews up with the voltage VHS, which produces a very fast common-mode transient for the level shifter 120. High speed switching power supplies require a driver with very good common-mode transient immunity (CMTI) to withstand the high slew rates of wide-bandgap devices such as the FETs Ql l and Q12. Many switching power supplies further require low propagation time and propagation matching to support high switching frequencies. Furthermore, many switching power supplies require level shifters with low quiescent current consumption. Level shifters are described herein that have high CMTI, operate at high switching frequencies, and draw low quiescent current.[0015] FIG. 2 is a schematic diagram of an example of a level shifter 200 that is tunable to increase CMTI. The level shifter 200 is coupled to an input that may be coupled to the node Nl 1 of FIG. 1. The input 202 is coupled to a pulse generator 206 that converts the input signal at the input 202 to a plurality of differential pulses that are output on nodes Q and Q' . The signal output on the node Q is referred to as the signal V21 and the signal output on the node Q' is referred to as V22. In other examples, signal generation devices other than the pulse generator 206 may be implemented to generate differential signals representative of the input signal on node Ni l .[0016] The nodes Q and Q' are coupled to a plurality of drivers 208. 
The last of the drivers 208 are a driver 210 and a driver 212 that are coupled to or powered by a variable voltage source 216. The variable voltage source 216 sets the amplitude of the signals V23 and V24 at the output of the drivers 210 and 212. As described in greater detail below, the variable voltage source 216 varies the amplitudes of the signals V23 and V24 to calibrate the output amplitude of the level shifter 200. In some examples, the plurality of drivers 208 are implemented with a single driver coupled to the Q node and a single driver coupled to the Q' node.[0017] A capacitor C21 is coupled between the driver 210 and a node N21 and a capacitor C22 is coupled between the driver 212 and a node N22. The capacitors C21 and C22 isolate the voltage potential VHS from low voltage circuitry, such as the drivers 208 and the pulse generator 206. A capacitor C23 is coupled between the node N21 and a voltage termination Vj. A capacitor C24 is coupled between the node N22 and the voltage termination Vj. The voltage termination VTmay be a plurality of different voltages as described herein. The capacitors C23 and C24 are variable or able to be trimmed to improve the CMTI at nodes N21 and N22 as described in greater detail below. In some examples, the capacitance values of the capacitors C23 and C24 are greater than the capacitance values of the capacitors C21 and C22. The capacitors C21 and C23 form a voltage divider at node N21 and capacitors C22 and C24 form a voltage divider at node N22. The signals V23 and V24 are usually high frequency signals or contain high frequency components, such as step functions, which are able to pass through capacitors C21 and C22 and become a differential signal at nodes N21 and N22. Common-mode signals are generated on N21 and N22 in response to CMTI across the level shifter 200. During calibration, the ratio of C21 to C23 is closely matched to the ratio of C22 to C24 to minimize the differential output produced on nodes N21 and N22 in response to CMTI. If the ratios are not closely matched, transient common mode voltages may cause delays and/or errors in processing of the signals V23 and V24 as described herein.[0018] Differential inputs of a differential amplifier 220 are coupled to the nodes N21 and N22. The differential amplifier 220 processes the signals V21 and V22 as described herein. Differential inputs of another differential amplifier 222 are also coupled to the nodes N21 and N22. The differential amplifier 222 measures the differential transient response on the nodes N21 and N22 during a transient test and generates a signal VTEST, which is proportional to the differential transient response. The signal VTESTis input to a processor 224 that trims the capacitance values of the capacitors C23 and C24 in response to the signal VTEST- [0019] A resistor R21 couples a voltage source VCMto the node N21 by way of a switch SW21 and a resistor R22 couples the voltage source VCMto the node N22 by way of the switch SW21. The state of the switch SW21 is set by the processor 224 and the switch SW21 serves to charge the nodes N21 and N22 to the voltage VCM, which is the common mode voltage of the differential amplifier 220. The charges on the nodes N21 and N22 are analyzed by the processor 224 to determine the proper capacitance values of the capacitors C23 and C24 to maximize CMTI as described herein.[0020] In the example of FIG. 2, the output of the differential amplifier 220 is coupled to the input of a second differential amplifier 230. In the example of FIG. 
2, the differential amplifier 220 has a very good high-frequency common mode rejection ratio (CMRR). For example, a two volt swing over a two nanosecond period may produce a maximum 2mV differential swing on the output of the differential amplifier 220. The CMRR of the differential amplifier 220 is a factor that limits the CMTI of the level shifter 200. The differential amplifier 220 is sometimes referred to herein as the first stage. Common-mode voltage swings on the nodes N21 and N22 have little effect on the gain of the differential amplifier 220. The differential amplifier 230 has moderate gain, which may be less than the gain of the differential amplifier 220. Furthermore, the differential amplifier 230 has low output impedance to drive large loads of components coupled to the outputs of the differential amplifier 230 as described herein.[0021] The differential output of the differential amplifier 230 is coupled to a first RC network, which in turn is coupled to the inputs of a comparator 234. The differential output of the differential amplifier 230 is also coupled to a second RC network, which in turn is coupled to the inputs of a comparator 236. A high output of the differential amplifier 230 is coupled to capacitors C25 and C26 and a low output of the differential amplifier 230 is coupled to capacitors C27 and C28. The capacitors C25 and C27 are coupled to inputs of the comparator 234 and capacitors C26 and C28 are coupled to inputs of the comparator 236. Resistors R23 and R24 couple the inputs of the comparator 234 to a voltage source V25 and resistors R25 and R26 coupled the inputs of the comparator 236 to a voltage source V26. The voltage source V25 sets a threshold for triggering voltage transitions on the output of the comparator 234 and the voltage source V26 sets a threshold for triggering voltage transitions on the output of the comparator 236. The outputs of the comparators 234 and 236 are coupled to the input of a latch 240 that, in the example of FIG. 2, includes two NA D gates. The output of the latch 240 is coupled to the gate of transistor Ql 1. In some examples, an amplifier or driver (not shown) is coupled between the latch 240 and the gate of transistor Ql 1. [0022] FIG. 3 is an example of the signal V21, FIG. 2, generated by the pulse generator 206 in response to the signal received on node Ni l . The signal V22 is the complement of the signal V21. The signal V21 shown in FIG. 3 is an example of a plurality of different signal types that may be generated by the pulse generator 206. In the example of FIG. 3, the pulse generator 206 generates either positive or negative pulses on the rising and falling edges of the input signal at node Nl 1. The pulse generator 206 further generates pulses to keep the level shifter 200 active. The input signal has a rising edge 300, which causes the pulse generator 206 to generate a pulse 302 that has a predetermined pulse width t31. In the example of FIG. 3, the predetermined pulse width t31 is 3ns. The pulse 302 is referenced by the letter M to denote that it is a main pulse generated at the beginning of a transition in the input signal. Insurance pulses, referenced as the letter I, are transmitted after a predetermined time t32 from the main pulses. In the example of FIG. 3, an insurance pulse 306 is shown being transmitted after a predetermined time t32 from the main pulse 302. In the example of FIG. 3, the predetermined time t32 between the main pulse and the insurance pulse is 20ns. 
If the input signal has not transitioned after a predetermined time t33, the pulse generator 206 generates a keep pulse, referenced by the letter K. In the example of FIG. 3, the pulse generator 206 has generated a keep pulse 310 at a time t33 from the generation of the insurance pulse 312.[0023] The pulses in the signals V23 and V24 conduct through the capacitors C21 and C22, respectively, and are terminated at the capacitors C23 and C24, which may have capacitance values substantially larger than the capacitance values of the capacitors C21 and C22. The differences in capacitance values form capacitive voltage dividers between the outputs of the drivers 210, 212 and the nodes N21, N22. In the examples described herein, the voltage dividers have a large ratio, such as 330V/V. The ratio is chosen such that the full voltage swing of the input relative to the output is equal to at least half of the overall common-mode range of the differential amplifier 220.[0024] As described above, the capacitors C23 and C24 are trimmable in order to trim out the common-mode to differential conversion which would otherwise occur due to mismatched ratios in the capacitance values of C21/C23 and C22/C24 as described herein. Trimming the capacitors C23 and C24 may be performed after assembly of the level shifter 200, such as during testing. The input signal on node Nl 1 is inactive during testing, so the pulse generator 206 does not generate any pulses. The processor 224 closes switch SW21, which charges the capacitors C21, C22, C23, and C24 by way of the common mode voltage VCM- A high impedance situation is then created by the processor 224 opening switch SW21, which allows any differential errors on the nodes N21 and N22 to be held there for readout through the amplifier 222. The VHS voltage is then swept to a high voltage relative to the input of the level shifter. Then, any differential errors related to capacitor mismatch are held on the capacitors C21, C22, C23, and C24 and read by the processor 224 via the differential amplifier 222. If the ratio of the capacitance values of the capacitors C21 to C23 is equal to the ratio of the capacitance values of the capacitors C22 to C24, then the voltage on node N21 will be equal to the voltage on node N22. The amplifier 222 measures the difference between the voltages on nodes N21 and N22 and outputs the difference to the processor 224. In the example described herein, the amplifier 222 has a gain of twenty, but other gain values may be implemented as required by specific applications. The processor 224 then determines the values of the capacitors C23 and C24. In some examples, the processor 224 is separate from the level shifter 220.[0025] As described above, mismatch in the ratios of the capacitances of the capacitors C21, C22, C23, and C24 creates a common-mode to differential conversion and trimming the capacitors C23 and C24 improves the common-mode to differential conversion performance. The trimming process is converted into a low frequency trim by disconnecting the common voltage source VCMfrom resistors R21 and R22, which sets DC voltages on the capacitors C23 and C24. The DC voltages on the capacitors C23 and C24 are the voltage on the nodes N21 and N22, respectively. Then, the common-mode is swept and any errors created by the mismatch are left on the capacitors C23 and C24 and are measured via the amplifier 222. 
Sweeping the common mode includes moving the high-voltage side of the level shifter 200 from 0V where it was when the switch SW21 was open to a high voltage. The high voltage develops across the C21 and C22. The measuring may be accomplished over a long period due to a slow time constant associated with the capacitors C23 and C24. The amplifier 222 can be double-sampled to eliminate any offset error in the amplifier itself. For example, the output of the amplifier 222 may be sampled before SW21 is opened and both inputs are still at the same voltage potential, and then sampled again after the error on N21 and N22 have settled. The difference of the two readings gives an error which is independent of the offset of the amplifier 222.[0026] As described above, the output signal or voltage of the amplifier 222 is received by the processor 224. The processor 224 then analyzes the voltage output by the amplifier 222 to determine which of the capacitors C23 and/or C24 needs to be trimmed and how much trimming needs to occur so the above-described ratios are equal. The process of measuring the common-mode to differential conversion may be repeated after an initial trimming to be sure that the capacitors C23 and C24 have been trimmed correctly.[0027] FIG. 4 is a graph showing an example signal 400 at the output of the amplifier 230 in response to a pulse generated by the pulse generator 206, FIG. 2. The graph shows a noise margin between a positive comparison threshold and a negative comparison threshold where the signal 400 is not detectable. As shown in FIG. 4, a CMTI induced signal exists in the signal 400, but it is within the noise margin and will not induce errors. The signal 400 exceeds the positive comparison threshold and enters a signal margin at a time 402. The signal amplitude of the signal 400 determines how far in excess of the noise margin the signal 400 extends. If the signal amplitude is too low, the signal 400 will not be detected above the noise margin.[0028] The level shifter 200 provides the ability to set the threshold level of the comparators 234 and 236 to achieve a signal, such as the signal 400 of FIG. 4 with appropriate signal and noise margins. In the examples described herein, the signal amplitude is set to twice that of the noise margin. The process includes adjusting the output of the drivers 210 and 212 to lower voltages. In the example of FIG. 2, the output voltages of the drivers are set to half of their normal operating voltage by way of the variable voltage source 216 supplying a lower or half voltage to the drivers 210 and 212. The voltages V25 and V26 are then adjusted to where the signal 400 just exceeds the noise margin. The output of the comparators 234, 236 or the output of the latch 240 may be monitored to determine if the signal 400 has exceeded the noise margin. The processor 224 then instructs the variable voltage source 216 to output the full voltage to the drivers 210 and 212, which returns the output of the drivers 210 and 212 to their full voltages. The signal amplitude 400 is then as shown in FIG. 4.[0029] FIG. 5 is a schematic diagram of an example of the differential amplifier 220 of FIG. 2. The first stage of the amplifier 220 provides benefits that improve the operation of the level shifter 200, FIG. 2. The capacitors C23 and C24 may be terminated with a voltage VDD, ground, or a voltage in between ground and VDD. 
The amplifier 220 has a very high common-mode rejection ratio (CMRR), which is achieved by taking advantage of the inputs and nodes N21 and N22, which can be loaded with high capacitance without affecting the circuit amplifier 220. [0030] FIG. 6 is a flow diagram describing a method of calibrating a level shifter, such as the level shifter 200 of FIG. 2. The method commences at step 600 with coupling a first node to a first voltage potential. The first node is coupled to a first capacitor that is coupled to a signal generator, a second capacitor coupled to a second voltage potential, and a first input to a first differential amplifier. Step 602 includes coupling a second node to the first voltage potential. The second node is coupled to a third capacitor that is coupled to the signal generator, a fourth capacitor coupled to the second voltage potential, and a second input to the first differential amplifier. Step 604 includes decoupling the first voltage from the first node and the second node. Step 606 includes sweeping a voltage across the level shifter to generate a differential voltage between the first node and the second node. Step 608 includes measuring the voltage difference between the first node and the second node. Step 610 includes adjusting the capacitance value of at least one of the second capacitor and the fourth capacitor in response to the measuring.[0031] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. |
Dies are divided from a wafer and tested and sorted into acceptable and non acceptable devices. The acceptable dies are arranged in a grid array and moulded in a resin layer to form a reconstituted wafer comprising only acceptable dies. Multiple reconstituted wafers are laminated together to form a stacked assembly and electrical redistribution layers can be formed over the reconstituted wafer surface. Through mould vias (TMVs), formed by drilling holes and refilling the holes with metal, are used to electrically interconnect the dies in a vertical direction after the reconstituted wafers have been stacked. Individual devices comprising stacked dies may be formed by subdividing the laminated stack of reconstituted wafers. |
Claims 1. A meLhod for making a sLacked semioonducLor devioe comprising: forming rims on a first die and a seoond die, the rims extending laterally away from the first and seoond dice; stacking the second die over the first die; and drilling one or more vias Lhrough ihe rims after sLacking, Lhe one or more vias extending between the first and second dice. 2. The method of claim 1 further comprising filling the one or more iias wiLh a conducLive maLeriai Lo elecLricaiiy inLerconnecL Lhe firsL and second dice. 3. The method of claim 1, wherein forming rims includes forming a dielectric portion over the first die and the second die, the rims formed with the dielectric portion. 4. The method of ciaim 3, wherein forming the dielectric portion includes molding resin around the first die and the second die, the rims formed with the resin. 5. The method of ciaim 1 comprising: forming a first reconstituted dice panel inciuding a first plurality of dice moided in a panef frame, the first piurality of dice inciuding the first die, and forming a second reconstituted dice panei inciuding a second pluraliLy of dice moided in anoLher panel frame, Lhe second pluraliLy of dice including the second die; and forming rims includes surrounding a periphery of the dice in the first and second reconstituted dice panels with a dieiectric materiai. 6. The method of ciaim 5 comprising sorting the dice in the first plurality of dice and second piurafity of dice to ensure oniy operational dice are used to form the first and second reconstituted dice paneis. 7. The method of claim 6 comprising separating individual stacks of flrsL and second adhered dice from ihe firsi and second reconsLiiuLed dice panels. 8. The method of claim 1, wherein drilling the one or more vias consists of one or more of laser drilling, mechanical drilling or chemical etching. 9. The method of claim 1, wherein drilling the one or more vias is continuous through the first and second dice. 10. The meLhod of claim 1 comprising forming one or more redisLribuLion layers of conducLive Lraces over one or more of Lhe first or second dice or the rims, the one or more vias in communication with the conductive traces at the rims. 11. The method of claim 1, wherein stacking the first die over the second die includes staggering the second die relative to the first die to expose at least one bond pad of the second die. 12. The method of claim 11, wherein drilling the one or more vias includes drilling at least one via through the rim of the first die, the at least one via extending to the at least one bond pad of the second die. 13. A method for making a stacked semiconductor device comprising: sorting dice into a plurality of operational dice, the plurality of operaLional dice LesLed for operabiliLy; and forming at least a first reconstituted dice panel including: arranging the sorted plurality of operational dice within a panel frame, and molding a resin around Lhe pluraliLy of operaLional dice within the panel frame to form the first reconstituted dice panel, rims formed with the resin extend laterally from each of the plurality operational dice. 14. The meLhod of claim 13 comprising repeaLing arranging and molding Lo form a second reconsLiLuLed dice panel, rims exLend laLerally away from each die of the plurality of operational dice of the second reconsiiLuLed dice panel. 15. 
The meLhod of claim 14 comprising coupling Lhe firsL reconstituted dice panel to the second reconstituted dice panel; and drilling one or more vias in the coupled first and second reconstituted dice panels, the one or more vias within the rims of the piuraliby of operabional dice and Lhe one or more vias exbend hebween the first and second reconstituted dice panels. 16. The method of claim 15, wherein coupling the first reconstituted dice panel Lo Lhe second reoonsLiLuLed dice panel inoludes aligning Lhe pluraliLies of operaLional dice of each of Lhe firsL and second 17. The method of claim 15 comprising separating the first and second reconstituted dice panels into a plurality of multi-layered packages, each of the multi-layered packages including: at least two dice of the plurality of operational dice of the first and second reconstituted dice panels, and at least one via of the one or more vias.18. The method of claim 15, wherein drilling one or more vias in the coupled first and second reconstituted dice panels includes drilling one or more vias through the rims of the plurality of operational dice.19. The meLhod of claim 15 comprising filling Lhe one or more vias with a conductive material to electrically couple the first and second 20. The meLhod of claim 13, wherein forming aL leasL Lhe firsL reconstituted dice panel includes forming one or more redistribution layers of conductive traces over the plurality of operational dice and the respective rims, the one or more vias in communication with the conductive traces at the rims.21. The method of claim 13, wherein arranging the sorted plurality of operaLional dice wiLhin Lhe panel frame includes arranging Lhe sorLed piuraliLy of operaLional dice inLo one or more sLaggered sLacks of dice wiLhin Lhe panel frame, each of Lhe one or more sLaggered sLacks of dice including two or more dice and at least one of the two or more dice is staggered relative to an adjacent die.22. The meLhod of claim 21, wherein molding Lhe resin around Lhe plurality of operation dice includes molding the resin around each of the one or more staggered stacks of dice.23. A stacked semiconductor device comprising: a firsL die; a second die sLacked over Lhe firsL die; rims extending laterally away from each of the first and second dice; a first redistribution layer extending over the first die and the rim of the first die; and one or more vias extending through at least one of the respective rims, the one or more vias in communication with the first and second dice through the rims.24. The stacked semiconductor device of claim 23, wherein the respective rims are molded resin rims molded around the respective first and second dice, the one or more vias extend through at least one of the molded resin rims.25. The stacked semiconductor device of claim 23 comprising dielecLric porLions formed over each of Lhe firsL and second dice, Lhe dielectric portions including the one or more rims, and the one or more vias extend through the dielectric portions.26. The sLacked semioonducLor device of claim 23, where±n Lhe one or more vias are laterally spaced from the first and second dice.27. The semiconductor device of claim 23 comprising a second redistribution layer extending over the second die and the rim of the second die.28. 
The stacked semiccnductor device of claim 27, the first and second redisLribuiion layers provide a fan-ouL configuraLion of conduciive Lraces exiending over and beyond respeoiive fooLprinLs of Lhe firsL and second dice, and Lhe one or more vias are in communication with the first and second redistribution layers.29. The stacked semiconductor device of claim 23, wherein the vias are drilled vias formed in aL leasL one of Lhe respeciive rims afber stacking of the second die over the first die.30. The stacked semiconductor device of claim 23 comprising a pluraliLy of dice including Lhe firsL and second dice, rims exLend laLerally from each of die pluraliLy of dice, Lhe piuraliLy of dice are in a stacked configuration, and the one or more vias extend through at least two of the respective rims of the plurality of dice.31. The stacked semiconductor device of claim 23, wherein the second die is staggered relative to the first die, the second die include at least one exposed bond pad according to the staggering.32. The stacked semiconductor device of claim 31, wherein the one or more vias extend through the rim of the first die to the at least one exposed bond pad of the second die.33. A method substantially as hereinbefore described with reference Lo and as illusLraLed in any one of Figures 3, 5, 7, $ and 10 of Lhe accompanying drawings.34. A stacked semiconductor device substantially as hereinbefore described with reference to and as illustrated in any one of Figures 1 to 5 and 9 to 11 of the accompanying drawings. |
METHOD FOR INTERCONNECTING STACKED SEMICONDUCTOR DEVICESTechnical FieldEmbodimenLs described herein generaliy relaLe La muiLi-layer fabrication and electrical interconnections in microelectronic devices.BackgroundMulti-layer semiconductor devices include a plurality of dice stacked and adhered with electrical connections extending therebetween. In one example, the stacked device is farmed from two or more wafers (inciuding a pluraliLy of dice Lherein) LhaL are coupied LogeLher aL inLerfaces beLween Lhe Lwo or more wafers. The coupied wafers are diced and wire banded to form the piurality of devices.In some examples, some of the dice (e.g., chips within the dice) of the wafers are defective and unusabie. These defective dice are stiii incorporated into the muiti-fayered semi-conductor devices by virtue of coupiing between the wafers and the resulting devices are also defective and unusable even where many of the other dice within the devices are otherwise fully usable. Accordingly, wafer based fabrication decreases the overail yieid of usable muiti-iayer devices.In other exampies, interconneotions between dice within a muiti-iayered semi-conductor devioe are provided through wirebonding between the various iayers. For instanoe, two or more semiconductor dioe are stacked (e.g., adhered) on a substrate and eiectricai wires extend along the wire bond pads of the semi-conductor dice to the substrate.On Lhe subsLraLe Lhe elecLricai inLerconnecLions are furLher rouied Lo the bail grid arrays on the other side of the substrate. The stacked semiconductor dice are molded to protect both the dioe and the eiectricai wires. The eiectricai wires provide indirect coupling beLween Lwo or more iayers of Lhe mulLi-layered device. The indireci coupling between two or more of the layers with bond wires iimits data and power transmission (e.g., the speed of data transmission and corresponding performance) . 
Additionally, the introduction of a substrate and moid cap over the stacked dice increases the height (z heighL) of a muiLi-layered device.Improved muiLi-iayer fabricaLion Lechniques and fasLer interconnection techniques between layers are desirable that address Lhese and oLher Lechnical challenges.Brief Description of the DrawingsFigure 1 is a crcss sectional view cf a multi-layered semiccnductcr device including vias extending through rims that laterally extend from the dice.Figure 2 is a debailed cross secLional view of Lhe inulLi-layered semiconductor device of Figure 1.Figure 3 is a process flow diagram showing one example of a method for making a multi-layered semiconductor device.Figure 4 is a Lable showing Lhe differences in heighL of semiconducLor devices.Figure 5 is a flow chart showing one example of a method for making a multi-layered semiconductor device.Figure 6 is a table comparing the Z height of a semiconductor device including wire bonding and a semiconductor device including vias within lateral rims.Figure 7 is a block diagram showing another example of a method for making a multi-layered semiconductor device.Figure 8 is a block diagram showing yet another example of a method for making a multi-layered semiconductor device.Figure 9 is a cross sectional view of another example of a multi-layered semiconductor device including vias extending through one or more lateral rims.Figure 10 is a flow chart showing another example of a method for making a multi-layered semiconductor device.Figure 11 is a schemaLic diagram of an elecLronic sysLem in accordance with some embodiments of the disclosure.Description of EmbodimentsThe following descripLion and Lhe drawings suffioienLly illusLraLe specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodimenLs. EmbodimenLo seL forLh in Lhe claims encompass all available eguivalenLs of Lhose claims.Figure 1 shows one example of a semiconductor device 100 including a pluraliLy of dice 102. As shown for insLance in Figure 1 Lhe semiconducLor device 100 includes aL leasL a firsL die and a second die 104, 106. As shown Lhe firsL and second dice 104, 106 are coupled along upper and lower surfaces of the respective dice. As further shown in Figure 1, the semiconductor device 100 includes one or more rims 108 extending laterally, for instance according to a rim lateral exbension 110 dimension from each of Lhe dice 102. In an example, as shown with regard to the first and second dice 104, 106 the respective rims 108 extend laterally away from the corresponding edges of the first and second dice 104, 106.In one example, Lhe rims 108 are consLrucLed wiLh buL noL limiLed Lo a polymer maLerial, such as a dfeleoLric molding compound configured to mold around the first and second dice 104, 106 and accordingly protect the dice therein. In another example, the first and second dice 104, 106 are constructed with but not limited to harder materials than the molding compound used in the rims 108. For instance, the first and second dice 104, 106 are constructed with silicon. 
In another example, the rims 108 are constructed with a softer polymer (e.g., a lower elastic modulus) configured to protect the first and second dice 104, 106 of the semiconductor device 100.The softer polymer of the rims 108 is easier to cut through as described herein (e.g., laser drilf, mechanically drill, FIB removal, etch or the like) Referring again to Figure 1, as shown a plurality of vias 112 extend through one or more of the dice 102. As will be described herein, the conductive vias 112 alfow for communication and data Lransfer beLween each of Lhe dice 102 as well as exLernal circuiLry including, but not limited to, a ball grid array 114, a land grid array, a pin grid array cr the like positioned along a surface of the semiconductor device 100. As shown in the cross-sectional view of Figure 1, a pluraliLy of vias 112 are formed Lhrough Lhe rims 108 as opposed to the first and second dice 104, 106. As will be described herein, the vias 112 are in one example formed after stacking of the dice 102 into the configuration shown in Figure 1. For instance, the vias 112 are drilled into the rims 108 for instance with one or more mechanical, chemical (liLhography), or laser drilling meLhods.As will be furLher described herein, each of Lhe dice 102 in one example includes a redistribution layer, for instance a patterned series of conduciive Lraces provided adjacenL Lo each of Lhe dice 102.The redisLribuLion layer exLends over a fooiprinL of Lhe dice 102 and inLo Lhe rims 108. The conducLive Lraces formed along Lhe redistribution layer are configured for coupling with the vias 112.Accordingly each of the dice 102 of the semiconductor device 100 is able to communicate through the vias 112 with one or more of the other dice 102 and opLionally wibh Lhe ball grid array 114. By providing rims 108 for each of the dice 102 and corresponding vias 112 therein direct coupling between one or more of the dice 102 and the ball grid array 114 is accomplished in contrast to otherwise indirect couplings provided by wire bonding wiLh one or more dice covered in a mold cap (sized Lo encapsulaLe free wires) , and an underlying subsLraLe wiLh a ball grid array. That is to say, in one example the rims 108 extending from the plurality of dice 102 (e.g., according to the dimension of the rim lateral extension 110) provide a mechanism for compactly receiving a plurality of vias 112 therein that allow for the direct communication between the dice 102 of the semiconductor device without otherwise requiring a molded cap overlying wire bonds of the plurality of dice 102 and a substrate to or the like provide such communication. Accordingly, the height of the semiconductor device (e.g., a Z height) is substantially less than the height of a semiconductor device including a plurality of dice interconnected with wire bonding and then encapsulated within a molded cap and having an underlying substrate. For instance, in some examples, the Z height savings for the semiconductor device 100 having the vias 112 provided in the rims 108 may approach 0.2 mm relative to a comparable wire bonded device.Referring again to Figure 1, as further shown the semiconductor device 100 in one example includes a ball grid array 114 including a plurality of solder balls 116 provided along one or more of the dice 102. In Lhe example shown in Figure 1, Lhe firoL die 104 (e.g., Lhe redistribution layer of the first die 104 described herein) is directly coupled with the solder balls 116. 
Accordingly, the data transfer for each of the dice 102 through the vies 112 is correspondingly transmitted to the first die 104 and any of the other dice 102 Lhrough Lhe vias 112. Solder balls 116 provided in Lhe ball grid array 114 provide inpuL and ouLpuL Lo and from Lhe semiconducLor device 100 while at the same time avciding the need for a substrate underlying Lhe pluraliLy of dice 102 Lo oLherwise receive informaLion and LransmiL informaiion from a semiconducLor device. ThaL is Lo say, by direcLly coupling Lhe ball grid array 114 Lo Lhe redisLribuLion layer of the first die 104 the substrate otherwise used with some semiconductor devices is not needed with the semiconductor device 100 shown in Figure 1 thereby realizing additional space savings and providing a more compacL device. By providing a pluraliLy of vias 112 through the rims 108 along with a ball grid array 114 directly coupled along the first die 104 high speed transmission within (and to and from) the semiconductor device 100 is facilitated while at the same Lime Lhe overall heighL of Lhe semiconducLor device 100 is minimized.Referring now Lo Figure 2, a more deLailed cross-secLional view of the semiconductor device 100 previously shown in Figure 1 is provided.In the detailed view of Figure 2, the plurality of dice 102 are shown again in the stacked configuration and each of the dice 102 include a corresponding rim 108 extending laterally, for instance according to a rim lateral extension 110 from the dice 102. In one example, each of the dice 102 is part of a die assembly 201 including the respective die 102, a rim 108 and a redistribution layer 202 as described herein (and optionally a molding compound 200) As shown in Figure 2, a via 112 or a plurality of vias is provided through the rims 108 and extends continuously between the dice 102.Tn another example one or more of the vias 112 extends through one or more of the rims 108 to provide communication between two or more dice 102 of the semiconductor device 100 or between a die 102 and the ball grid array (through the redistribution layer 202) . That is to say, Lhe vias 112 provided in Lhe rims 108 exLend parLially or fully through the stack of die assemblies 201. Other vias 112 provided through the rims 108 extend through two or more of the rims 108 to accordingly provide communication between two or more of the dice 102 of Lhe sLacked semiconducLor device 100. The vias 112 are in one example drilled from both sides of the rims 108, for instance the upper surface 203 and a bottom surface 205 of the semiconductor device 100. In another example, the plurality of vias 112 are drilled from one or both sides of the semiconductor device 203, 205. In another example, Lhe vias 112 are drilled afLer sLacking. Accordingly, Lhe vias 112 are more easily aligned Lhrough Lhe previously sLacked diceS102. Drilling is conducted in a single efficient operation that consolidaLes formaLion of Lhe vias in a single sLep as opposed Lo Lhe formaLion of mulLiple separaLe vias and laLer sLacking and alignmenL of Lhe vias (e.g., Lhe dice).As described above, each of the die assemblies 201 includes a die 102 as well as a redistribution layer 202 formed adjacent to the die 102. As shown, the redistribution layer 202 extends beyond the foobprinb (e.g., Lhe laLeral foobprinL of Lhe die 102) and exbends into the rim 108. For instance, in one example the die 102 is encapsulated in a molding compound 200, for instance in a panel frame as described herein. 
Once received within the panel frame the molding compound 200 is inLroduoed Lo Lhe panel frame and hardens around each of Lhe dice 102. A paLLerning Lechnique is used Lo prouide Lhe conductive traces of the redistribution layer 202 along each of the dice 102. As shown for instance in Figure 2, the redistribution layer 202 accordingly extends laterally from the plurality of dice 102 over and across the plurality of rims 108 of each of the die assemblies 201. The redistribution layer 202 thereby provides a "fan-out" configuration that allows for the distributed interconnection of each of the dice 102 with other dice within the semiconductor device 100 as well as the ball grid array 114 (e.g., by way of the vias 112).Additionally, the fanned out redistribution layer 202 cooperates with the plurality of vias 112 provided through the rims 108 to accordingly minimize the overall height of the semiconductor device 100 while at the same time providing direct connection between each of the dice 102 and corresponding direct connections to the ball grid array 114 underlying the first die 104. The redistribution layer provides conducLive Lraces LhaL exLend laLerally from Lhe dice LhaL are Lhen interconnected by way of the vias 112. Stated another way, the vias 108 and the redistribution layers 202 provide interconnections that are housed within the rims 108 without requiring a larger mold cap (e.g., used Lo encapsulaLe oLherwise free wires).As further shown in Figure 2, the molding compound 200 (e.g., a dielectric resin that forms a corresponding polymer) is provided laterally and over top of the plurality of dice 102 prior to stacking of the dice. In another example, the molding compound 200 is provided on Lhe sides of Lhe pluraliLy of dice 102 as opposed Lo along an upper surface of each of Lhe dice 102. The molding compound 200 exLends laterally to form the rims 108 having a rim lateral extension 110 relaLive Lo Lhe dioe 102. As previously desoribed, afier molding of Lhe pluraliLy of dice 102 (as described herein in a flaL panel having a wafer or panel configuraLion) Lhe pluraliLy of dioe 102 are ouL from the panel, tested for their operability and then staoked into the oonfiguration shown in Figure 2, for instanoe the stacked configuration of the semiconductor device 100. In another example, Lhe pluraliLy of dioe are LesLed prior Lo boLh singulabion from an original silicon wafer and formation of a reconstituted dioe panel (described herein) Each of the dice 102 is coupled with one another with a layer of an adhesive 204 or oLher bonding subsLance provided beLween each of Lhe die assemblies 201. As shown fn Figure 2, Lhe adhesive 204 aligns each of the dice 102 and maintains the dice 102 in an aligned configuration. After stacking of the dice 102, in one example the plurality of vias 112 are drilled through the semiconductor device 100 to thereby provide the interconnections between each of the dice 102 by way of the redistribution layers 202 of each of the die assemblies 201.In another example, the vias 112 are formed separately in each of the die assemblies 201 prior to stacking of the die assemblies in the configuration shown in Figure 2. Accordingly, the vias 112 are aligned during the stacking procedure to accordingly ensure communication between each of the die assemblies 201 (and the ball grid array 114) . 
Tn one example, the vias 112 are filled with a conductive material, such as copper or the like, sputtered or provided by vapor deposited to interconnect each of the dice 102 of the semiconducLor device 100 as well as connecL Lhe dice 102 wiLh Lhe ball grid array 114.Referring again to Figure 2, as previously described herein each of the vias 112 are shown within the rims 108 and laterally spaced relaLive Lo each of Lhe dice 102. ThaL is Lo say, Lhe dice 102 are interconnected by way of conductive vias 112 provided through the laterally extending rims 108. By providing interconnections between the dice 102 in the lateral portions of each of the die assemblies 201 the connections between each of the dice 102 as well as the ball grid array 114 are consolidaLed Lo Lhe vias 112 as well as Lo Lhe redisLribuLion layers 202 fanned ouL from each of Lhe dice 102 (e.g., the lateral rims 108) . Accordingly, components of other semiconductor devices such as a conducLive subsLraLe provided underneaLh Lhe sLacked dice and a mold cap provided Lo encapsulaLe and proLecL Lhe dice as well as wire bonds beLween each of Lhe dice and Lhe underlying substrate are accordingly avoided. Instead, with the semiconductor device 100 each of the dice 102 is molded with the molding compound to provide a laterally extending rim 108 for the redistribution layers 202 as well as space for Lhe laLerally posibioned vias 112.Accordingly, the vertical height or I height of the semiconductor device 100 is minimized relative to the I height of other configurations of semiconductor devices using wire bonds and underlying subsLraLes (as well as corresponding molding caps over Lop of Lhe wire bonds) Additionally, because the vias 112 are provided through the rims 108 the vias 112 are more easily formed within the semiconductor device 100. For instance, vias in at least some examples are provided through the silicon of the dice 102. Silicon is more difficult to drill through because it is brittle and harder (e.g., has a higher elastic modulus) . However, the polymer used in the molding compound 200 of the semiconductor device 100 provides a softer material (relative to silicon) for ready drilling of each of the vias 112. The softer material of the rims 108 accordingly ensures the vias 112 are easily formed in the semiconductor device 100 and accordingly a conductive material is easily deposited within the vias 112 to interconnect each of the redistribution layers 202 of the corresponding dice 102 of the die assemblies 201. Similarly, because the vias 112 are easily formed through the molding compound of the rims 108 damage Lo Lhe semiconducLor device 100 for insLance before or after forming of the stacked configuration of dice 102 is thereby minimized. Tn contrast, drilling through the silicon of one or more of silicon dice is problematic as chipping or damage to the semiconducLor wiLhin Lhe die is a risk. One example of Lhe molding compound 200 includes, but is not limited to, an epoxy resin including one or more additives configured to adjust the properties of the rims 108 (e.g., the package of the semiconductor device 100) to meet packaging requirements. 
For instance, an epoxy resin includes addiLives Lo adjusL one or more of elasLic modulus, coefficienL of Lhe Lhermal expansion, curing LemperaLure, curing Lime, glass LransiLion temperature, thermal conductivity and the like.Figure 3 shows a process flow diagram of a series of sohemaiio views of one example of a process for Lhe fabricaLion of a semiconduoLor device, such as Lhe semioonducLor device 100 shown in Figures 1 and 2. In a first stage 301 a plurality of dice 302 are shown in a monolithic semiconductor wafer 300. For Instance, the plurality of dice 302 are formed in a silicon wafer as is previously known (by way of masking and eLohing of Lhe wafer) . The dice 302 In the silicon wafer 300 are probed to determine which of the dice are operable (operational dice without manufacturing or performance errors) . The semiconductor wafer 300 is singulated to accordingly separaLe each of Lhe dice 302. OpLionally, Lhe dice 302 are probed afLer singulaLion and Lhen separaLed.The operational dioe 306 are separated from the remainder of the dice 302 and in stage 303 the operational dice 306 are positioned within a panel frame 304. As shown in Figure 3, the panel frame 304 in one example has a substantially similar configuration to the semiconductor wafer 300 shown in stage 301. In another example as described herein the panel frame 304 has another shape, for instance a square or rectangle. The plurality of operational dice 306 are fit into the panel frame 304 and a reconstituted dice panel 308 is formed.For instance, a molding compound such as a resin or the like that hardens into a dielectric polymer is provided to the panel frame 304.The molding compound hardens around each of the operational dice 306 to accordingly form the separate die assemblies 201 shown in Figure 2 (including the dice 102 as well as the corresponding rims 108) . In the configuration shown in stage 303 the reconstituted dice panel 308 is ready for sLacking for insLance Lo form one or more of Lhe semiconductor devices 100 previousfy described herein.In another example, after forming the reconstituted dice panel (e.g., after molding of the operational dice 306) the redistribution layers 202 for each of Lhe dice 306 are formed. For insLanoe, making and lithography are used to etch the conductive traces of the redistribution layers 202 on the molding compound 200 and the dice 306. As previously described, the redistribution layers 202 have a fanned out configured extending over the footprint of the operational dice 306 as well as Lhe rims 108 (e.g., see Figure 2).Referring now Lo sLage 305 Lhe reconsLiLuLed dice panels 308 are shown in an exploded configuration with each of the plurality of dice panels 310 sLacked. As shown, Lhe operaLion die 306 of each of Lhe pluraliLy of reoonsLiLuLed dice panels 310 are shown in a subsLanLialiy similar configuraLion and are accordingly aligned between each of the reconstituted dice panels 310. That is to say, the operational dice 306 of each of the dice panels 310, for instance including first and second reconstituted dice panels 312, 314, are aligned Lo accordingly provide a sLacked semiconducLor device upon separation (singulation) of the stacked dice in a later step of the process. 
As previously described, in one example an adhesive 204 is applied between each of the plurality of reconstituted dice panels 310 Lo ensure Lhe coupling beLween Lhe piuraliLy of reconsLiLuLed dice panels 310 including Lhe alignmenL of Lhe dice Lhereln is reLained.At stage 307 the plurality of vias 112 are formed in the stacked plurality of reconstituted dice panels 310. For instance, as shown at stage 307 the stacked panel assembly 316 includes the plurality of reconstituted dice panels 310 in a stacked and adhered configuration.Accordingly, the plurality of dice 102 (corresponding to the operational dice 306) of the panels 310 are aligned in a configuration corresponding to the arrangement of the device 100 shown in Figures 1 and 2. The vias 112 are formed within the rims 108 (including the redistribution layers 202 shown in Figure 2) extending laterally away from each of the dice 102 (306 shown in Figure 3) In one example, the vias 112 are formed in a batch process, for instance including drilling through the rims 108 of each of the respective dice 102. That is to say, in the stacked panel assembly 316 (prior to singulation) the plurality of vias 112 are drilled Lhrough Lhe sLacked panel assembly 316 Lo accordingly faciliLaLe rapid formation of the vias 112 in each of the semiconductor devices at a single manufacturing stage. In yet another example, the stacked panel assembly 316 is singulated into a plurality of the semiconductor devices 100. The pluraliLy of separaLed semiconducLor devices 100 are thereafter separately drilled to form the vias 112 extending through the rims 108. After formation of the vias 112 a conductive material, such as copper, is sputtered or vapor deposited within the channels of the vias 112 to electrically couple the dice 306 (e.g., through the redisLribuLion layers 202 of Lhe rims 108) As shown aL sLage 309 Lhe ball grid array 114 (also shown in Figures 1 and 2) is also provided. In a similar manner to stage 307, in one example Lhe ball grid arrays 114 for each of Lhe semiconduoLor devices 100 are formed along Lhe semiconducior devices while sLill reLalned wiLhin Lhe sLacked panel assembly 316 shown aL sLage 307.Optionally the ball grid arrays 114 are formed along the semiconductor devices 100 after singulation, for instance into the semiconductor device 100 shown in stage 309.Referring again Lo sLage 309, Lhe finished semiconducLor device is shown with the stacked dice 102 and the vias 112 extending through the rims 108. The ball grid array 114 is also shown on the bottom layer of the semiconductor device 100, for instance coupled wlLh Lhe redisLribuLion layer associaLed wiLh Lhe firsL die 104 (as shown in Figure 2).The process shown in Figure 3 schematically provides a plurality of semiconductor devices 100 such as the device shown in Figures 1 and 2. Because each of the panel frames 304 and the corresponding reconstituted dice panels 310 including only operational dice 306 semiconductor devices 100 including one or more damaged or faulty dice 102 are substantially avoided. That is to say, referring again to the stage 305, each of the operational dice 306 incorporated into each of the plurality of the reconstituted dice panels 310 is previously tested and known to be operational. Accordingly, the semiconductor devices 100 generated from the stacked panel assembly 316 are accordingly operational. 
The process shown in Figure minimizes or avoids the incorporation of faulty or damaged semiconductors relative to prior fabrication techniques, for instance using a monolithic semiconductor wafer having operational, faulty and damaged semiccnducLors Lherein. In previous fabricaLion Lechnigues Lhe fauliy or damaged semiconductors are incorporated into the finished devices resulting in disposal of the entire otherwise serviceable device.Stated another way, with the process described herein one or more (e.g., a pluraliLy of) faulLy or damaged dice 302 oLherwise provided in one or more of the semiconductor wafers 300 do not make their way into the otherwise fully operational semiconductor devices 100 fabricated as discussed above.Accordingly, the yield rate of the semiconductor devices 100 is subsLanLially higher Lhan LhaL of oLher processes using a full semiccnducLor wafer 300 including operaLional and faulLy or damaged dice. In addition to the higher yield the provision of the vias 112 for insLance Lhrough Lhe rims 108 provides direcL inLerconnecLion beLween each of die dice 102 wiLhoLtL requiring a larger mold cap and subsLraLe cLherwise needed for wire bonded semiconducLor devices.Accordingly, the semiconductor device 100 generated from the process shown in Figure 3 has a more reliable operational character as well as a minimized vertical height (Z height) relative to other semiconductor devices formed by way of wire bond inLerconneobions along wibli substrates.Referring now to Figure 4, two additional stages 403, 405 are provided as an alternative to the stages 303 and 305 shown in Figure 3. For insLance, Lhe panel frame 400 shown in Figure 4 has a square or recLangular (e.g., a non-circular) configuraLion relaLive La Lhe wafer configuration of the panel frame 304 shown in stage 303. The panel frame 400 accordingly arranges the operational dice 306 in a grid like pattern having a square 2ectangular configuration. The reconstituted dice panel 402 shown in stage 403 is then stacked into a plurality of reconstituted dice panels 404 as shown at stage 405 in Figure 4. As further shown in Figure 4, the plurality of reconstituted dice panels 404 includes at least first and second The process previously described in Figure 3 is then carried out in a substantially similar manner with the plurality of reconstituted dice panels 404 provided in a stacked configuration. That is to say, the vias 112 are in one example formed through the plurality of rims 108 extending laterally away from each of the dice 102. In one example, the vias 112 are formed in the rims 108 while the dice 102 are reLained in die sLacked configLiraLion (e.g., prior Lo singulation) . Tn a similar manner the ball grid array 114 is also applied to the first reconstituted dice panel 406 while the first reconstituted dice panel 406 of the semiconductor device 100 is reLained in die sLacked panel assembly as shown in Figure 3 aL sLaqe 307. In another example, as previously described herein the vias 112 and the ball grid arrays 114 are formed on the separated semiconductor devices 100, for instance after singulation of the semiconductor device 100 from the stacked plurality of reconstituted dice panels 404.Figure 5 shows one cross-secLional view of a semiconducLor device 500 including an underlying substrate 506 and wire bonding between the dice 502 of Lhe device 500. 
As furiher shown in Figure 5, each of Lhe dice 502 are connecLed wiLh Lhe subsLraie 506 by way of one or more wires 504 bound Lo each of Lhe dice 502 and exLending Lhrough Lhe semioonduotor devioe 500 for instance through a mold cap 510. As shown, at least some of the plurality of wires 504 provide interconnection between each of the dice 502 by first extending from Lhe respecLive dice 502 Lo Lhe suhsbraLe 506 (Lhe suhsbraLe including a plurality of conductive traces) and then extending from the substrate 506 by way of additional wires 504 to one or more of the other dice 502. As further shown in Figure 5, a bail grid array 508 is provided along Lhe opposed surface of Lhe subsLraLe 506 and inLerconnecLed wiLh Lhe dice by way of Lhe wires 504 exLending from the substrate 506 to the dioe 502.In contrast to the assembly shown in Figure 5, the semiconductor device 100 described herein (Figures 1 and 2) includes a plurality of dice 102 in a stacked configuration including a plurality of laterally extending rims 108 extending laterally (e.g., see the lateral extension 110) from each of the dice 102. The rims 108 provide a molding compound, resin or the like configured for drilling and formation of vias 112 therein. As previously described herein, each of the die assemblies 201 is formed with a redistribution layer 202, for instance to provide a fanned-out configuration of conduotive traces extending beyond the horizontal footprint of each of the dioe 102. Accordingly, with the vias 112 extending through the redistribution layers 202 electrical interconnections between each of the dice 102 is provided at a compact lateral location relative to the dice 102 (e.g., in Lhe rims 108). The inLerconneciions beLween Lhe dice are provided in the lateral spaces adaoent to each of the dioe 102 without otherwise requiring a farge mold oap 510 to house the plurality of wires 504 of the semiconductor device 500 shown in Figure 5. AddiLionally, Lhe vias 112 exiend beLween each of Lhe dice 102.For instance, the vias 112 extend between two or more of the dioe 102 to provide direot conneotions between the dioe 102 and accordingly avoid an intervening substrate 506 as shown in Figure 5.Further, the semiconductor device 100 shown in Figures 1 and 2 does noL need Lhe subsLraLe 506 for inpuL or ouLpuL Lo or from Lhe device 100. InsLead, Lhe device 100 including Lhe dice 102 interconnected with the vias 112 and the redistribution layers 202 are configured Lo provide inpuL and oupuL Lhrough Lhe ball grid array 114 coupled along Lhe redisLribuLion layer 202 of Lhe firsb die 104.SLaLed anoLher way, Lhe subsLraLe 506 and Lhe mold cap 510 as shown In Figure 5 are not otherwise needed in the semiconductor device 100 shown in Figures 1 and 2. Instead, the rims 108 laterally extending from the dice 102 provide space for both the redistribution layer 202 including iLs conducbive braces as well as Lhe vias 112 drilled through the rims 108. Accordingly, by using the semiconductor device 100 space savings are realized vertically (1 height) relative to the semiconductor device 500 shown in Figure 5 (requiring the larger mold cap 510 as well as Lhe subsLraLe 506) . AddiLionally, Lhe semiconducLor devIce 100 shown in FIgure 1 includes relaLively direcL connections by way of the vlas 112 between each of the dice 102 (without an intervening substrate 506) . 
This arrangement provides for direct and correspondingly faster and more reliable data transmission between the dice 102 and the ball grid array 114 associated with the redistribution layer 202 of the first die 104 (see Figure 2) Referring now to Figures 6, a S height comparison table is provided for a variety of semiconductor devices having the configuration provided herein, for instance the configuration shown with the device 100 of Figures 1 and 2. As described herein, the semiconductor devices 100 include one or more die assemblies 201 each having a die 102, a rim 108, and one or more vias extending through the rim 108 to a redistribution layer 202. The S heights 602 for each die assembly and the corresponding molding compound used in the rims 108 of each die assembly are shown in the rows for the Semiconductor Device wiLt Vias in Rims of Lhe Lable. The ToLal Z heighLs 602 correspond to the number of die assemblies 201 (each having a height of approximately 25 microns and 10 microns for the molding compound) stacked for a particular package type. The semiconductor devices 100 are arranged in ascending order wiLh Lhe firsL device (single die package or SDP) including a single die assembly, the second (double die package, DDP) with two die assemblies, and so on (e.g., QDP includes four assemblies, DDP includes eight assemblies and HDP includes 16 assemblies) The corresponding S heighLs 604 of Lhe semiconducbor devices including wire bonding and a subsLraLe (see Lhe semiconducLor device 500 shown in Figure 5) are provided in the first row of the table. As shown, Lhe die assembly Z heighLs for a wire bonded device are 25 microns, and ihe mold cap and clearance Z heighis per die assembly vary according Lo Lhe number of die assemblies of Lhe devices. The total Z heights for each of the devices is shown along the bottom row and based on the Die Assembly Z height and the Nold Cap and Clearance Z height multiplied by the number of die assemblies for the device.As shown in Figure 6, Lhe Tobal Z heighLs 602 of each of Lhe devices having a fanned out redistribution layer 202 with vias 112 in the rims 108 is smaller relative to the corresponding Total Z heights of the corresponding devices with the arrangement shown in Figure 5 (e.g., including wire bonding, a mold cap and a subsLraLe) . The savings in Z heighL for each of Lhe respecLive die assemblies 201 is carried forward to the stacked semiconductor devices 100 having two or more die assemblies. That is to say, a device having two more dice (e.g., die assemblies 201) with the configuration described herein multiplies the Z height savings for each of the stacked die assemblies 201 relative to the corresponding die assembly used in a package that uses wire bonding, a mold cap and a substrate.Figure 7 shows one example of a method 700 for making a stacked semiconductor device, such as the semiconductor device 100 previously shown herein. In describing the method 700 reference is made to one or more components, features, functions and the like described herein.Where convenient, reference is made to the components and features with reference numerals. Reference numerals are exemplary and are not exclusive. 
For instance, components, features, functions and the like described in the method 700 include, but are not limited to, the corresponding numbered elemenis, oLher corresponding feaLures described herein (both numbered and unnumbered) , as well as their equivalents.At 702, the method 700 includes forming rims 108 on a first die 104 and a second die 106. The rims 108 exLend laLerally away from Lhe first and second dice 104, 106. For instance, as shown in Figure 1 the plurality of rims 108 extend from each of the respective dice according to a rim lateral extension 110.At 704, the second die 106 is stacked over the first die 104. For insiance, as shown in Figure 2 Lhe die assemblies 201 including, for insiance, Lhe respeciive dice 102 and Lhe respecLive redisLribuLion layers 202 are coupled together in a stacked configuration. In one example, sLacking Lhe dice such as Lhe second die 106 over Lhe firsL die 104 includes appiying an adhesive Lo a surface beLween aL ieasL Lhe firsL and second dice 104, 106 Lc correspondingly adhere Lhe dice together in the stacked configuration.At 706, one or more vias 112 are drilled through the rims 108 after stacking of the die assemblies 201 in the configuration shown in Figure 2. The one or more vias 112 exLend heLween aL leasb Lhe firsb and second dice 104, 106. In another example, the method 700 includes drilling the one or more vias 112 through the rims 108 prior to stacking, for instance while the plurality of dice 102 are retained wiLhin a panel frame, such as Lhe panel frame 304 shown aL sLage 303 in Figure 3. The piuraliLy of dice 102 are Lhen arranged in Lhe stacked configuration with the corresponding vias 112 aligned according to the alignment of the plurality of dice 102 (e.g., dice assemblies 201) relative to one another. After drilling of the one or more vias 112 a conductive material is applied through the vias 112 for instance by vapor deposition, sputtering or plating to correspondingly interconnect the dice 102. For instance, the plurality of vias 112 provide interconnections through redistribution layers 202 associated with each of the dice 102.Additionally, in another example the one or more vias 112 provide interconnections between the dice 102 as well as a ball grid array 114 provided along the redistribution layer 202 associated with the first die 104.Referring now to Figure 8, another example of a method 800 for making a stacked semiconductor device 100 is provided. In describing Lhe meLhod 800 reference is made Lo one or more componenLs, feaLures, functions and the like described herein. Where convenient reference is made to the oomponents with reference numerals. The reference numerals provided are exemplary and are not exclusive. For instance, Lhe feaLures, componenLs, funcLions and Lhe like described in Lhe method 800 include, but are not limited to, corresponding numbered elements, other corresponding features described herein (both numbered and unnumbered) as well as their equivalents.Referring again to Figure 8, at 802 the method 800 includes sorLing dice 302 inLo a pluraliLy of operaLional dice, such as Lhe operaLional dice 306 shown aL sLage 303 in Figure 3. The pluraliLy of operational dice 306 are probed or tested to determine their operabiliLy. AL 804, aL leasi a firsL reconsLiLuLed dice panel 308 is formed.In one example, forming Lhe firsL reconsLiLuLed dice panel (as well as additional dice panels) includes arranging a sorted plurality of operational dice 306 within a panel frame 304 at 806. 
In another example, the sorted operational dice 306 are arranged within a non-circular panel frame, such as Lhe panel frame 400 shown in Figure 4.At 808, a resin is molded around the plurality of operational dice 306 within the panel frame 304 (or the panel frame 400) to form the first reconstituted dice panel 308. As previously described herein, rims 108 are formed wiLhin Lhe resin and exLend laLerally from each of Lhe pluraliLy of operaLional dice 306.In one example, the process for forming a reconstituted dice panel at 804 is repeated for additional dice panels to accordingly generate the plurality of reconstituted dice panels 312 or 404 shown in Figures 3 and 4, respectively. As previously described herein, the plurality of reconstituted dice panels are then stacked into the stacked panel assemblies 316 and the corresponding square or noncircular configuration shown in Figure 4 to provide a stacked series of dice 102 for each of the resulting semiconductor devices 100 prior to singulation (shown at stage 309 in Figure 3) While in the stacked panel assembly 316, for instance shown at stage 307 of Figure 3, a plurality of vias 112 are formed through the associated rims 108 of each of the dice assemblies 201 included in the semiconductor devices 100. For instance, while in the stacked panel assembly 316 shown at 307 the plurality of vias 112 are formed in a baLch process Lo accordingly minimize Lhe Lime needed for generaLion of vias 112 while the semiconductor device 100 are otherwise separated. After formation of the vias 112 the semiconductor devices are singulated from the stacked panel assembly 316 to form the semiconducLor devices 100 shown aL sLage 309 in Figure 3 and furLher shown in detail in Figures 1 and 2.Additionally, in another example a ball grid array 114 (shown in Figures 1 and 2) is provided to the first die 104 associated with each of the semiconductor devices 100 while still part of the stacked panel assembly 316. In yeL anoLher example, boLt of Lhe vlas 112 as well as Lhe ball grid arrays 114 associaLed wiLh each of Lhe semiconducLor devices 100 are formed after singulation of the semiconductor devices from Lhe sLacked panel assembly 316.Figure 9 shows anoLher example of a semiconduoLor device 900 including a pluraliLy of dice 102 having corresponding rims 904. As shown in Figure 9, the dice 102 are provided in a staggered configuration (e.g., a shifted or stepped configuration). For instance, each of the dice assemblies 902 is shifted relative to one anobher Lo form a sLaggered series of dice in Lhe semioonduobor device 900. As shown is Figure 9, each of the dice 102 are shifted relative to one another to expose at least one face including one or more bond pads 905 of each of the dice 102. In one example, each of the dice 102 is shifLed for insLance according Lo a die shifL 906 LhaL accordingly sLaggers Lhe respeoLive die relaLive Lo an adjaoenL die.Tn another example, the dice 102 are shifted varying degrees (and optionally in different directions) to accordingly expose one or more bond pads 905 according to the shifting. 
That is to say, one or more of the dice 102 are shifted one or more of a greater or iesser degree or in a differing direction according to the positions of the respective bond pads 905.As shown in Figure 9, each of the dice are staggered in the same direction providing a staggered configuration (stair stepped) to accordingly expose the corresponding bond pads 905 of each of the dice 102 (excepting the bottom most die 102 of the semiconductor device 900) . As previousiy described herein each of the dice 102 are incorporated into respective die assemblies 902. As shown, each of the die assembiies 902 include a die 102 as well as one or more corresponding rims 904 for each of the dice 102.As furLher shown in Figure 9, each of Lhe pluraliLy of dice 102 are bonded with one another, for instance, with an adhesive 908 provided on the surfaces facing the adjacent dice 102. The adhesive 908 retains each of the dice 102 in the staggered configuration and accordingly reLains Lhe die shift 906 as shown in Figure 9 (one exampie of a die shift) to thereby maintain the bond pads 905 in an exposed configuration for eventual interconnection. In one example, the plurality of dice 102 are bonded together with the adhesive 908 prior to the application of a molding compound, such as the molding compound 200 previously shown in Figure 2. As previousiy described Lhe molding compound 202 cures inLo a dieleoLrio poiymer and correspondingly provides the rims 904 for each of the die assemblies 902. AfLer adhesion of each of Lhe dice 102 Lhe molding compound 202 is applied around Lhe sLacked dice 102 Lo accordingly form an inLermediaLe sLage of Lhe semioonducLor device 900.One or more vias 912 are drilled through one or more of the rims 904 to accordingly provide interconnection between the dice 102 and a corresponding redistribution layer 910 associated with one or more of Lhe dice 102 (e.g., Lhe bobLom mosL die shown in Figure 9) adjacenL Lo the ball grid array 114. As shown in Figure 9 each of the vias 912 couple with the corresponding bond pads 905 for the respective overlying dice 102. The plurality of vias 912 associated with each of Lhe dice 102 correspondingly exLend from Lhe bond pads 905 Lhrough one or more of Lhe rims 904 associaLed wiLh Lhe corresponding die assemblies 902. That is to say, the top most die 102 of the semiconductor device 900 includes one or more vias 912 extending through the respective rims of the underlying dice 102.After formation of the vias 912 (e.g., by mechanical drilling, lithography, laser drilling or the like) a redistribution layer 910 similar to the redistribution layer 202 shown in Figure 2 is provided for at least one of the dice 102, such as the die 102 corresponding to the bottom of the semiconductor device 900 adjacent to the ball grid array 114. In one example the redistribution layer 910 provides a fanned out configuration of conductive traces extending over the footprint of the die 102 as well as the corresponding overall footprint of the stacked dice 102. That is to say, as shown in Figure 9 the redistribution layer 910 extends beneath each of the dice 102 and provides conductive traces for interconnection with the vias 912 exLending from Lhe respecLive bond pads 905 of each of Lhe dice 102 through the rims 904. 
Tn another example, after formation of the redistribution layer 910 the ball grid array 114 is applied to the semiconductor device 900 along the redistribution layer 910 to provide inpuL and ouLpuL connecLions for Lhe semiconducLor device 900.Referring now to Figure 10, another example of a method for forming a semiconductor (e.g., the semiconductor device 900 shown in Figure 9) is provided. As with the method previously described and shown in Figure 5 the method is shown in a series of schematic stages 1001, 1003, 1005, 1007. AL 1001 a pluraliLy of dice 102 singulaLed from one or more monoliLhic semiconducLor wafers are LesLed for operability. The operational dice 102 (without faults or damage) are Lhen assembled inLo a dice sLack 1002. For insLance, Lhe dice 102 of one or more dice sLacks 1002 are adhered. As shown aL sLage 1001 Lhe dice sLack 1002 has a sLaggered configuraLion (sLepped, shifLed or Lhe like) that correspondingly exposes the bond pads 905 of at least one surface of each the dice 102 of the dice stack 1002. As described above, in another example, the dice 102 are shifted one or more of varying degrees or direcLions according Lo Lhe locaLion and number of the respective bond pads 905.Referring now to stage 1003 in Figure 10 each of the dice stacks 1002 is positioned within a panel frame 1004 including a series of caviLies siked and shaped Lo receive each of Lhe dice sLacks 1002.AfLer posiLioning of Lhe die sLacks 1002 wiLhin Lhe caviLies of Lhe panel frame 1004 a molding compound is applied around the plurality of dice stacks 1002 within the panel frame 1004 to form the rims 904 of the die assemblies 902 previously shown in Figure 9. As described herein, in one example, the molding compound 202 is a resin that forms a dielectric polymer having a lower modulus of elasticity compared to the material of the dice (e.g., sificon) . The panel frame 1004 in combination with forms a reconstituted dice panel 1006 including a plurality of the molded dice stacks therein. Stage 3 shows a circular (wafer shaped) panel frame 1004. In another example, the panel frame has a different shape such as the rectangle or square shown in Figure 4.As shown in stage 1003, the die assemblies 902 formed by the dice stack 1002 include the rims 904 extending laterally from each of the dice 102. As shown in this configuration the dice stack 1002 is sLaggered wiLhin Lhe molding compound 202. Each of Lhe rims 904 for the respective dice 102 correspondingly vary in the lateral dimension according to the shifted location of the each of the dice 102 within the dice stack 1002. The bond pads 905 exposed through the shifting of Lhe dice face Lhe bcLLcm (as presenLed in Figure 10) of Lhe dice stack 1002 toward the rims 904 of the underlying dice 1002.At stage 1005, a plurality of vias 912 are drilled into the rims 904 underlying the bond pads 905 to interconnect each of the dice 102 with a redistribution layer 910 provided along one of the dice 102.For insLance, in Lhe example shown in Figure 10 Lhe boLLcm mosL die (shown as Lhe Lop mcsL die in Lhis inverLed ccnfiguraLicn) is provided with the redistribution layer 910. Optionally, prior to forming the conduciive Lraces of Lhe redisLribuiion layer 910 ihe pluraliiy of vias 912 are drilled inLo Lhe rims 904 Lo accordingly form Lhe passages LhaL will receive oonducLfve maLerial Lo InLerconnecL wILh the later formed redistribution layer 910. 
A conduotive material is applied to the channels of the vias 912 to eventually interoonnect the plurality of dioe 102 of the die stack 1002 with the redistribution layer of Lhe semiconduoLor devioe 900. In anobher example, Lhe redistribution layer 910 is formed prior to drilling of the vias 912.At stage 1007 the semioonductor devioe 900 is finished by applying a ball grid array 114 to the redistribution layer 910 previously formed aL sLage 1005. As shown aL sLage 1007 Lhe semiconduoLor device 900 is Lhen singulaLed from Lhe reconsLiLuLed dioe panel 1006. A plurality of semiconductor devioes 900 are singulated from the same As with the previously described semiconductor device 100 the semiconductor device 900 shown in Figures 9 and 10 provides direct connections with a redistribution fayer 910 for instance a redistribution layer 910 associated with the bottom most die 102 and the die stack 1002. The plurality of vias 912 provide direct connection with the redistribution layer 910 without requiring an otherwise larger mold cap to accordingly contain and encapsulate a plurality of wire bonds extending from each of the dice to a substrate (larger than the redistribution layer 910) underneath the dice stack.The staggered configuration of the dice stack 1002 exposes the bond pads 905 of one or more of the dice 102 and thereby allows for the vias 912 extending from the bond pads 905 through the rims 904 to inLerconnecL each of Lhe respecLive dice 102 wiLh Lhe redisLribuLion layer 910. The direct connections provided by the vias 912 between the bond pads 905 and the redistribution layer allows for a shallow layer of molding compound compared to the otherwise deeper (thicker) mold cap needed Lo reliably encapsulaLe wires, such as Lhe 504 shown in Figure 5.Additionally and as previously described by providing the vias 912 through the molding compound 202 (a dielectric polymer) damage to the semiconductor device 900 is minimized as drilling through the semiconducLor device 900 is conducLed Lhrough Lhe sofLer maLerial (lower elasLic modulus) of Lhe molding compound 202 compared Lo Lhe harder material of the silicon of the dice 102. Additionally, with Lhe meihod shown in Figure 10 Lhe process of forming Lhe redisLribuLion layer 910 is isolaied Lo one of Lhe dice 102 of Lhe dice sLack 1002. For insLance, as described herein Lhe redisLribuLion layer 910 is provided to the bottom most die 102 of the die stack 1002. Accordingly the vias 912 extend through the lateral rims 904 of the dice 102 of the dice stack 1002 to the redistribution layer 910 associaLed wibh Lhe boLLom mosL die 102. The redisLribuLion layer 910 thereby consolidates the interconnections of each of a plurality of redistribution layers otherwise associated with each of the dice 102 into a single redistribution layer that also provides interconnections wiLh Lhe ball grid array 114. In anoLher example, Lhe boLLom mosL die 102 includes a pluraliLy of redisLribuLion layers (e.g., mulLiple adjacent layers 910) that are localized to the die while the remainder of the dice 102 overlying the bottom most die 102 are interconnected with the vias 912. In still another example, each of the dice 102 includes a respective redistribution layer 910 and the dice 102 are interconnected through the redistribution layers 910 with the vias 912.An example of an electronic device using semiconductor devices 100, 900 as described in the present disclosure is included to show an example of a higher level device application for the present disclosure. 
Figure 11 is a block diagram of an electronic device 1100 incorporating at least one semiconductor device constructed with the fabrication methods and structure in accordance with at least one embodiment of the disclosure. The electronic device 1100 is merely one example of an electronic system in which embodiments of the presenL disclosure are used. Examples of elecironic devices 1100 include, but are not limited to, personal computers, tablet computers, mobile telephones, game devices, MP3 or other digital music players, etc. In this example, the electronic device 1100 comprises a data processing sysLem LhaL includes a sysLem bus 1102 Lo couple Lhe various components of the system. System bus 1102 provides communication links among the various components of the electronic device 1100 and can be implemented as a single bus, as a combination of busses, or in any other suitable manner.An elecLronic assembly 1110 is coupled Lo sysLem bus 1102. The elecLronic assembly 1110 can include any circuiL or combinaLion of circuits. In one embodiment, the eleotronio assembly 1110 includes a processor 1112 which can be of any Lype. As used herein, "processor" means any Lype of compuLaLional circuiL, such as buL noL limiLed Lo a microprocessor, a microoonLroller, a complex insLrucLion seL compuLing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor (DSP), multiple core processor, or any oLher Lype of processor or processing circuiL.Other types of circuits that may be included in the electronic assembly 1110 are a custom circuit, an application-specific integrated circuit (ASIC) , or the like, such as, for example, one or more circuiLs (such as a communicaLions circuiL 1114) for use in wireless devices like mobile Lelephones, personal daLa assisLanLs, porLable computers, two-way radios, and similar eleotronio systems. 
The IC can perform any other type of function.The electronic device 1100 (for instance a drive such as a Solid State Drive or flash memory) can also include an external memory 1120, which in turn can include one or more memory elements suitable to the particular application, such as a main memory 1122 in the form of random access memory (RAM) , one or more hard drives 1124, or one or more drives that handle removable media 1126 such as compact disks (CD), flash memory cards, digital video disks (DVD), and the like.The electronic device 1100 can also include one or more of a display device 1116, one or more speakers 1118, a keyboard or controller 1130, which may optionally include a mouse, trackball, touch screen, voice-recognition device, or any other device that permits a system user to input information into and receive informaLion from Lhe elecLronic device 1100.To better illustrate the method and apparatuses disclosed herein, a non-limiting list of embodiments is provided here: Example 1 is an apparatus for a method for making a stacked semiconducLor device comprising: forming rims on a firsL die and a second die, the rims extending laterally away from the first and second dice; stacking the second die over the first die; and drilling one or more vias through the rims after stacking, the one or more vies extending between the first and second dice.In Example 2, Lhe sublecL maLLer of Example 1 can opLionally include filling Lhe one or more vias wiLh a conducLive maLerial Lo electrically interconnect the first and second dice.In Example 3, Lhe sublecL maiLer of any one of examples 1-2 can opiionally include wherein forming rims includes forming a dieleciric porLion over Lhe firsL die and Lhe second die, Lhe rims formed wiLh the dielectric portion.In Example 4, the subject matter of any one of examples 1-3 can optionally include wherein forming the dielectric portion includes molding resin around Lhe firsb die and Lhe second die, Lhe rims formed with the resin.In Example 5, the subject matter of any one of examples 1-4 can optionally include forming a first reconstituted dice panel including a firsL pluraliLy of dice molded in a panel frame, Lhe firsL pluraliLy of dice including Lhe firsL die, and forming a second reconsLiLuLed dice panel including a second plurality of dice molded in another panel frame, the second plurality of dice including the second die; and forming rims includes surrounding a periphery of the dice in the first and second reconstituted dice panels with a dielectric material.In Example 6, the subect matter of any one of examples 1-5 can optionally include sorting the dice in the first plurality of dice and second plurality of dice to ensure only operational dice are used to form the first and second reconstituted dice panels.In Example 7, the subject matter of any one of examples 1-6 can optionally include separating individual stacks of first and second adhered dice from the first and second reconstituted dice panels.In Example 8, the subect matter of any one of examples 1-7 can optionally include wherein drilling the one or more vias consists of one or more of laser drilling, mechanical drilling or chemical eLching.In Example 9, the subect matter of any one of examples 1-8 can optionally include wherein drilling the one or more vias is continuous through the first and second dice.In Example 10, Lhe sub lecL maLLer of any one of examples 1-9 can optionally include forming one or more redistribution layers of conductive traces over one or more of the first or second 
dice or the rims, the one or more vias in communication with the conductive traces at the rims.In Example 11, Lhe sub lecL maLLer of any one of examples 1-10 can opLionally include wherein sLacking Lhe firsL die over Lhe second die includes staggering the second die relative to the first die to expose aL leasL one bond pad of Lhe second die.In Example 12, Lhe sub lecL maLLer of any one of examples 1-11 can opLionally include wherein drilling Lhe one or more vias includes drilling at least one via through the rim of the first die, the at least one via extending to the at least one bond pad of the second die.In Example 13, Lhe subecL maLber of any one of examples 1-12 can optionally include A method for making a stacked semiconductor device comprising: sorting dice into a plurality of operational dice, the plurality of operational dice tested for operability; and forming at leasL a firsL reconsLiLuLed dice panel including: arranging Lhe sorLed pluraliLy of operaLional dice wiLhin a panel frame, and molding a resin around the plurality of operational dioe within the panel frame to form the first reconstituted dice panel, rims formed with the resin extend laterally from each of the plurality operational dice.In Example 14, the subject matter of any one of examples 1-13 can optionally include repeating arranging and molding to form a second reconstituted dice panel, rims extend laterally away from each die of the plurality of operational dice of the second reconstituted dice panel.In Example 15, the subject matter of any one of examples 1-14 can optionally include coupling the first reconstituted dice panel to the second reconstituted dice panel; and drilling one or more vias in the coupled first and second reconstituted dice panels, the one or more vias within the rims of the plurality of operational dice and the one or more vias extend between the first and second reconstituted dice panels.In Example 16, the subject matter of any one of examples 1-15 can optionally include wherein coupling the first reconstituted dice panel to the second reconstituted dice panel includes aligning the pluraliLies of operaLional dice of each of Lhe firsL and second In Example 17, the subect matter of any one of examples 1-16 can optionally include separating the first and second reconstituted dice panels into a plurality of multi-layered packages, each of the multi-layered packages including: aL leasL Lwo dice of Lhe pluraliLy of operaLional dice of Lhe firsL and second reconsLiLuLed dice panels, and at least one via of the one or more vias.In Example 18, Lhe sub lecL maLLer of any one of examples 1-17 oan opLionally include wherein drilling one or more vias In Lhe coupled flrsL and seoond reconsLiLuLed dice panels includes drilling one or more vias through the rims of the plurality of operational dice.In Example 19, the subject matter of any one of examples 1-18 can optionally include filling the one or more vias with a conductive maberial Lo elecbrically couple Lhe firsb and second reconsLibuLed dice panels.In Example 20, the subject matter of any one of examples 1-19 can optionally include wherein forming at least the first reconstituted dIce panel includes forming one or more redisLribuLion layers of conducLive Lraces over Lhe pluraliLy of operaLional dice and Lhe respective rims, the one or more vias in communication with the conductive traces at the rims.In Example 21, the subject matter of any one of examples 1-20 can optionally include wherein arranging the sorted plurality of operational dice within 
the panel frame includes arranging the sorted plurality of operational dice into one or more staggered stacks of dice within the panel frame, each of the one or more staggered stacks of dice including two or more dice and at least one of the two or more dice is staggered relative to an adjacent die.In Example 22, the subject matter of any one of examples 1-21 can optionally include wherein molding the resin around the plurality of operation dice includes molding the resin around each of the one or more staggered stacks of dice.In Example 23, the subject matter of any one of examples 1-22 can opLionally include a semiconducLor device comprising: a firsL die; a second die stacked over the first die; rims extending laterally away from each of the first and second dice; a first redistribution layer extending over the first die and the rim of the first die; and one or more vias exLending Lhrough aL leauL one of Lhe respecLive rims, Lhe one or more vias in communication with the first and second dice through the rims.In Example 24, the subject matter of any one of examples 1-23 can optionally include wherein the respective rims are molded resin rims molded around Lhe respecLive firsL and second dice, Lhe one or more vlas exLend Lhrough aL leasL one of Lhe molded resin rims.In Example 25, the subject matter of any one of examples 1-24 can opiionally include dielecLric poriions formed over each of Lhe firsi and second dice, ihe dielecLric poniions including Lhe one or more rims, and Lhe one or more vias exLend Lhrough Lhe dielecLric porLions.In Example 26, the subject matter of any one of examples 1-25 can optionally include wherein the one or more vias are laterally spaced from the first and second dice.In Example 27, ihe suhecL maLber of any one of examples 1-26 can optionally include a second redistribution layer extending over the second die and the rim of the second die.In Example 28, the subject matter of any one of examples 1-27 can opLionally include Lhe firsL and second redisLribuLion layers provide a fan-ouL configuraLion of conducLive Lraces exLendlng over and beyond respective footprints of the first and second dice, and the one or more vias are in communication with the first and second redistribution layers.In Example 29, the subject matter of any one of examples 1-27 can optionally include wherein the vias are drilled vias formed in at least one of the respective rims after stacking of the second die over the first die.In Example 30, the subject matter of any one of examples 1-29 can optionally include a plurality of dice including the first and second dice, rims extend laterally from each of the plurality of dice, the plurality of dice are in a stacked configuration, and the one or more vias extend through at least two of the respective rims of the plurality of dice.In Example 31, the subject matter of any one of examples 1-30 can opLionally include wherein Lhe second die is siaggered relaLive Lo Lhe first die, the second die include at least one exposed bond pad according to the staggering.In Example 32, the subject matter of any one of examples 1-31 can opLionally include wherein Lhe one or more vias exiend Lhrough Lhe rim of the first die to the at least one exposed bond pad of the second die.Each of these non-limiting examples can stand on its own, or can be combined in any permutation or combination with any one or more ofLhe oLher examples.The above deLailed descripLion includes references Lo Lhe accompanying drawings, which form a part of the detailed description.The 
drawings show, by way of illusLraLion, specific embodimenis in which Lhe disclosure can be praciiced. These embodimenLs are also referred Lo herein as "examples." Snob examples can include elemenLs in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also conbemplaLe examples using any combinaLion or permuLabion of Lhose elements shown or described (or one or more aspects thereof) , either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein.In Lhis dooumenL, Lhe Lerms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In this document, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim.Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.The above descripLion is inbended Lo be illusbrabive, and noL restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other.Other embodiments can be used, such as by one of ordinary skill in the arL upon reviewing Lhe above descripLion. The Absbracb is provided Lo comply with 37 C.F.R. §1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. Tt is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed DescripLion, various feaLures may be grouped LogeLher Lo sLreamline Lhe disclosure. This should noL be inLerpreLed as inLending LhaL an unclaimed disclosed feature is essential to any claim. Rather, invenLive sob jecL maLLer may lie in less Lhan all feaLures of a parLicular disclosed embcdimenL. Thus, Lhe following claims are hereby incorporaLed inLo Lhe DeLaifed DescripLion, wiLh each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the disclosure should be debermined wibh reference Lo Lhe appended claims, along wiLh Lhe full scope of equivalents to which such claims are entitled. |
<P>PROBLEM TO BE SOLVED: To provide a methods and systems to display platform graphics during initialization of a computer, including functions to interrupt initialization of an operating system and to update a video frame buffer with platform graphics data when the initialization of the operating system is interrupted, and to merge graphics generated by operating system initialization logic with platform graphics data. <P>SOLUTION: The methods and systems include virtualization methods and systems and system management mode methods and systems. <P>COPYRIGHT: (C)2011,JPO&INPIT |
A method comprising: initializing a computer system, including starting a video driver; initializing an operating system on the computer system; and initializing the operating system. Repetitively interrupting, updating the video frame buffer with platform video frame data when the operating system initialization is interrupted, and after updating the video frame buffer, Resuming initialization of the operating system.The method of claim 1, comprising: intercepting an operating system video service request and corresponding graphics associated with the initialization of the operating system; and the graphics as the platform video frame. Merging with data, and updating the video frame buffer comprises updating the video frame buffer with merged video frame data.The method of claim 1 including the step of stopping the display of the platform video frame data upon completion of the operating system initialization.The method of claim 1, wherein a virtual machine is started on the computer system, a platform initialization driver and an application are transferred to the virtual machine, and the operating system within the virtual machine. The step of repeatedly interrupting includes interrupting the virtual machine repeatedly, and updating the video frame buffer is performed when the virtual machine is interrupted. Updating the frame buffer.5. The method of claim 4, wherein initializing the operating system within the virtual machine comprises virtualizing the video frame buffer for the virtual machine; Interfacing with the virtualized video frame buffer, wherein updating the video frame buffer decodes the platform video frame data into a platform video decode buffer Merging the video frame data from the virtual video frame buffer and the platform video decode buffer, and updating the video frame buffer with the merged video frame data Including the step of:The method of claim 1, wherein said interrupting includes interrupting a normal processor environment in response to a timer interrupt, and said updating of said video frame buffer comprises said platform video frame data. Decoding, updating the video frame buffer in response to the timer interrupt while the normal processor environment is interrupted, and resuming the normal processor environment following the video frame buffer update Including methods.7. The method of claim 6, wherein in the normal processor environment, a corresponding graphics and video service request associated with initialization of the operating system is intercepted and a video service interrupt is issued in response to the video service request. Generating, interrupting the normal processor environment in response to the video service interrupt, and updating an operating system initialization video frame buffer with the graphics while the normal processor environment is suspended And resuming the normal processor environment following the update of the operating system initialization video frame buffer, wherein the updating of the video frame buffer comprises the operating system initialization video frame buffer. Bag Merging the graphics in the video with the decoded platform video frame data in the platform video frame decode buffer and updating the video frame buffer with the merged video frame data A method further comprising:7. The method of claim 6, wherein during initialization of the computer system, writing values to the video frame buffer location and eliminating writing platform video data to the location. 
Comparing the contents of the position with the value in response to the timer interrupt, and stopping the display of the platform video frame data when the contents of the position are different from the value. Method.A computer program product comprising a computer readable medium having stored therein computer program logic, the computer program logic comprising platform initialization logic for causing a processor to initialize a computer system; Operating system initialization logic for causing the processor to initialize an operating environment, and causing the processor to repeatedly interrupt the processing of the operating system initialization logic, so that the processing of the operating system initialization logic is performed. Platform graphics video display logic to update the video frame buffer with platform video frame data when interrupted. Yuta program product.10. The computer program product of claim 9, wherein the logic and the platform for intercepting the processor to intercept corresponding graphics and video service requests associated with the operating system initialization logic. Merge logic for causing the processor to merge video frame data, wherein the platform video display sends the video frame buffer to the processor with the merged video frame data. A computer program product that contains logic for updating.10. The computer program product of claim 9, further comprising platform video display stop logic for causing the processor to stop the platform video display logic after initialization of the operating environment. ·product.10. The computer program product of claim 9, wherein the platform video display logic causes the processor to host a virtual machine, transfer platform initialization drivers and applications to the virtual machine, and Virtual machine manager logic for launching the operating system initialization logic in the machine, and causing the processor to repeatedly suspend the virtual machine, and when the virtual machine is suspended, the platform video frame A computer program product comprising interrupt logic for updating said video frame buffer with data.13. The computer program product of claim 12, wherein the virtual machine manager logic includes logic for causing the virtual machine to virtualize the video frame buffer to the processor, the platform. The video display logic further comprises virtual display logic, the virtual display logic providing one or more video interfaces between the virtual machine and the virtualized video frame buffer to the processor; Video interface logic for providing and a platform video decoding logic for causing the processor to decode the platform video frame data into a platform video decoding buffer. And causing the processor to merge video frame data from the virtual frame buffer and the platform video decode buffer, and the video frame buffer is merged with the merged video frame data. A computer program product that includes buffer merge logic for updating.10. 
The computer program product of claim 9, wherein the platform video display logic includes system management mode logic stored in firmware, wherein the processor interrupts a normal processor environment and the normal The system management mode logic is configured to store a state of a processor environment and activate the system management mode logic in response to an interrupt, the system management mode logic comprising: timer logic for repeatedly generating a timer interrupt; Causing the processor to decode the platform video frame data, in response to the timer interrupt, to update the video frame buffer with the platform video frame data, and to update the video frame buffer Continued Computer program product comprising a timer interrupt handler logic for resuming the normal processor environment Te.15. The computer program product of claim 14, wherein the system management mode logic causes the processor to invoke a video dispatch service in the normal processor environment to cause the processor to request a video service request, and Video service dispatch logic for intercepting corresponding graphics associated with the operating system initialization logic and generating a video service interrupt in response to the video service request; and In response to a video service interrupt, causes the operating system initialization video frame buffer to be updated with the graphics, and following the update of the operating system initialization video frame buffer And a video service interrupt handler logic for resuming the normal processor environment, wherein the timer interrupt handler logic causes the processor to initialize the operating system in the operating system initialization video frame buffer. A computer program product comprising merge logic for merging graphics and the decoding platform video data and for updating the video frame buffer with merged video frame data.15. The computer program product of claim 14, wherein the system management mode logic further comprises platform video display stop logic, wherein the platform video display stop logic comprises the system management mode logic. When activated, logic for causing the processor to write a value to the location of the video frame buffer; logic to eliminate the processor from writing platform video data to the location; Logic for causing the processor to compare the contents of the position with the value in response to the timer interrupt, and if the contents of the position differ from the value, cause the processor to stop the platform video display logic Computer program product that includes a logic for.A system comprising: a computer system; platform initialization logic for initializing the computer system; and operating system initialization logic for initializing an operating environment of the computer system. Platform graphics video display logic for interrupting the operating system initialization logic and updating the video frame buffer with platform video frame data when the operating system initialization logic is interrupted. Including system.18. The system of claim 17, wherein an intercept logic for intercepting a video service request and corresponding graphics associated with initialization of the operating system, and the graphics to the platform video frame data. And merge logic for merging, wherein the platform graphics video display logic includes logic for updating the video frame buffer with merged video frame data.18. 
The system of claim 17, further comprising platform video display stop logic for stopping the platform video display logic after initialization of the operating environment.18. The system of claim 17, wherein the platform video display logic hosts a virtual machine, transfers platform initialization drivers and applications to the virtual machine, and the operating system initials within the virtual machine. Virtual machine manager logic for invoking activation logic and repeatedly interrupting the virtual machine and updating the video frame buffer with the platform video frame data when the virtual machine is interrupted System with interrupt logic.18. The system of claim 17, wherein the platform video display logic includes system management mode logic stored in firmware, the computer system interrupts a normal computer system environment and the normal The system management mode logic is configured to store a state of a computer system environment and activate the system management mode logic in response to an interrupt, the system management mode logic comprising: timer logic for repeatedly generating a timer interrupt; , Decoding the platform video frame data, updating the video frame buffer with the platform video frame data in response to the timer interrupt, and following the updating of the video frame buffer Normal System including a timer interrupt handler logic to resume the computer system. |
Method and system for displaying platform graphics during system initialization operationComputer system initialization following power-on reset is platform initialization, also referred to as basic input / output system (BIOS) booting, followed by operating system (OS) initialization, ie OS booting. including. BIOS booting can take about 2 to 3 seconds. OS booting can take about 10-12 seconds.Depending on the services provided during BIOS booting, the OS booting logic may draw relatively simple graphics on the display during OS booting. Graphics are typically associated with OS vendors because they are associated with OS logic.The present specification and claims describe a method and system for displaying platform graphics during operating system initialization. Platform graphics can also be displayed during BIOS booting or a portion thereof.The term “platform graphics” in this specification and claims refers to graphics other than OS initialization graphics generated by the operating system initialization logic.Platform graphics may include one or more of audio, video, still images, text, wallpaper, and skins. The terms “platform graphics” and “platform video” can be used interchangeably in this specification and in the claims.Platform graphics is not a limited enumeration, but is owned by a third party product or service that may include advertising graphics, graphics provided by a computer platform vendor or manufacturer, a computer system, or a computer system. Graphics associated with entities that exercise control over (including managed hosting providers) and graphics related to personal graphics.Platform graphics can be displayed instead of OS initialization graphics. Alternatively, OS initialization graphics can be merged with platform graphics.It is a figure which shows the display of the platform graphics during initialization of a computer system.FIG. 5 is another diagram illustrating display of platform graphics during initialization of the computer system.6 is a process flow diagram illustrating an exemplary method for displaying platform graphics during computer system initialization.6 is a process flowchart illustrating another exemplary method for displaying platform graphics during initialization of a computer system.FIG. 2 illustrates an example video merge environment.FIG. 4 illustrates another example video merge environment.1 is a block diagram illustrating an exemplary computer system.FIG. 7 is a block diagram illustrating another exemplary computer system 800.FIG. 6 illustrates an exemplary OS initialization environment for computer system 800.6 is a process flowchart illustrating an exemplary method for displaying platform video during initialization of computer system 800.FIG. 12 is a block diagram illustrating another exemplary computer system 1100.FIG. 6 is a process flowchart illustrating an exemplary method for displaying platform video during initialization of computer system 1100, superimposed on a diagram of the initialization environment of computer system 1100. FIG.In the drawings, the leftmost digit of a reference number identifies the drawing in which the reference number first appears.FIG. 1 is a diagram of a platform video display during a computer system initialization process 100. The initialization process 100 includes a platform initialization 102 and an OS initialization 104. The initialization process 100 is followed by the OS runtime 106. 
Platform initialization 102 may be followed by a power-on reset and may include one or more power-on self-diagnostics and system boot procedures shown herein as basic input / output system (BIOS) boot procedures 108. Platform initialization 102 may identify and initialize one or more device drivers corresponding to the physical resources of the computer system.OS initialization 104 may include identifying and installing operating system logic.Platform video may be displayed at 110 during OS initialization 104 and may be displayed during at least a portion of platform initialization 102.At least one of platform initialization 102 and OS initialization 104 may include an extensible firmware interface (EFI) or uniform EFI (IEFI), as described below with respect to FIG.FIG. 2 is a diagram of a video display during the computer system initialization process 200. Platform initialization 102 and OS initialization 104 include EFI module initialization. In the example of FIG. 2, the platform video 110 is launched in a driver execution environment (DXE) 202 and displayed in a boot device selection (BDS) environment 204 and a transient system load (TSL) environment 206.FIG. 3 is a process flowchart of an exemplary method 300 for displaying platform video during OS initialization.At 302, the computer system is reset by a power-on reset or other reset to activate booting of the computer system.At 304, one or more power-on self-diagnosis can be performed in the computer system.At 306, platform initialization, such as platform initialization 102 shown in at least one of FIGS. 1 and 2, is performed. Platform initialization may include the installation of one or more drivers, such as a video driver associated with the video display.In 308, the operating system is initialized (such as the OS initialization 104 shown in at least one of FIGS. 1 and 2).At 310, the platform video is displayed during the OS initialization platform at 308. Platform video can be launched during platform initialization at 306.In the example of FIG. 3, the display of the platform video at 310 is to determine whether the OS has been initialized (312) and the video frame in the platform video data if the OS has not been initialized. Includes buffer update (314).Updating the video frame buffer at 314 may include retrieving and decoding data corresponding to the video frame of the platform video. Updating the video frame buffer at 314 may include updating a portion of the video frame buffer, such as a subset of the platform video frames that is different from the previously displayed video frame.The display of platform video at 310 may include an OS boot interrupt at 308 for the video frame buffer update at 310. Interrupts can be made at regular intervals. Alternatively, or in addition, it can be done depending on one or more states (eg, depending on the processor idle time that can occur when the processor waits for a response from another device, such as a storage device).The platform video display at 310 can be done without significantly affecting the time to complete OS boot 308. The reason why the OS startup time is relatively long is that it includes a process of waiting for a relatively slow input / output channel such as storage device access.FIG. 4 illustrates an exemplary method 400 for displaying platform video during OS initialization (including repeated interrupts of OS initialization and updating the video frame buffer with platform frame data during the interrupts. 
) Process flowchart.At 402, the computer system is reset as described above with respect to 302.At 404, one or more power-on self tests can be performed in the computer system as described above with respect to 304.At 406, the physical resources of the computer system are initialized as described above with respect to 306.At 408, the platform video service is activated. Invoking the platform video service may include loading instructions into the memory to enable the processor to update the video frame buffer with platform video frame data due to subsequent events. Subsequent events may include periodic timer events, and the platform video service may include periodically interrupting the operating system startup and starting a timer to update the video frame buffer during the interrupt. .At 410, OS initialization begins.At 412, the operating system initialization is interrupted in response to the event and the platform video service is activated.If the operating environment initialization is not complete at 414, the platform video frame data is decoded at 416 and the video frame buffer is updated at 418.If the operating environment initialization is complete at 414, the platform video service is stopped at 420 and the runtime or operating environment is entered at 422.Displaying platform video may include merging OS initialization graphics and platform video and updating a video frame buffer with merged video frame data.Merging may include superimposing text from the OS initialization logic on the platform video. FIG. 5 is a diagram illustrating an exemplary video merge environment 500 that includes an OS initialization video frame buffer 502, a platform video decode buffer 504, and a video frame buffer 506. Text 508 from operating system initialization graphics 510 is stored in operating system video frame buffer 502. Platform video frame image 512 from platform video frame data 514 is stored in platform video frame buffer 504. Text 508 and image 512 are merged and stored in video frame buffer 506 for display.Merging may include superimposing one or more graphics windows on one or more other graphics windows, such as picture-in-picture. This displays, for example, user selectable options (such as accessing the BIOS setup configuration or proceeding to OS initialization) during platform initialization and / or OS initialization. Can be useful for. FIG. 6 is a diagram illustrating an exemplary video merge environment 600 (including OS initialization video frame buffer 602, multiple platform video frame buffers 604-606, and video frame buffer 608). is there. In the example of FIG. 6, each of the platform videos 610-612 corresponding to the platform video frame buffers 604-606 is merged within the OS initialization graphics from the OS initialization video frame buffer 602, The frame buffer 608 is updated with the corresponding merged graphics.One or more configurations described herein may be implemented with logic (which may include at least one of integrated circuit logic and computer program product logic).FIG. 7 illustrates an exemplary computer program that includes one or more computer instruction processing units shown herein as processor 702 to execute computer program product logic, also known as instructions, code, and software. 2 is a block diagram of system 700. 
FIG.The computer system 700 may be configured to execute at least one of computer program product logic and integrated circuit logic stored on a computer readable medium for causing the processor 702 to perform one or more functions. Includes logic 704 that may include one.In the example of FIG. 7, logic 704 may include basic input / output system (BIOS) logic and causes processor 702 to initialize components of computer system 700 that may include extensible firmware interface (EFI) logic. Platform initialization logic 710 for including.The logic 704 further includes operating system (OS) initialization logic 714 for causing the processor 702 to launch one or more operating environments. The OS initialization logic 714 may include one or more of boot manager logic, OS loader logic, and OS logic.The logic 714 further includes platform video display logic 712 for causing the processor 702 to display platform video during OS logic 714 startup. Platform video display logic 712 may include logic for causing processor 702 to display platform video during activation of at least a portion of platform activation logic 710. Platform video display logic 712, or a portion thereof, may be implemented in platform initialization logic 710.The computer system 700 further includes a memory / storage device 706 that stores data 708 that is intended for use by the processor 702 in the execution of the logic 704 or generated by the processor 702 in response to the execution of the logic 704. Including. In the example of FIG. 7, data 708 includes platform video frame data 716, platform video frame decode buffer 718, OS initialization video frame buffer 720, and video frame buffer 722. Video frame buffer 722 may represent the final video frame buffer from which processor 702 sends video frame data to display 724.Memory / storage device 706 may include a computer readable medium on which logic 704 is stored.Computer system 700 may include a network interface device or card (NIC) 726 for interfacing with one or more communication networks. The computer system 700 may include one or more other interfaces (such as a universal serial bus (USB) interface) for interfacing with one or more other devices.Computer system 700 may include a communication infrastructure 728 for communicating between processor 702, memory / storage 706, display 724, NIC 726, and other interface devices.Platform video frame data 716 may be received and / or updated over the network (such as via NIC 726) and received and / or secured to OS initialization logic 714 or independent of OS initialization logic 714. / Or may be updated.Platform video frame data 716 can be stored in one or more of firmware, flash, and hard disk storage.Video frame data 716 is received and / or updated and stored in a hidden partition, securely to OS initialization logic 714, such as by virtualization logic or system management mode logic (examples described below). Can be done. For example, the virtual machine manager (VMM) may obtain a hidden disk partition and ensure that the platform frame data 716 is available even if the operating system logic is reinstalled. Can intercept integrated drive electronics (IDE) controller I / O access. 
The hidden partition can be obtained in the advanced host controller interface (AHCI) mode of the computer system 700.Platform video display logic 712 may include virtualization logic for causing processor 702 to virtualize at least one of a video frame buffer 722 and a video interface to an operating system initialization environment. Exemplary virtualization methods and systems are described with respect to FIGS. 8, 9, and 10. FIG.FIG. 8 is a block diagram of an exemplary computer system 800. The configuration of the computer system 800, similar to the configuration described above with respect to FIG. 7, has the same reference numerals in the two least significant digits. Computer system 800 is described below with respect to FIGS. FIG. 9 is a diagram illustrating an exemplary OS initialization environment 900 of the computer system 800. FIG. 10 is a process flowchart of an exemplary method of OS initialization for computer system 800.In FIG. 8, logic 804 includes VMM logic 830 for causing processor 802 to generate virtual machine manager (VMM) 902 in FIG. 9 to host OS initialization virtual machine (VM) 904. The VMM logic 830 includes logic for causing the VM 904 as a driver and application 906 to transfer to the processor 802 the driver and application activated during platform activation. The VMM logic 830 further includes logic for causing the processor 802 to initialize the OS initialization logic 814 within the VM 904 as indicated by OS initialization 908. VMM logic 830 further includes video frame buffer virtualization logic 834 for causing processor 802 to initialize virtual video frame buffer 820 in FIG.Logic 804 is video interface logic 836 for causing processor 802 to initialize one or more video interfaces 910 in FIG. 9 to interface between VM 904 and virtual video frame buffer 820. Virtual display logic 832 is further included. The virtual display interface 910 includes a graphics output protocol interface 912 for interfacing with drivers and applications 906, a video frame buffer interface 914 for interfacing with OS initialization 908, and legacy OS loader logic to legacy. One or more of legacy type services 916 for receiving type video service interrupts (such as Int10 video service interrupts) may be included.Virtual display logic 832 causes processor 802 to decode platform video frame data 816 and store the decoded platform video frame data in platform video frame decode buffer 818. Further included is platform video decode logic 838.Virtual display logic 832 causes processor 802 to merge decoded platform video frame data in decode buffer 818 and OS initialization video frame data in virtual video frame buffer 820; Further included is buffer merge logic 840 for updating the video frame buffer 822 with the merged video frame data.In the example of FIG. 9, virtual interface 910, decode logic 838, and buffer merge logic 840 are shown in virtual display environment 918. Virtual display environment 918, or a portion thereof, may reside in a VM hosted by VMM 902, such as VM 904, or may represent the VM. Alternatively or additionally, the virtual display environment 918, or portion thereof, can be implemented in the VMM 902.VMM logic 830 causes timer 920 to cause processor 802 to maintain timer 920, periodically cause processor 802 to exit VM 904, and activate decode logic 840 and merge logic 838, as described below with respect to FIG. Logic 842 is further included. 
With the exit of the VM 904, the state value corresponding to the VM 904 can be saved to reenter the VM 904.In FIG. 10, at 1002, the computer system 800 performs one or more power-on self-tests following a platform reset.In 1004, platform initialization is performed. Platform initialization may be performed in accordance with platform initialization logic 810 of FIG.At 1006, virtual display logic 832 is invoked to cause processor 802 to initialize virtual interface 910 and platform video decode buffer 818.At 1008, the VMM logic 830 is activated to cause the processor 802 to activate the VMM 902 and VM 904, transfer the driver and application 906 to the VM 904, and activate the timer 920.At 1010, initialization of the computer system 800 is transferred to the VM 904. This may include launching the remaining logic in platform initialization logic 810 at 1012, and launching OS launch logic 814 in VM 904 at 1014.During initialization at 1010, if a video service is requested from VM 904, virtual video frame buffer 820 is updated at 1016 using one or more of virtual display interfaces 910. This may include receiving or intercepting a video service request at 1018, processing the request at a virtual interface at 1020, and updating the virtual video frame buffer 820 at 1022. At 1024, the process returns to 1010 and continues OS initialization within the VM 904.Further, during initialization at 1010, when timer 920 expires, at 1025, video frame buffer 822 is updated with platform video frame data 816. This includes determining at 1026 that the timer has expired, exiting the VM 904 at 1028, decoding the platform video frame data 816 at 1030, and decoding the decoded video frame data Storing in platform video decode buffer 818, merging the contents of platform video frame buffer 818 and virtual video frame buffer 820 at 1032, and video frame buffer 822 at 1034 And updating with the merged data. At 1036, the process returns to 1010 and continues initialization within the VM 904.At 1010, once the operating system initialization is complete, processing proceeds to a runtime environment at 1038, at which display of platform video data 816 can be stopped. In the example of FIG. 10, this is shown as detecting a video frame buffer page fault at 1040 and correcting the address to the virtual video frame buffer 820 at 1042.Multiple instances of one or more virtual display logic 840 and VMM logic 830, or portions thereof, may be activated (eg, to provide picture-in-picture features described above with respect to the figures).If the platform initialization logic 810 includes unified EFI (UEFI) logic, the platform initialization logic 810 may include logic to manage n times {Boot00X, Sound_File, Video_File, Background_Image}.The environment 900 may include one or more OS performance drivers in a wrapper or virtualization container hosted by the VMM 902. This may allow the use and scaling of an encoder / decoder to provide additional multimedia capabilities.Referring once again to FIG. 7, logic may be included to cause the processor 702 to provide video services, including platform video services outside the normal processor environment, such as in a system management mode (SMM) of operation.As described below, the SMM can provide video services even after firmware-based platform initialization moves to OS initialization. 
In order to prevent the video device used for platform video from being accessed by OS initialization, platform initialization is referred to as normal video represented as platform video service in this specification and claims. • may include replacing the service with an SMM video service. For example, a simulated firmware-based video driver can provide an INT 10H service utilized by a normal OS loader system to transfer video service calls to the SMM.SMM is activated by a system management interrupt (SMI). With SMI, a processor can be stored in system firmware, is not available to application software or operating system software, and / or corresponds separately from application software or operating system software Run the SMI handler code.In a typical computer system, the SMI handler can process up to 4 Gbytes of memory and can execute all or nearly all of the applicable system and input / output (I / O) instructions. The video frame buffer may be mapped to one or more locations that include an A0000-BFFFF segment in VGA mode and a 32-bit physical address in SVGA mode. The SMM can directly access the SVGA video memory space.The computer system includes an I / O controller having a system management device for enabling and controlling SMI resources including an SMM software timer for generating the SMI represented in this specification and claims as a timer SMI. A hub may be included. When the SMM software timer is enabled, a timer SMI is periodically generated. SMM software timers, for example, 0.9 milliseconds (ms) to 68 ms (including 0.9 ms to 2.1 ms range, 12 ms to 20 ms range, 28 ms to 36 ms range, and 60 ms to 68 ms range) The timer SMI can be programmed to generate at intervals within the range of Depending on the timer SMI, the corresponding timer SMI handler in the SMM can be activated even after the firmware moves to OS initialization.Depending on the available hardware devices of the computer system, the SMI-based video driver can directly access all or nearly all of the video frame buffer even after platform initialization passes control to OS initialization. can do.Exemplary SMM methods and systems are described below with respect to FIGS.FIG. 11 is a block diagram of an exemplary computer system 1100. The configuration of the computer system 1100, similar to the configuration described above with respect to FIG.Computer system 1100 is described with respect to FIG. FIG. 12 is a process flow diagram of an exemplary method 1200 for displaying platform video during initialization of a computer system 1100, superimposed on a diagram of an initialization environment that includes a normal operating mode 1250 and an SMM 1254. It is.In FIG. 11, the logic 1104 includes SMM video logic 1130 for causing the processor 1102 to display platform video frame data 1116 from within the SMM 1254 during initialization of the computer system 1100.SMM video logic 1130 includes SMI handler logic 1132 for causing processor 1102 to perform the functions in SMM 1254 in response to an interrupt. In the example of FIG. 11, the SMI header logic 1132 is also referred to herein as the Advanced Process Management (APM) SMI handler logic 1134 for causing the processor 1102 to install the APM SMI handler 1255 of FIG. 
Contains SMI handler logic 1134 representing.The SMI handler logic 1132 further includes timer SMI handler logic 1136 for causing the processor 1102 to install the timer SMI handler 1236 of FIG.SMI video logic 1130 further includes video service dispatch logic 1138 for causing processor 1102 to install video service dispatcher 1258 of FIG.The SMI video logic 1130 further includes SMM timer logic 1140 for causing the processor 1102 to install a software-based SMM timer 1258 in the SMM 1204, described below.Platform initialization logic 1110 includes SMM video load logic 1142 for causing processor 1102 to load or install SMM video logic 1130.Platform initialization logic 1110 includes at least one of basic input / output system (BIOS) logic and extensible firmware interface (EFI) logic (shown as BIOS / EFI logic 1144, including platform video driver logic 1146). One is further included.Platform initialization logic 1110 further includes APM SMI generation logic 1148 for generating APM SMI 1260 in FIG. 12 to invoke APM SMI handler 1254 during platform initialization.An exemplary initialization of computer system 1100 will now be described with reference to method 1200.At 1202, platform initialization occurs following a system reset. In the example of FIG. 12, platform initialization includes installing SMM video logic 1130 at 1204 in response to SMM video load logic 1146, activating platform video driver logic 1134 at 1206, and , Generating APM SMI 1260 at 1208 in response to MP SMI generation logic 1148.In response to the APM SMI 1260, the processor 1102 interrupts the normal processing mode 1250 and activates the APM SMI handler 1254 in the SMM 1252. This may include saving the normal processing mode 1250 state in the system state map.At 1210, the timer 1258 is started. At 1212, video device information corresponding to display 1124 is retrieved. At 1214, the reserved pixel can be set to a predetermined value in the video frame buffer 1122. SMM video logic 1130 is configured to eliminate the writing of platform video frame data 1116 to reserved pixels. The reserved pixels are then used to determine if the OS video driver is active, as described below with respect to 1232.At 1216, the platform video decode buffer 1118 and OS initialization buffer 1120 are initialized. At 1218, processing returns to normal processing mode 1250. This may include retrieving a state value corresponding to the normal processing mode 1250 from the state map.Returning to the normal processing mode 1250, platform initialization may resume at 1202, and then proceed to OS initialization at 1220, or may proceed directly to OS initialization at 1220.During OS initialization at 1220, a video service request 1262 is intercepted by the video service dispatcher 1258, which interrupts the processing mode 1250 and is described herein as part of the APM SMI handler 1254. And the processor 1102 activates the video service SMI handler indicated in the claims.At 1222, the OS initialization video frame buffer 1120 is updated in response to the video service request 1262 under the control of the APM SMI handler 1254. At 1224, processing returns to processing mode 1250.Further, when the timer 1258 expires during OS initialization 1220, a timer SMI 1266 is generated at 1230, the normal processing mode 1250 is interrupted, and the timer SMI handler 1256 in the SMM 1252 is activated by the processor 1102. At 1232, the reserved pixel in video frame buffer 1122 is compared to the value written at 1214. 
If the reserved pixel has not changed, platform video frame data 1116 is decoded 1234 and stored in platform video decode buffer 1118. At 1236, the contents of the platform video decode buffer 1118 and the OS initialization video frame buffer 1120 are merged. At 1238, the video frame buffer 1122 is updated with the merged video frame data. At 1240, timer 1258 can be reset. Alternatively, timer 1258 can cycle continuously.In 1242, the process returns to the OS initialization 1220 in the normal process mode 1250. When the OS initialization at 1220 is complete, processing proceeds to the runtime environment at 1246. When the video driver associated with the runtime environment is activated, the runtime environment may overwrite reserved pixels in the video frame buffer 1122. Returning to 1232, the platform video display can be stopped if the reserved pixel is changed. Stopping may include stopping timer 1258 and replacing video service dispatcher 1258 with a platform video service.One or more configurations described above with respect to virtualization and system management modes can be implemented in various combinations with each other.The methods and systems are described in this specification and in the claims using functions, functional units that represent configurations, and their relationships. At least a part of the boundaries of the functional structural units is arbitrarily defined in the present specification and claims for the convenience of explanation. Different boundaries can be defined as long as the specified functions and their relationships are performed appropriately.Those skilled in the art will recognize that the functional units described above can be implemented by discrete components, application specific integrated circuits, processors executing appropriate software, and combinations thereof. Let's go.700 Computer System 710 Platform Initialization Logic 712 Platform Video Display Logic 714 OS Initialization Logic |
The present invention provides a solid-state imager device (20) having a patterned buried doped region (33) in the substrate (30) , preferably an n+ doped region, that collects excess electrons and thus reduces cross-talk, minimizes blooming of excess electrons, and reduces dark current in a solid-state imager device, and a corresponding fabrication method. |
CLAIMS What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. An imager comprising: a substrate having a first conductivity type with a first dopant concentration level; an epitaxial layer having a first conductivity type with a second dopant concentration level formed on said substrate; a doped region having a second conductivity type formed in at least a part of said epitaxial layer; and an array of pixel sensor cells comprising a plurality of pixel cells formed at a first surface of said epitaxial layer. 2. The imager according to claim 1, wherein said substrate is doped to a P+ conductivity type. 3. The imager according to claim 1, wherein said epitaxial layer is doped to a P- conductivity type. 4. The imager according to claim 3, wherein said doped region is doped to an N+ conductivity type. 5. The imager according to claim 1, wherein said doped region is formed under said array in the entirety of said epitaxial layer. 6. The imager according to claim 1, wherein said imager further includes isolation regions separating said plurality of pixel cells in said array of pixel cells and said doped region is formed as a grid under said isolation regions. 7. The imager according to claim 4, wherein said doped region has a dopant concentration of from about 1 x 10<10> ions/cm<2> to about 1 x 10<18> ions/cm<2>. 8. The imager according to claim 4, wherein said doped region has a dopant concentration of from about 1 x 10<13> ions/cm<2> to about 1 x 10<15> ions/cm<2>. 9. The imager according to claim 1, wherein said imager is a CMOS imager. 10. The imager according to claim 1, wherein said imager is a CCD imager. 11. An imager comprising: a substrate having a first conductivity type with a first dopant concentration level; a first epitaxial layer having a first conductivity type with a second dopant concentration level formed on said substrate; a doped region having a second conductivity type formed in at least a part of said first epitaxial layer; a second epitaxial layer having a first conductivity type with a second dopant concentration level formed over said first epitaxial layer; and an array of pixel sensor cells comprising a plurality of pixel cells formed at a first surface of said second epitaxial layer. 12. The imager according to claim 11, wherein said substrate is doped to a P+ conductivity type. 13. The imager according to claim 11, wherein said first and second epitaxial layers are both doped to a P- conductivity type. 14. The imager according to claim 11, wherein said doped region is doped to an N+ conductivity type. 15. The imager according to claim 11, wherein said doped region is formed in the entirety of said first epitaxial layer. 16. The imager according to claim 14, wherein said doped region has a dopant concentration of from about 1 x 10<10> ions/cm<2> to about 1 x 10<18> ions/cm<2>. 17. The imager according to claim 14, wherein said doped region has a dopant concentration of from about 1 x 10<13> ions/cm<2> to about 1 x 10<15> ions/cm<2>. 18. The imager according to claim 11, wherein said doped region is formed under said array in the entirety of said epitaxial layer. 19. The imager according to claim 11, wherein said imager further includes isolation regions separating said plurality of pixel cells in said array of pixel cells and said doped region is formed as a grid under said isolation regions. 20. The imager according to claim 11, wherein said imager is a CMOS imager. 21. 
The imager according to claim 11, wherein said imager is a CCD imager. 22. An imager comprising: a substrate having a first conductivity type with a first dopant concentration level; a doped region having a second conductivity type formed in at least a part of said substrate layer; an epitaxial layer having a first conductivity type with a second dopant concentration level formed over said substrate; and an array of pixel sensor cells comprising a plurality of pixel cells formed at a first surface of said epitaxial layer. 23. The imager according to claim 22, wherein said substrate and said epitaxial layer are both doped to a P- conductivity type. 24. The imager according to claim 22, wherein said doped region is doped to an N+ conductivity type. 25. The imager according to claim 22, wherein said doped region is formed in the entirety of said substrate. 26. The imager according to claim 22, wherein said imager further includes isolation regions separating said plurality of pixel cells in said array of pixel cells and said doped region is formed as a grid under said isolation regions. 27. The imager according to claim 24, wherein said doped region has a dopant concentration of from about 1 x 10<13> ions/cm<2> to about 1 x 10<15> ions/cm<2>. 28. The imager according to claim 22, wherein said imager is a CMOS imager. 29. The imager according to claim 22, wherein said imager is a CCD imager. 30. A processor system comprising: a substrate having a first conductivity type with a first dopant concentration level; an epitaxial layer having a first conductivity type with a second dopant concentration level formed on said substrate; a doped region having a second conductivity type formed in at least a part of said epitaxial layer; an array of pixel sensor cells comprising a plurality of pixel cells formed at a first surface of said epitaxial layer; and a processor for receiving and processing data representing the image. 31. The processor system according to claim 30, wherein said arrays and said processor are formed on a single substrate. 32. The processor system according to claim 30, wherein said substrate is doped to a P+ conductivity type. 33. The processor system according to claim 30, wherein said epitaxial layer is doped to a P- conductivity type. 34. The processor system according to claim 33, wherein said doped region is doped to an N+ conductivity type. 35. The processor system according to claim 30, wherein said doped region is formed in the entirety of said epitaxial layer. 36. The processor system according to claim 34, wherein said doped region has a dopant concentration of from about 1 x 10<13> ions/cm<2> to about 1 x 10<15> ions/cm<2>. 37. The processor system according to claim 30, wherein said imager further includes isolation regions separating said plurality of pixel cells in said array of pixel cells and said doped region is formed as a grid under said isolation regions. 38. 
A processor system comprising: a substrate having a first conductivity type with a first dopant concentration level; a first epitaxial layer having a first conductivity type with a second dopant concentration level formed on said substrate; a doped region having a second conductivity type formed in at least a part of said first epitaxial layer; a second epitaxial layer having a first conductivity with a second dopant concentration level type formed over said first epitaxial layer; an array of pixel sensor cells comprising a plurality of pixel cells formed at a first surface of said second epitaxial layer; and a processor for receiving and processing data representing the image. 39. The processor system according to claim 38, wherein said arrays and said processor are formed on a single substrate. 40. The processor system according to claim 38, wherein said substrate is doped to a P+ conductivity type. 41. The processor system according to claim 38, wherein said first and second epitaxial layers are both doped to a P- conductivity type. 42. The processor system according to claim 38, wherein said doped region is doped to an N+ conductivity type. 43. The processor system according to claim 38, wherein said doped region is formed in the entirety of said first epitaxial layer. 44. The processor system according to claim 38, wherein said imager further includes isolation regions separating said plurality of pixel cells in said array of pixel cells and said doped region is formed as a grid under said isolation regions. 45. The processor system according to claim 42, wherein said doped region has a dopant concentration of from about 1 x 10<13> ions/cm<2> to about 1 x 10<15> ions/cm<2>. 46. A method of forming an imaging device, said method comprising: providing a substrate having a first conductivity type with a first dopant concentration level; forming a first epitaxial layer having a first [Conductivity type with a second dopant concentration level over said substrate; forming a doped region having a second conductivity type in said first epitaxial layer; forming a second epitaxial layer having a first conductivity type with a second dopant concentration level over said first epitaxial layer; and forming an array of pixel sensor cells formed at an upper surface of said second epitaxial layer. 47. The method according to claim 46, wherein said doped region is N+ doped formed by ion implantation. 48. The method according to claim 47, wherein said doped region is doped with arsenic. 49. The method according to claim 46, wherein said substrate has a P+ conductivity type. 50. The method according to claim 46, wherein said first and second epitaxial layer both have a P- conductivity type. 51. The method according to claim 50, wherein said second epitaxial layer has a thickness of from about 0.5 [mu]m to about 20.0 [mu]m. 52. The method according to claim 46, wherein said second epitaxial layer is doped with boron. 53. A method of forming an imaging device, said method comprising: providing a substrate having a first conductivity type with a first dopant concentration level; forming a doped region having a second conductivity type in said substrate; forming an epitaxial layer having a first conductivity type with a second dopant concentration level over said substrate; and forming an array of pixel sensor cells formed at an upper surface of said epitaxial layer. 54. The method according to claim 53, wherein said doped region is N+ doped formed by ion implantation. 55. 
The method according to claim 54, wherein said doped region is doped with arsenic. 56. The method according to claim 53, wherein said substrate and said epitaxial layer both have a P- conductivity type. 57. The method according to claim 53, wherein said epitaxial layer has a thickness of from about 0.5 [mu]m to about 20.0 [mu]m. 58. The method according to claim 57, wherein said epitaxial layer is doped with boron. |
BtFRIED DOPED REGION FOR VERTICAL ANTI-BLOOMING CONTROL AND CROSS-TALK REDUCTION FOR IMAGERSFIELD OF THE INVENTION[0001] The present invention relates generally to imaging devices and fabrication methods for forming an imaging pixel cell.BACKGROUND OF THE INVENTION[0002] Solid state imager devices which include charge-coupled-devices (CCD) and complementary metal oxide semiconductor (CMOS), have commonly been used in photo-imaging applications.[0003] Imager devices typically contain thousands of pixel cells in a pixel array on a single chip. Pixel cells convert light into an electrical signal that can then be stored and recalled by an electrical device such as, for example, a processor. The electrical signals that are stored may be recalled to produce an image on, for example, a computer screen or a printable media.[0004] Exemplary CMOS imaging circuits, processing steps thereof, and detailed descriptions of the functions of various CMOS elements of an imaging circuit are described, for example, in U.S. Patent No. 6,140,630, U.S. Patent No. 6,376,868, U.S. Patent No. 6,310,366, U.S. Patent No. 6,326,652, U.S. Patent No. 6,204,524, and U.S. Patent No. 6,333,205, each of which is assigned to Micron Technology, Inc. The disclosures of each of the forgoing patents are hereby incorporated by reference in their entirety.[0005] Solid state imager devices typically have an array of pixel cells containing photosensors, where each pixel cell produces a signal corresponding to the intensity of light impinging on that element when an image is focused on the array. These signals may then be used, for example, to display a corresponding image on a monitor or otherwise used to provide information about the optical image. The photosensors are typically photogates, phototransistors, photoconductors or photodiodes, where the conductivity of the photosensor corresponds to the intensity of light impinging on the photosensor. The magnitude of the signal produced by each pixel cell, therefore, is proportional to the amount of light impinging on the photosensor.[0006] CMOS active pixel sensor (APS) solid state imaging devices are described, for example, in the foregoing patents. These imaging devices include an array of pixel cells, arranged in rows and columns, that convert light energy into electric signals. Each pixel includes a photodetector and one or more active transistors. The transistors typically provide amplification, read-out control and reset control, in addition to producing the electric signal output from the cell.[0007] While CCD technology has a widespread use, CMOS imagers are being increasingly used as low cost imaging devices. A CMOS imager circuit includes a focal plane array of pixel cells, each one of the cells including a photoconversion device, for example, a photogate, photoconductor, phototransistor, or a photodiode for accumulating photo-generated charge in a portion of the substrate. A readout circuit is connected to each pixel cell and includes at least an output transistor, which receives photogenerated charges from a doped diffusion region and produces an output signal which is periodically read out through a pixel access transistor. The imager may optionally include a transistor for transferring charge from the photoconversion device to the diffusion region or the diffusion region may be directly connected to or be part of the photoconversion device. 
A transistor is also typically provided for resetting the diffusion region to a predetermined charge level before it receives the photoconverted charges.[0008] In a CMOS imager, the active elements of a pixel cell perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) transfer of charge to a floating diffusion region accompanied by charge amplification; (4) resetting the floating diffusion region to a known state; (5) selection of a pixel cell for readout; and (6) output and amplification of a signal representing the pixel cell charge. Photo-charge may be amplified when it moves from the initial charge accumulation region to the floating diffusion region. The charge at the floating diffusion region is typically converted to a pixel output voltage by a source follower output transistor.[0009] To detect color, the spectral components of incident light must be separated and collected. An absorptive color filter array (CFA) on top of an imager chip may be used for color detection in a solid state image sensor, for example, a CCD or CMOS imager. In a typical CFA layout, a color filter for each individual photosensor of the imager allows only a narrow spectral band (red, green, or blue) to pass, and absorbs the rest of the photo energy.[0010] Each pixel cell receives light that may have been focused through one or more micro-lenses. Micro-lenses on a CMOS imager help increase optical efficiency and reduce optical cross-talk between pixel cells. A reduction of the size of the pixel cells allows for a greater number of pixel cells to be arranged in a specific pixel cell array, thereby increasing the resolution of the array. In one process for forming micro- lenses, the radius of each micro-lens is correlated to the size of the pixel cell. Thus, as the pixel cells decrease in size, the radius of each micro-lens also decreases. [0011] Electrical cross-talk is also a problem with imaging devices. Electrical cross-talk occurs when photo-generated charge from a pixel is collected by an adjacent or neighboring pixel. For example, an electron generated in the silicon under the red pixel, rather than diffusing up to be collected by the red photodiode, may have a significant lateral component, and be collected by an adjacent green photodiode.[0012] Cross-talk can bring about undesirable results in the images that are produced. The undesirable results can become more pronounced as the density of pixel cells in imager arrays increases, and as pixel cell size correspondingly decreases. The shrinking pixel cell size also make it increasingly difficult to focus incoming light on the photosensor of each pixel cell, aggravating' cross-talk.[0013] Cross-talk can manifest as a blurring or reduction in contrast in images produced by a solid-state imager. In essence, cross-talk in an image sensor array degrades the spatial resolution, reduces overall sensitivity, causes color mixing, and leads to image noise after color correction. As noted above, image degradation can become more pronounced as pixel cell and device sizes are reduced.[0014] Another problem in conventional imager devices is blooming or saturation. 
Blooming occurs when too many photons strike a particular pixel cell and the generated electrons overflow into adjacent pixel cells, artificially increasing the electron counts of those pixel cells.[0015] Another common problem associated with conventional imager pixel cells is dark current, that is, current generated as a photo-conversion device signal in the absence of light. Dark current may be caused by many different factors, including: photosensor junction leakage, leakage along isolation edges, transistor sub-threshold leakage, drain induced barrier lowering leakage, gate induced drain leakage, trap assisted tunneling, and pixel cell fabrication defects. [0016] There is needed, therefore, an imager device having reduced cross-talk, reduced blooming and decreased dark current. Also needed is a simple method of fabricating and operating such a pixel.BRIEF SUMMARY OF THE INVENTION[0017] The present invention provides an imager method and apparatus for reducing electrical color cross-talk. The invention also reduces blooming of excess electrons and reduces dark current.[0018] The present invention provides an imager device having a buried doped region in the substrate, preferably an n+ doped region, that collects excess electrons and thus reduces cross-talk, reduces blooming of excess electrons and reduces dark current.[0019] Additional advantages and features of the present invention will be apparent from the following detailed description and drawings which illustrate preferred embodiments of the invention.BRIEF DESCRIPTION OF THE DRAWINGS[0020] FIG. 1 illustrates a schematic cross-sectional view of an imager pixel cell having a buried doped region constructed in accordance with an exemplary embodiment of the present invention.[0021] FIG. 2 is a representative diagram of the imager pixel cell of FIG. 1.[0022] FIG. 3 illustrates a schematic cross-sectional view of an imager pixel cell having a buried doped region under the isolation regions constructed in accordance with an exemplary embodiment of the present invention. [0023] FIG. 4 illustrates a cross-sectional view of a semiconductor wafer undergoing the process of forming a buried doped region according to an exemplary embodiment of the present invention.[0024] FIG. 5 illustrates the semiconductor wafer of FIG. 4 at a stage of processing subsequent to that shown in FIG.4.[0025] FIG. 6 illustrates the semiconductor wafer of FIG. 4 at a stage of processing subsequent to that shown in FIG. 5.[0026] FIG. 7 illustrates the semiconductor wafer of FIG. 4 at a stage of processing subsequent to that shown in FIG. 6.[0027] FIG. 8 illustrates the semiconductor wafer of FIG. 4 at a stage of processing subsequent to that shown in FIG. 7.[0028] FIG. 9 illustrates the semiconductor wafer of FIG. 4 at a stage of processing subsequent to that shown in FIG. 8.[0029] FIG. 10 shows an imager constructed in accordance with an embodiment of the invention.[0030] FIG. 11 is an illustration of an imaging system having an imager according to an exemplary embodiment of the present invention.DETAILED DESCRIPTION OF THE INVENTION[0031] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof and show by way of illustration specific embodiments in which the invention may be practiced. 
These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized, and that structural, logical, and electrical changes may be made without departing from the spirit and scope of the present invention. The progression of processing steps described is exemplary of embodiments of the invention; however, the sequence of steps is not limited to that set forth herein and may be changed as is known in the art, with the exception of steps necessarily occurring in a certain order.[0032] The term "substrate" is to be understood to include any semiconductor- based structure. The semiconductor structure should be understood to include silicon, silicon-on-insulator (SOI), silicon-on-sapphire (SOS), silicon-germanium, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductors and semiconductor structures. When reference is made to the substrate in the following description, previous process steps may have been utilized to form regions or junctions in or over the base semiconductor or foundation. The semiconductor also need not be formed of silicon, but may be formed of other semiconductor materials.[0033] The terms "pixel" and "pixel cells" as used herein, refer to a photo- element unit cell containing at least one photosensor and additional structure for converting photons to an electrical signal. For purposes of illustration, a single representative pixel cells and its manner of formation are illustrated in the figures and description herein; however, typically fabrication of a plurality of like pixel cells proceeds simultaneously. Accordingly, the following detailed description is not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.[0034] The following description of the invention is provided within the exemplary environment of a CMOS pixel using a pinned photodiode as a photosensor; however, the invention is not limited to use in a CMOS imager or to use in a CMOS imager employing a pinned photodiode as a photosensor. Any type of photosensor may be used in the invention including photodiodes, photogates, and other photosensing devices.[0035] FIG. 1 shows an expanded view of a portion of a solid-state imager 20 according to one embodiment of the present invention. The solid-state imager 20 comprises a plurality of pixel cells 28 formed in and over a substrate 30 organized into an array of rows and columns. The substrate 30 is preferably a p+ substrate. A first p- epitaxial layer 31 is formed over tine p+ substrate 30. A n+ doped layer 33 is formed between the first p- epitaxial layer 31 and a second p- epitaxial layer 41. It should be noted that the substrate 30 may also be a p- substrate. In the case when a p- substrate is used, there is no need for the first p- epitaxial layer 31.[0036] The pixel array is covered by a protective layer 24 that acts as a passivation and planarization layer for the imager 20. Protective layer 24 may be a layer of BPSG, PSG, BSG7 silicon dioxide, silicon nitride, polyimide, or other well-known light transmissive insulator.[0037] A color filter layer 100 is formed over the passivation layer 24. The color filter layer 100 comprises an array of red, blue and green sensitive elements which may be arranged in a pattern understood by the person having ordinary skill in the art as exemplified by U.S. Patent Nos. 
6,783,900 and 3,971,065, which are herein incorporated by reference.[0038] As also depicted in the figures, a micro-lens 70 is formed above each pixel cell. Each micro-lens 70 is formed such that its focal point is centered over the photosensitive elements in the corresponding pixel cell. A spacer layer 25 is also formed under the mircolens 70 and under the color filter layer 100. The thickness of spacer layer 25 is adjusted such that the photosensitive element is at a focal point for the light traveling through lenses 70. [0039] As shown in FIG. 1, p- epitaxial layer 31 is formed over a p+ substrate 30 of the pixel cell array. An n+ region 33 is formed in the p- epitaxial layer 31. In FIG. 1, the n+ region 33 is shown as being formed under the entire pixel cell array. When the n+ region 33 is formed under the isolation regions 64 (FIG. 3) there is a better ground in the array and less reduction in red quantum efficiency. FIG. 3 shows the n+ region formed under the isolation regions 64. As will be understood, when the n+ region 33 is formed under the isolation regions 64 throughout the pixel sensor array, the n+ regions33 will form a grid throughout the pixel array. Forming the n+ region 33 under the entire pixel cell array (FIG. 1) provides the advantages of lower cross-talk and allows for easier processing. In both FIGS. 1 and 3, the n+ region 33 is patterned and does not extend significantly outside of the pixel array.[0040] The n+ region 33 may be biased positive in operation. The n+ region 33 is preferably biased in operation at a positive voltage between 0.5V and Vdd. When the n+ region 33 is biased positive, dark current electrons formed in the substrate below the n+ region 33 are collected in the n+ region 33 and swept away prior to reaching the photosensor 34. Electrons generated from photons between photosensors 34 or those generated deep in the substrate and most prone to aggravate cross-talk are also collected in n+ region 33 and swept away, thereby reducing cross-talk. Electrons from pixel blooming will also be collected in n+ region 33.[0041] A patterned n+ region 33, either continuous in the array as illustrated in FIG. 1 or between pixels as illustrated in FIG. 3, provides the benefits as discussed above (i.e., reduced cross-talk, blooming and dark current) without adding unwanted substrate resistance or parasitic coupling in the periphery circuits/logic.[0042] As shown in FIGS. 1-3, each pixel sensor cell contains a photosensor 34, which may be a photodiode, photogate, or the like. A pinned photodiode photosensor34 is depicted in FIGS. 1-3. When incident radiation 101 in the form of photons passes <-> color filter layer 100 and strikes the photosensor 34, the photo-generated electrons accumulate in the doped region 36. A transfer transistor 42 is located next to the photosensor 34, and has source and drain regions 36, 40 and a gate stack controlled by a transfer control signal TX. The drain region 40 is also called a floating diffusion region, and it stores charge received from the photosensor 34. The charges are applied to the gate of a source follower transistor 44 and converted to an output signal to row select transistor 46 which is then output to readout circuitry 48 and to an array column line. A reset transistor 50 comprised of doped regions 40, 52 and gate stack 54 is controlled by a reset control signal RST which operates to reset the floating diffusion region 40 to a predetermined initial voltage just prior to signal readout. 
Details of the formation and function of the above-described elements of a pixel sensor cell 28 may be found, for example, in U.S. Pat. Nos. 6,376,868 and 6,333,205, the disclosures of which are incorporated by reference herein.[0043] As illustrated in FIGS. 1 and 3, the gate stacks 42, 54 for the transfer 42 and reset 54 transistors include a silicon dioxide or silicon nitride gate dielectric 56 over the p- epitaxial layer 41. A conductive layer 58 of doped polysilicon, tungsten, or other suitable material is formed over the insulating layer 56, and is covered by an insulating cap layer 60 of, for example, silicon dioxide, silicon nitride, or ONO (oxide-nitride- oxide). A silicide layer 59 may be used between the poly silicon layer 58 and the cap 60, if desired. Insulating sidewalls 62 are also formed on the sides of the gate stacks 42, 54. These sidewalls 62 may be formed of, for example, silicon dioxide, silicon nitride, or ONO. A field oxide isolation layer 64 around the pixel sensor cell 28 serves to isolate it from other pixel cells in the array. P-well or p-type implant regions 65 provide additional isolation between pixel cells in the array. Transfer transistor 42 is optional, in which case the diffusion regions 36 and 40 are connected together. [0044] The imager device 20 described above with reference to FIGS. 1-3 is manufactured through a process described as follows, and illustrated in FIGS. 4-9. Referring now to FIG. 4, a substrate 30, which may be any of the types of substrates described above is shown. The substrate 30 is preferably a p+ substrate. It should be understood that the substrate 30 could also be formed of a p- material. If the substrate 30 is formed of a p- material, then in the process according to the present invention the p- epitaxial layer 31 discussed below can be omitted.[0045] Reference is now made to FIG. 5 which shows the device according to FIG. 4 at a further stage of processing. Where the substrate 30 is a p+ material, a p- epitaxial layer 31 is grown over substrate 30. The p- epitaxial layer 31 is made conductive by adding an impurity element, such as, for example, boron which has one less valence electron than the semiconductive material, to form a p- type material. The p- epitaxial layer 31 can be formed from standard materials, such as, for example, silicon tetrachloride or silane. Preferably the p- exitaxial layer 31 is formed from silane.[0046] The p- epitaxial layer 31 is grown to form a transition between the p+ substrate 30 and the p- epitaxial layer 31. The p- epitaxial layer 31 may be grown with any method for growing single- crystal silicon. The thickness of the p- epitaxial layer 31 is from about 0.05 [mu]m to about 5.0 [mu]m, preferably from about 0.5 [mu]m to about 1.5 [mu]m.[0047] Reference is now made to FIG. 6 which shows the device according to FIG. 5 at a further stage of processing. An oxide layer 35 is deposited over the p- epitaxial layer 31. The oxide layer 35 is formed over the p- epitaxial layer 31 by conventional methods such as, for example, chemical vapor deposition or thermal oxidation. A preferred method to form oxide layer 35 is thermal oxidation by exposing the surface of the p- epitaxial layer 31 in an oxygen atmosphere at an elevated temperature. The oxide layer 35 preferably has a thickness of about 20 angstroms to about 500 angstroms.[0048] Reference is now made to FIG. 7 which shows the substrate according to FIG. 6 at a further stage of processing. 
The oxide layer 35 is patterned with photoresist layer 37 and etched to form opening 39. The portion of the oxide layer 35 which is removed to form opening 39 is removed by conventional photoresist patterning and etching of the oxide layer 35. It should be noted that the oxide layer 35 under the photoresist layer 37 is the preferred approach to prevent photoresist contamination of the wafer. The oxide layer 35 may be formed from any suitable material, such as nitride or ONO. Li addition, with proper cleaning techniques, the photoresist layer 37 could be applied directly to the p- epitaxial layer 31, without the oxide layer 35.[0049] Reference is now made to FIG. 8 which shows the substrate according to FIG. 7 at a further stage of processing. N+ doped region 33 is formed in p- epitaxial layer 31. The n+ doped region 33 is formed by implanting a dopant into p- epitaxial layer 31. N+ doped region 33 is doped with a dopant implant by conventional methods, preferably by ion implantation. The dopants are implanted into n+ doped region 33 at a dopant concentration of from about 1 x 10<10> ions/cm<2> to about 1 x 10<18> ions/cm<2>, preferably at a dopant concentration of from about 1 x 10<13> ions/cm<2> to about 1 x 10<15> ions/cm<2>. N+ doped region 33 may be doped with any suitable dopant containing materials, for example, materials containing one or more of phosphorous or arsenic. In a preferred embodiment, the dopant is arsenic. The n+ doped region 33 is preferably doped with the dopant by ion implantation at a power of from about 15KeV to about 50 MeV. It should be understood that the dopant concentration and power will vary depending upon a variety of physical parameters such as, for example, the material being implanted, the processing stage of the semiconductor substrate, the amount of material to be removed and other factors. Depending on the alignment tolerances, it may be necessary to pattern and etch a notch or mark in the backside of the substrate 30 at the time of the n+ implant so as to align the n+ region 33 with the pixel array of the imager for later processing and alignment.[0050] According to the present invention, it is possible to connect the n+ doped region 33 with an n-well region in an imager device. The n-well, while not disclosed in the figures, is known in the imager devices discussed above and incorporated by reference. The incorporation of an n-well in imaging devices described herein are known to the person having ordinary skill in the art. For example, it may be necessary to connect the n+ doped region 33 with the n-well to make adequate top-side contact between the imager device and the n+ doped region.[0051] Reference is now made to FIG. 9 which shows the substrate according to FIG. 8 at a further stage of processing. The photoresist 37 and oxide layer 35 are stripped off by conventional methods. A second p- epitaxial layer 41 is grown over p-epitaxial layer 31. The p- epitaxial layer 41 may be grown with any method for growing single- crystal silicon. The thickness of the p- epitaxial layer 41 is from about 0. 5 [mu]m to about 20.0 [mu]m, preferably from about 2.5 [mu]m to about 4.0 [mu]m. The p- epitaxial layer 41 is doped with a concentration of from about 1 x 10<10> ions/cm<2> to about 1 x 10<20> ions/cm<2>, preferably at a dopant concentration of from about 1 x 10<14> ions/cm<2> to about 1 x 10<15> ions/cm<2>. 
P- epitaxial layer 41 may be doped with any suitable dopant containing materials, for example, materials containing boron.[0052] From the resultant structure illustrated in FIG. 9, an image device is formed by standard imager processing. An exemplary imager is illustrated in FIGS. 1-3. Exemplary CMOS imaging circuits, processing steps thereof, and detailed descriptions of the functions of various CMOS elements of an imaging circuit are described, for example, in U.S. Patent No. 6,140,630, U.S. Patent No. 6,376,868, U.S. Patent No. 6,310,366, U.S. Patent No. 6,326,652, U.S. Patent No. 6,204,524, and U.S. Patent No. 6,333,205, each of which is assigned to Micron Technology, Inc.[0053] While the processes have been described with reference to a CMOS imager device, it should be understood that the process may be also used with pixel cells of other types of imagers as well, for example, with a CCD imager. Accordingly, the pixel cell formed as described above may be employed in CCD image sensors as well as CMOS image sensors.[0054] The n+ doped layer 33 reduces cross-talk, blooming and dark current by collecting excess electrons in the imaging device. As discussed below, the n+ doped layer 33 may be biased positive to aid in electron collection within the imaging device. The biasing of the region can be accomplished by well known techniques for biasing a region.[0055] FIG. 10 illustrates an exemplary imager 200 that may utilize any embodiment of the invention. The imager 200 has a pixel array 205 comprising pixel cells constructed as described above with respect to FIGS. 1-9. Row lines are selectively activated by a row driver 210 in response to row address decoder 220. A column driver 260 and column address decoder 270 are also included in the imager 200. The imager 200 is operated by the timing and control circuit 250, which controls the address decoders 220, 270. The control circuit 250 also controls the row and column driver circuitry 210, 260. [0056] A sample and hold (S/H) circuit 261 associated with the column driver 260 reads a pixel reset signal Vrst and a pixel image signal Vsig for selected pixel cells. A differential signal (Vrst-Vsig) is amplified by differential amplifier (AMP) 262 for each pixel and is digitized by analog-to-digital converter 275 (ADC). The analog-to- digital converter 275 supplies the digitized pixel signals to an image processor 280, which forms a digital image.[0057] If desired, the imager 200 may be combined with a processor, such as a CPU, digital signal processor or microprocessor. The imager 200 and the microprocessor may be formed in a single integrated circuit. An exemplary processor system 300 using a CMOS imager having a n+ region in accordance with the present invention is illustrated in FIG. 11. A processor based system is exemplary of a system having digital circuits which could include CMOS or other imager devices. Without being limiting, such a system could include a computer system, camera system, scanner, machine vision system, vehicle navigation system, video telephone, surveillance system, auto focus system, star tracker system, motion detection system, image stabilization system and other image processing systems.[0058] As shown in FIG. 11, an exemplary processor system 300, for example, a camera generally comprises a central processing unit (CPU) 344, e.g., a microprocessor, that communicates with an input/output (I/O) device 346 over a bus 352. The imager 200 also communicates with the system over bus 352. 
The computer system 300 also includes random access memory (RAM) 348, and may include peripheral devices such as a floppy disk drive 454, a compact disk (CD) ROM drive 356 or a removable memory or a flash memory 358 which also communicate with CPU 344 over the bus 352. The floppy disk 454, the CD ROM 356 or flash memory 358 stores images captured by imager 200. The imager 200 is preferably constructed as an integrated circuit as previously described with respect to FIGS. 1-9. [0059] While the invention has been described in detail in connection with exemplary embodiments known at the time, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims. |
A resistive memory cell (100) includes a ring-shaped bottom electrode (102), a top electrode (108), and an electrolyte layer (106) arranged between the bottom and top electrodes. A ring-shaped bottom electrode is formed by forming a dielectric layer (104) over a bottom electrode contact, etching a via in the dielectric layer to expose at least a portion of the bottom electrode contact, depositing a conductive via liner over the dielectric layer and into the via, the via liner deposited in the via forming a ring-shaped structure in the via and a contact portion in contact with the exposed bottom electrode contact, the ring-shaped structure defining a radially inward cavity of the ring-shaped structure, and filling the cavity with a dielectric fill material, such that the ring-shaped structure of the via liner forms the ring-shaped bottom electrode, depositing an electrolyte layer over the bottom electrode, and depositing a top electrode over the electrolyte layer. |
THE CLAIMS1. A resistive memory cell, comprising:a ring-shaped bottom electrode,a top electrode, andan electrolyte layer arranged between the bottom and top electrodes.2. The cell according to claim 1 , comprising, in a plane extending through the ring-shaped bottom electrode, a dielectric material arranged within a circumference defined by the ring-shaped bottom electrode.3. The cell according to claim 2, wherein the dielectric material comprises an oxide, e.g., Si02.4. The cell according to claim 1, wherein:the ring-shaped bottom electrode is formed in a substrate, anda thickness of the ring-shaped bottom electrode in a direction extending in a plane of the substrate is less than three times a thickness of the electrolyte layer in a direction perpendicular to the plane of the substrate.5. The cell according to claim 4, wherein the thickness of the ring-shaped bottom electrode is less than two times the thickness of the electrolyte layer.6. The cell according to claim 4, wherein the thickness of the ring-shaped bottom electrode is less than the thickness of the electrolyte layer.7. The cell according to claim 4, wherein the thickness of the ring-shaped bottom electrode is less than one half the thickness of the electrolyte layer.8. The cell according to claim 1, wherein the bottom electrode is formed fromTiN.9. The cell according to claim 1 , wherein the top electrode is formed from copper.10. A method for forming a resistive memory cell, comprising:forming a ring-shaped bottom electrode by a process including:forming a dielectric layer over a bottom electrode contact,etching a via in the dielectric layer to expose at least a portion of the bottom electrode contact,depositing a conductive via liner over the dielectric layer and into the via, the via liner deposited in the via forming a ring-shaped structure in the via and a contact portion in contact with the exposed bottom electrode contact, the ring-shaped structure defining a cavity radially inward of the ring-shaped structure, andfilling the cavity radially inward of the ring-shaped structure with a dielectric fill material,such that the ring-shaped structure of the via liner forms the ring-shaped bottom electrode,depositing an electrolyte layer over the bottom electrode, anddepositing a top electrode over the electrolyte layer.1 1. The method according to claim 10, wherein the process of forming the bottom electrode further includes removing upper portions of the dielectric fill material and via liner before depositing the electrolyte layer over the bottom electrode.12. The method according to claim 1 1 , wherein the upper portions of the dielectric fill material and via liner are removed by a chemical mechanical polishing or planarization process.13. The method according to claim 10, further comprising depositing a top electrode contact over the top electrode.14. The method according to claim 10, wherein the ring-shaped bottom electrode formed from the via liner comprises TiN.15. The method according to claim 10, wherein the top electrode is formed from copper.16. The method according to claim 10, wherein the dielectric fill material comprises an oxide, e.g., Si02.17. The method according to claim 10, wherein a thickness of the ring-shaped bottom electrode in a direction extending in a plane of the electrolyte layer is less than three times a thickness of the electrolyte layer.18. 
The method according to claim 17, wherein the thickness of the ring-shaped bottom electrode is less than two times the thickness of the electrolyte layer.19. The method according to claim 17, wherein the thickness of the ring-shaped bottom electrode is less than the thickness of the electrolyte layer.20. The method according to claim 17, wherein the thickness of the ring-shaped bottom electrode is less than one half the thickness of the electrolyte layer. |
RESISTIVE MEMORY CELL WITH REDUCED BOTTOM ELECTRODECROSS-REFERENCE TO RELATED APPLICATIONSThis application claims the benefit of U.S. Provisional Application No. 61/780,317 filed on March 13, 2013, which is incorporated herein in its entirety.TECHNICAL FIELDThe present disclosure relates to resistive memory cells, e.g., conductive bridging random access memory (CBRAM) or resistive random-access memory (ReRAM) cells, having an asymmetrical structure (e.g., including a ring-shaped bottom electrode) providing a reduced area for the formation of conductive paths (e.g., conductive filaments or vacancy chains).BACKGROUNDResistive memory cells, such as conductive bridging memory (CBRAM) and resistive RAM (ReRAM) cells are a new type of non-volatile memory cells that provide scaling and cost advantages over conventional Flash memory cells. A CBRAM is based on the physical re-location of ions within a solid electrolyte. A CBRAM memory call can be made of two solid metal electrodes, one relatively inert (e.g., tungsten) the other electrochemically active (e.g., silver or copper), with a thin film of the electrolyte between them. The fundamental idea of a CBRAM cell is to create programmable conducting filaments, formed by either single or very few nanometer-scale ions across a normally non-conducting film through the application of a bias voltage across the non-conducting film. The non-conducting film is referred to as the electrolyte since it creates the filament through an oxidation/reduction process much like in a battery. In a ReRAM cell the conduction is through creation of a vacancy chain in an insulator. The creation of the filament/vacancy-chain creates an on-state (high conduction between the electrodes), while the dissolution of the filament/vacancy-chain is by applying a similar polarity with Joule heating current or an opposite polarity but at smaller currents to revert the electrolyte/insulator back to its nonconductive off-state.A wide range of materials have been demonstrated for possible use in resistive memory cells, both for the electrolyte and the electrodes. One example is the Cu/SiOx based cell in which the Cu is the active metal-source electrode and the SiOx is the electrolyte. One common problem facing resistive memory cells is the on-state retention, i.e., the ability of the conductive path (filament or vacancy chain) to be stable, especially at the elevated temperatures that the memory parts would typically be qualified to (85C/125C).FIGURE 1 shows a conventional CBRAM cell 1A, having a top electrode 10 (e.g., copper) arranged over a bottom electrode 12 (e.g., tungsten), with the electrolyte or middle electrode 14 (e.g., Si02) arranged between the top and bottom electrodes. Conductive filaments 18 propagate from the bottom electrode 12 to the top electrode 10 through the electrolyte 14 when a bias voltage is applied to the cell 1A. This structure has various potential limitations or drawbacks. For example, the effective cross-sectional area for filament formation, referred to herein as the effective filament formation area indicated as AFF, or alternatively the "confinement zone," is relatively large and unconfined, making the filament formation area susceptible to extrinsic defects. Also, multi-filament root formation may be likely, due to a relatively large area, which may lead to weaker (less robust) filaments. 
In general, the larger the ratio between the diameter or width of the effective filament formation area AFF (indicated by "x") to the filament propagation distance from the bottom electrode 12 to the top electrode 10 (in this case, the thickness of the electrolyte 14, indicated by "y"), the greater the chance of multi-root filament formation. Further, a large electrolyte volume surrounds the filament, which provides diffusion paths for the filament and thus may provide poor retention. Thus, restricting the volume of the electrolyte material in which the conductive path forms may provide a more robust filament due to spatial confinement. The volume of the electrolyte material in which the conductive path forms may be restricted by reducing the area in contact between the bottom electrode 12 and the electrolyte 14.As used herein, "conductive path" refers a conductive filament (e.g., in a CBRAM cell), vacancy chain (e.g., in an oxygen vacancy based ReRAM cell), or any other type of conductive path for connecting the bottom and top electrodes of a non-volatile memory cell (typically through an electrolyte layer or region arranged between the bottom and top electrodes). As used herein the "electrolyte layer" or "electrolyte region" refers to an electrolyte/insulator/memory layer or region between the bottom and top electrodes through which the conductive path propagates. FIGURE 2 shows certain principles of a CBRAM cell formation. Conductive paths 18 may form and grow laterally, or branch into multiple parallel paths. Further, locations of the conductive paths may change with each program/erase cycle. This may contribute to a marginal switching performance, variability, high-temp retention issues, and/or switching endurance. Restricting switching volume has been shown to benefit the operation. These principles apply to ReRAM and CBRAM cells. A key obstacle for adoption of these technologies is switching uniformity.SUMMARYAccording to various embodiments, a non-volatile memory cell structure, and associated manufacturing process, provides a reduced area of contact between the bottom electrode and the electrolyte layer, thus restricting the area in which a conductive path can form, i.e. , the "confinement zone," and thereby create thicker, single conductive path root memory cells (e.g., CBRAM cells and ReRAM cells) having improved switching performance, retention performance, and/or reliability. 
For example, the confinement zone may be defined by a narrow ring having a width of less than ΙΟθΑ.In one embodiment, a resistive memory cell includes a ring-shaped bottom electrode, a top electrode, and an electrolyte layer arranged between the bottom and top electrodes.In another embodiment, a method for forming a resistive memory cell comprises forming a ring-shaped bottom electrode by a process including: forming a dielectric layer over a bottom electrode contact, etching a via in the dielectric layer to expose at least a portion of the bottom electrode contact, depositing a conductive via liner over the dielectric layer and into the via, the via liner deposited in the via forming a ring-shaped structure in the via and a contact portion in contact with the exposed bottom electrode contact, the ring- shaped structure defining a radially inward cavity of the ring-shaped structure, and filling the cavity with a dielectric fill material, such that the ring-shaped structure of the via liner forms the ring-shaped bottom electrode, depositing an electrolyte layer over the bottom electrode, and depositing a top electrode over the electrolyte layer.BRIEF DESCRIPTION OF THE FIGURESExample embodiments are discussed below with reference to the drawings, in which: FIGURE 1 shows an example conventional CBRAM cell; FIGURE 2 shows certain principles of CBRAM cell formation;FIGURE 3 shows a cross-section of an example resistive memory cell structure (e.g., a CBRAM or ReRAM cell) having a ring-shaped bottom electrode, according to an example embodiment;FIGURES 4A-4B2 illustrate aspects of a conventional continuous bottom electrode structure;FIGURES 5A-5B2 illustrate aspects of a ring-shaped bottom electrode structure according to an example embodiment of the present invention, to show one advantage of the ring-shaped bottom electrode structure as compared to a conventional continuous bottom electrode structure;FIGURES 6A-6D illustrate an example process for creating a memory cell structure having a ring-shaped bottom electrode, according to one embodiment; andFIGURE 7 illustrates an example resistive memory cell structure having a ring-shaped bottom electrode, according to an example embodiment.DETAILED DESCRIPTIONFIGURE 3 illustrates a cross-section of an example structure 100 for a resistive memory cell (e.g., a CBRAM or ReRAM cell) having a ring-shaped bottom electrode 102 formed in an interlayer dielectric layer 104, an electrolyte layer 106 and top electrode 108 formed over the bottom electrode 102 such that the electrolyte layer 106 is arranged between the bottom electrode 102 and top electrode 108, and bit line(s) 1 10 connected to the top electrode 108.Each of the various component regions of structure 100 may be formed from any suitable material and formed in any suitable manner. For example, ring-shaped bottom electrode 102 may be formed from TiN or any other suitable bottom electrode material; top electrode 108 may be formed from Cu, e.g., a very thin Cu layer (e.g., 10-30nm/5-15nm) formed by PVD, or any other suitable top electrode material; electrolyte layer 106 may be formed from a thin layer (e.g., 3θΑ-15θΑ) of high quality Si02 or SiO or any other suitable electrolyte material; and bit line(s) 110 may be formed from TaN or any other suitable bit line material. An example filament, e.g., metal bridge, propagated from the ring-shaped bottom electrode 102 to the top electrode 108 through the electrolyte layer 106 is indicated at 120. 
The ring-shaped bottom electrode 102 provides a substantially reduced contact area between the bottom electrode 102 and overlying electrolyte layer 104 as compared with a solid bottom electrode structure, thus providing a reduced confinement zone. In this example, the ring- shaped bottom electrode 102 has a thickness (x) of less than Ι ΟθΑ. Providing a bottom electrode thickness (x) less than a thickness (y) of the electrolyte layer (i.e., x/y < 1) may provide a particularly reduced chance of multiple conductive path formation.FIGURES 4A-4B2 and FIGURES 5A-5B2 illustrate aspects of a conventional continuous bottom electrode structure (FIGURES 4A-4B2) and a ring-shaped bottom electrode structure according to an embodiment of the present invention (FIGURES 5A-5B2), to show one advantage of the ring-shaped bottom electrode structure. In particular, FIGURE 4A shows a cross-section of a filament formation, and FIGURES 4B1 and 4B2 show a top view of the formation of multiple filaments, for a conventional cell structure having a continuous bottom electrode 102', a top electrode 108', and a electrolyte layer 106' between the continuous bottom electrode 102' and the top electrode 108'. Likewise, FIGURE 5A shows a cross-section of a filament formation, and FIGURES 5B 1 and 5B2 show a top view of the formation progression of a single filament, for a cell structure according to an embodiment of the present invention having a ring-shaped bottom electrode 102, a top electrode 108, and a electrolyte layer 106 between the ring-shaped bottom electrode 102 and the top electrode 108.During SET (filament formation), a decreased number, and an increased thickness, of filament roots is preferred. In the conventional structure shown in FIGURES 4A-4B2, the volume of the electrolyte 106' in which filaments 120 may form has a relatively large horizontal/vertical length ratio (e.g., x/y > 5). In contrast, in the ring-shaped bottom electrode structure 100 disclosed herein, the volume of the electrolyte 106 in which filament(s) 120 may form has a relatively small horizontal/vertical length ratio (e.g., x/y < 1). As shown, the ring-shaped bottom electrode structure disclosed herein may provide fewer, but thicker, filament roots, thus providing an advantage over the conventional structure.FIGURES 6A-6D illustrate an example process for creating a memory cell structure 100 having a ring-shaped bottom electrode 102, according to one embodiment. As shown in FIGURE 6A, a via 150 is etched through a dielectric 152 (e.g., SiN) down to a bottom electrode contact 154 (e.g., Cu). The via 150 may have any suitable cross-sectional shape, e.g., circular, oval, elliptical, rectangular, square, etc. The bottom electrode contact 154 may be connected to a circuit or electronic components (e.g., a transistor or other controlling device) via a conductive path 156, which may be formed and connected to bottom electrode contact 154 in any suitable manner, e.g., from below as shown, or from above in any known manner. The bottom electrode contact 154 and/or conductive path 156 may be formed in an inter-layer dielectric 158 (e.g., Si02).As shown in FIGURE 6B, a via liner 160 (e.g., TiN) is then deposited and a dielectric fill is performed to fill the remaining via opening with a dielectric 162, in this example an oxide (e.g., Si02). 
As shown in FIGURE 6C, a Chemical Mechanical Planarization or Polishing (CMP) process is performed to remove the top portions of the oxide 162 and liner 160, thus leaving an oxide-filled ring-shaped liner region 160A (i.e., ring-shaped in a cross- section perpendicular to the page) that will become the bottom electrode 102. As shown in FIGURE 6D, an electrolyte layer 170 (e.g., SiOx/CuSixOy), a top electrode 172 (e.g., PVD Cu), and a top electrode contact 174 (e.g., TaN) are then deposited or formed over the stack. The electrolyte layer 170, top electrode 172, and top electrode contact 174 may then be etched or otherwise processed to produce a desired cell shape.FIGURE 7 illustrates an example resistive memory cell structure 200, according to an example embodiment. As shown, memory cell structure 200 may include a ring-shaped bottom electrode 202 formed in an interlayer dielectric layer 204, an electrolyte layer 206, and top electrode 208 formed over the bottom electrode 202 such that the electrolyte layer 206 is arranged between the bottom electrode 202 and top electrode 208, and bit line(s) 210 connected to the top electrode 208. A bottom electrode contact 212 is connected to a bottom region of the ring-shaped bottom electrode 202. Further, nitride spacers 214 may be formed over sidewalls of bit lines 210, top electrode 208, and electrolyte layer 206, e.g., by a nitride deposit and etch process. A conductive filament 220 is also shown for reference.In some embodiments the resistive memory cell structure 200 can be formed using two masks. First, a via (or trench) open mask is used, into which a thin TiN layer is deposited, followed by a PECVD oxide fill and CMP process. This forms the bottom electrode 202. Following this, the electrolyte layer 206 (e.g. a thin SiOx layer) is deposited, followed by the top electrode 208 (e.g. Cu/TaN/W), and this stack is then etched with a second mask. Normally a thick Cu film cannot be etched in a plasma, hence a thin (50-300A) PVD Cu layer may be formed, which can be plama-etched with this second mask.As discussed above, the disclosed concepts apply both the metallic filament type CBRAM cells and the vacancy type ReRAM cells. In the disclosed asymmetrical structure, one of electrodes in contact with the electrolyte/insulator is the source of these metallic ions/vacancies, while the other is typically inert.Various embodiments may provide one or more advantages relative to conventional cell structures and/or formation techniques. For example, the asymmetric structure (e.g., incorporating a ring-shaped bottom electrode) may improve the functionality and reliability of Cu/SiOx based cells by reducing the bottom electrode area in contact with the electrolyte. Thus, the volume in which the number of roots of metallic filaments/vacancy-chain roots can form is greatly reduced over the conventional structures. This may provide various advantages. For example, the asymmetrical structure may provide improve switching characteristics and reliability because there is a far greater likelihood of creating a single, thick filament/vacancy-chain that is more stable for retention purposes. As another example, because the bottom electrode area is reduced, a much higher current density can be achieved for the same current flow. This may allow for a uni-polar operation in switching, i.e., both the set (filament formation) and reset (filament dissolution by joule-heating) can be done at the same voltage polarity. 
This has been demonstrated on the Cu/SiOx cells, but has needed a much higher current level under reset, the mechanism for dissolution being based on Joule heating rather than an electrolytic reduction of the metallic filament. |
Various approaches are disclosed for protecting vehicle buses from cyber-attacks. Disclosed approaches provide for an embedded system having a hypervisor that provides a virtualized environment supporting any number of guest OSes. The virtualized environment may include a security engine on an internal communication channel between the guest OS and an external vehicle bus of a vehicle to analyze network traffic to protect the guest OS from other guest OSes or other network components, and to protect those network components from the guest OS. Each guest OS may have its own security engine customized for the guest OS to account for what is typical or expected traffic for the guest OS (e.g., using machine learning, anomaly detection, etc.). Also disclosed are approaches for corrupting a message being transmitted on a vehicle bus to prevent devices from acting on the message |
CLAIMSWhat is claimed is:1. An embedded system comprising: a hypervisor that supports virtualized components with isolated execution environments on partitions of a virtualized environment; a communication channel of the virtualized environment to a network interface that provides a guest operating system (OS) on a first of the partitions with connectivity to at least one electronic control unit (ECU) on an external vehicle bus of a vehicle; and a security manager on a second of the partitions comprising a security engine on the communication channel between the guest OS and the network interface that monitors communications over the communication channel for potential threats and determines a security response upon determining a security event from the communications.2. The embedded system of claim 1, wherein the security response is performed on a communication being sent by the guest OS to the network interface.3. The embedded system of claim 1, wherein the security response includes one or more of notifying the guest OS of the security event, logging the security event, or initiating a safe mode of operation.4. The embedded system of claim 1, wherein the security response includes transmitting data to cause a message blocking circuit to corrupt a message being communicated on the external vehicle bus.5. The embedded system of claim 1, wherein the network interface is a physical interface to the external vehicle bus and the virtualized environment further comprises a communications manager on a third of the partitions that includes a virtualized hardware interface to the network interface.6. The embedded system of claim 1, wherein the external vehicle bus is a Controller Area Network (CAN) bus.7. The embedded system of claim 1, wherein the security manager is positioned above the guest OS in a chain of trust of a certificate chain.8. The embedded system of claim 1, wherein the determining of the security event from the communications is based on performing machine learning on or more of historical traffic frequency or historical traffic patterns over the communication channel.9. An embedded system comprising: a hypervisor that supports virtualized components with isolated execution environments on partitions of a virtualized environment; a first communication channel of the virtualized environment that provides a first guest operating system (OS) on a first of the partitions with connectivity to an internal vehicle bus of the virtualized environment; a second communication channel of the virtualized environment that provides a second guest operating system (OS) on a second of the partitions with connectivity to the internal vehicle bus of the virtualized environment; and a security manager on a third of the partitions comprising a security engine that monitors communications between the first guest OS and the second guest OS over the first communication channel for potential threats and determines a security response upon determining a security event from the communications.10. The embedded system of claim 9, wherein the internal vehicle bus is a Controller Area Network (CAN) bus.11. The embedded system of claim 9, wherein the second communication channel includes a different security services engine that monitors the communications between the first guest OS and the second guest OS over the internal vehicle bus for potential threats and determines a different security response upon determining a security event from the communications.12. 
The embedded system of claim 9, wherein the determining of the security event from the communications is based on performing machine learning on or more of historical traffic frequency or historical traffic patterns between the first guest OS and the second guest OS over the internal vehicle bus.13. The embedded system of claim 9, wherein the internal vehicle bus provides the first guest OS with connectivity to an electronic control unit (ECU) on an external vehicle bus of a vehicle over the first communication channel, and provides the second guest OS with connectivity to the ECU on the external vehicle bus of the vehicle over the second communication channel.14. A hardware device comprising: a first register configured to store a message identifier (ID) of a Controller Area Network (CAN) message, the message ID to be received from a CAN bus during transmission of the CAN message; a second register configured to store a reference message ID; at least one logic gate coupled to the first register and the second register and configured to generate an output signal indicative of a result of a comparison between the message ID of the CAN message and the reference message ID; and an interference circuit configured to, responsive to the output signal, perform corruption of the CAN message being transmitted on the CAN bus.15. The hardware device of claim 14, wherein the corruption comprises raising arbitration on the CAN bus, and based on the arbitration, writing an erroneous value to a Cyclic Redundancy Check (CRC) field of the CAN message on the CAN bus.16. The hardware device of claim 14, wherein the corruption is performed based on the result of the comparison that the message ID matches the reference message ID.17. The hardware device of claim 14, wherein the comparison is between the message ID and each of a plurality of reference message IDs.18. The hardware device of claim 14, wherein the first register and the interference circuit are coupled to the CAN bus in a vehicle.19. The hardware device of claim 14, wherein the interference circuit includes a programmable window of time over which the corruption is performed20. The hardware device of claim 14, wherein the corruption is performed over a window of time that is time synced to a CAN controller data frame. |
PROTECTING VEHICLE BUSES FROM CYBER-ATTACKSBACKGROUNDThe number of networked devices in modern vehicles has led to a high level of interaction between the vehicles and external entities via a variety of interconnection interfaces, examples of which include Near-Field Communication (NFC), Vehicle-to-everything (V2X), Cellular, Wireless Fidelity (Wi-Fi), Ethernet, Universal Serial Bus (USB), and Bluetooth (BT). This has resulted in exposure to a broad range of cyber-attacks. Vulnerabilities in an interconnection interface may enable a malicious party to send unauthorized messages over a vehicle buss of a vehicle to interfere with control and safety features of the vehicle. For example, an electronic control unit (ECU) having a single Operating System (OS) may be on the network to power In-Vehicle Infotainment (IVI) systems, Adaptive Driver Assistance Systems (ADAS), dashboards, and head units of the vehicle. A malicious party may use a cyber-attack against the OS to interfere with components on the vehicle bus. Vehicle bus protocols - such as Controller Area Network (CAN), Local Interconnect Network (LIN), FlexRay, and Ethernet Audio Video Bridging (eAVB) - may be limited in their security features as they are designed to facilitate communications between trusted devices within a vehicle.To protect against cyber-attacks, conventional systems have used Secure Onboard Communication (SecOC), which requires a device - such as an ECU - sending information over a vehicle bus to use a secret key to authenticate its communications. Additionally, hardware transceivers of CAN interfaces have included acceptance filters that prevent a device from receiving a CAN message it its message identifier (ID) is not on a whitelist. Further approaches have separated ECUs on the vehicle bus into exclusive subnets using gateways. However, even with these approaches, a malicious party may take control of devices to bypass or disable security features, such as by sending malicious communications that pass authentication or blocking legitimate communications. Further, these approaches may not effectively protect against Denial-of-Service (DoS) attacks. Additionally, various vehicle bus protocols (e.g., CAN) use a broadcast transmit receive system in which all devices read each message broadcast on the vehicle bus. While one device may be secured from processing a malicious message, there are no known mechanisms to systematically ensure no other devices on the vehicle bus act upon the message. SUMMARYEmbodiments of the present disclosure relate to protecting vehicle buses from cyber attacks. More specifically, the present disclosure provides various inventive concepts that maybe used to implement an Intrusion Detection and Prevention System (IDPS) that is capable of protecting a vehicle bus of a vehicle by leveraging virtualization technologies.Disclosed approaches provide for an embedded system (e.g., an ECU) having a hypervisor that provides a virtualized environment supporting any number of guest OSes. Rather than only relying on SecOC, acceptance filtering, or ECU subnets to protect against cyber-attacks, the virtualized environment may include a security engine on an internal communication channel between a guest OS and a network interface (e.g., CAN interface) to an external vehicle bus of a vehicle. 
The security engine may monitor network traffic for potential threats and upon determining a security event, determine a security response such as actively blocking a message, notifying the guest OS, logging the incident, or initiating a safe mode of operation. The security engine may monitor traffic in both directions to protect the guest OS from other guest OSes or other network components, and to protect those network components from the guest OS. In embodiments, each guest OS may have its own security engine customized for the guest OS to account for what is typical or expected traffic for the guest OS (e.g., using machine learning on historical traffic data).In further respects, the hardware interface used by a guest OS may be virtualized and included on a separate partition than the security engine to further isolate the security engine from the hardware interface. Virtualized network interfaces may include a paravirtualized driver so that from the perspective of the guest OS, it is communicating directly with external components. The virtualized network interfaces may be configured so multiple guest OSes may share a physical network interface and may be implemented in a communications manager on a shared partition.The disclosure further provides approaches to enable systematically ensuring a CAN message is not acted upon by devices on a vehicle bus. A hardware device may analyze the message ID of a CAN message being transmitted on a CAN bus to determine an unwanted CAN message is being transmitted. As a result, an interference circuit may be used to corrupt a remaining portion of the CAN message to prevent devices from acting on the CAN message. The hardware device may be used by or integrated into the disclosed embedded system for a security response, or may be implemented in any suitable device or system. BRIEF DESCRIPTION OF THE DRAWINGSThe present systems and methods for protecting vehicle buses from cyber-attacks is described in detail below with reference to the attached drawing figures, wherein:FIG. 1 is a block diagram showing an example of an operating environment that includes an intrusion detection and protection system (IDPS), in accordance with some embodiments of the present disclosure;FIG. 2A is a block view of an example layered architecture that may be used to implement an IDPS, in accordance with some embodiments of the present disclosure;FIG. 2B is a block view of example layered architectures that may be used to implement an IDPS with a focus on network interface virtualization, in accordance with some embodiments of the present disclosure;FIG. 2C is a block view of an example layered architecture that may be used to implement an IDPS with a focus on a security manager and a communications manager of the IDPS, in accordance with some embodiments of the present disclosure;FIG. 2D is a diagram illustrating security engines, which may be used to implement an IDPS, in accordance with some embodiments of the present disclosure;FIG. 3 is a flow diagram showing a method an IDPS may use to process network communications for security threats, in accordance with some embodiments of the present disclosure;FIG. 4 is a diagram illustrating an example of a process of handling a cyber-attack that may occur over a period of time, in accordance with some embodiments of the present disclosure;FIG. 5A is a diagram illustrating examples of networking components that may be used to implement an IDPS, in accordance with some embodiments of the present disclosure;FIG. 
5B is a diagram illustrating an example of a NAT gateway of an IDPS, in accordance with some embodiments of the present disclosure;FIG. 5C is a diagram illustrating an example of components of a networking subsystem of an IDPS, in accordance with some embodiments of the present disclosure;FIG. 5D is a diagram illustrating an example of a process of determining Quality of Service (QoS) parameters, in accordance with some embodiments of the present disclosure;FIG. 6 is a flow diagram showing a method for adjusting network resources of communication channels, in accordance with some embodiments of the present disclosure; FIG. 7 is a diagram illustrating examples of networking components that may be used to implement an IDPS, in accordance with some embodiments of the present disclosure;FIG. 8 is a flow diagram showing a method an IDPS may use to analyze CAN messages, in accordance with some embodiments of the present disclosure;FIG. 9 is a diagram illustrating an example a message blocking circuit, in accordance with some embodiments of the present disclosure;FIG. 10 is a flow diagram showing a method for a message blocking circuit to block a CAN message on a CAN bus, in accordance with some embodiments of the present disclosure;FIG. 11 is a flow diagram showing a method for using a message blocking circuit to block a CAN message on a CAN bus, in accordance with some embodiments of the present disclosure;FIG. 12A is an illustration of an example autonomous vehicle, in accordance with some embodiments of the present disclosure;FIG. 12B is an example of camera locations and fields of view for the example autonomous vehicle of FIG. 12 A, in accordance with some embodiments of the present disclosure;FIG. 12C is a block diagram of an example system architecture for the example autonomous vehicle of FIG. 12 A, in accordance with some embodiments of the present disclosure;FIG. 12D is a system diagram for communication between cloud-based server(s) and the example autonomous vehicle of FIG. 12A, in accordance with some embodiments of the present disclosure; andFIG. 13 is a block diagram of an example computing device suitable for use in implementing some embodiments of the present disclosure.DETAILED DESCRIPTIONThe present disclosure relates to protecting vehicle buses from cyber-attacks. More specifically, the present disclosure provides various inventive concepts that maybe used to implement an Intrusion Detection and Prevention System (IDPS) that is capable of protecting a vehicle bus of a vehicle by leveraging virtualization technologies.Although the present disclosure may be described with respect to an example autonomous vehicle 1200 (alternatively referred to herein as“vehicle 1200” or“autonomous vehicle 1200,” an example of which is described herein with respect to FIGs. 12A-12D, this is not intended to be limiting. For example, the systems and methods described herein may be used by non-autonomous vehicles, semi-autonomous vehicles (e.g., in one or more advanced driver assistance systems (ADAS)), robots, warehouse vehicles, off-road vehicles, flying vessels, boats, and/or other vehicle types. In addition, although the present disclosure may be described with respect to autonomous driving, this is not intended to be limiting. 
For example, the systems and methods described herein may be used in robotics (e.g., path planning for a robot), aerial systems (e.g., path planning for a drone or other aerial vehicle), boating systems (e.g., path planning for a boat or other water vessel), and/or other technology areas, such as for intrusion detection and prevention in a computing system.Disclosed approaches provide for an embedded system (e.g., an ECU) having a hypervisor that provides a virtualized environment supporting any number of guest OSes. The embedded system may protect the guest OSes and various network interfaces thereof from internal or external breaches by leveraging virtualization technologies. Further, using disclosed approaches, the embedded system may also protect devices on an external vehicle bus. In some respects, a guest OS may use a communication channel to a network interface to access the devices on the external vehicle bus (e.g., a CAN bus). Rather than only relying on SecOC, acceptance filtering, or ECU subnets to protect against cyber-attacks, the virtualized environment may include a security engine on an internal communication channel between a guest OS and a network interface (e.g., CAN interface) to an external vehicle bus of a vehicle. The security engine may monitor network traffic for potential threats and upon determining a security event, determine a security response such as actively blocking a message, notifying the guest OS, logging the incident, or initiating a safe mode of operation. The security engine may monitor traffic in both directions to protect the guest OS from other guest OSes or other network components, and to protect those network components from the guest OS.Disclosed approaches allow for protection of any number of guest OSes. For example, where multiple guest OSes are employed, a security engine may be used to protect a guest OS from other guest OSes by monitoring communications between guest OSes. In such an example, because the security engine is on a different partition than the guest OSes, it remains secure if either of the guest OSes becomes compromised. In embodiments, each guest OS may have its own security engine customized for the guest OS to account for what is typical or expected traffic for the guest OS (e.g., using machine learning on historical traffic data, anomaly detection, ingress/egress filtering, etc.). In a non-limiting example, a security engine may have access to all communications to or from a guest OS over a vehicle bus network, both with components internal to the embedded system or external to the embedded system. When a message is sent from one guest OS to another, it may be monitored by the security engine of each OS to account for what is typical or expected traffic for each guest OS over the vehicle bus network. The security engines may be in a security manager on one or more partitions that are separate from a partition of the guest OSes.In some embodiments, multiple guest OSes of the embedded system may share a physical network interface to the external vehicle bus of the vehicle - such as a CAN bus - through a communications manager of the virtualized environment. For example, the communications manager may include an internal vehicle bus that the guest OSes may use to communicate to one another or to the physical network interface. The communications manager may be on a partition that is separate from the security manager to provide additional isolation for the security manager. 
Further, the communications manager may include paravirtualized drivers, such that from the perspective of each guest OS, the guest OS is communicating directly with hardware despite the existence of one or more intervening virtualized services.The present disclosure further provides for filtering a vehicle bus message (e.g., CAN message) from a vehicle bus that uses a broadcast transmit receive system, such that devices on the vehicle bus do not act upon the message. This may be used to enable an OS, a security engine, and/or the communications manager to detect a malicious communication over the vehicle bus and protect itself and other components on the vehicle bus - a process that is not possible using conventional systems. During the transmission of a vehicle bus message on the vehicle bus a hardware device may analyze the message ID of the vehicle bus message to determine an unwanted message is being transmitted (e.g., using a block list or allow list). As a result, an interference circuit may be used to corrupt a remaining portion of the message on the vehicle bus to prevent the message from being acted upon by other devices on the vehicle bus. In the example of a CAN message, the interference circuit may raise arbitration during transmission of the CAN message, thereby corrupting the Cyclic Redundancy Check (CRC) on the CAN message. This prevents the devices from successfully reading the payload of the CAN message (e.g., because the devices will no longer recognize the CAN message as being valid).The hardware device may use registers to determine if an unwanted message is being transmitted. One register may store a message ID of the message received from the vehicle bus. At least one other register may store one or more reference message IDs (e.g., functioning as a permitted identifier list or a blocked identifier list). At least one logic gate may use the contents of the registers to generate an output signal indicative of a result of a comparison between the message ID of the message and the reference message ID(s). The interference circuit may then perform corruption of the message responsive to the output signal. In non limiting examples, the hardware device may be incorporated into a CAN hardware controller of the communications manager of the IDPS. Further, the reference message ID(s) may be configurable from software of the IDPS and/or the software may be used to enable or disable CAN message corruption. These and other inventive aspects are disclosed within the present application.With reference to FIG. 1, FIG. l is a block diagram showing an example of an operating environment 100 that includes an intrusion detection and protection system (IDPS) 122, in accordance with some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. 
By way of example, the operating environment 100 may be implemented on one or more instances of the computing device 1300 of FIG. 13.The operating environment 100 may include, among other elements, one or more virtualized environments 102, one or more client devices 104A, 104B, and 104C, one or more sensors 106, one or more electronic control units (ECUs) 108, and one or more telematics control units (TCUs) 110. The virtualized environment 102 includes the IDPS 122 and one or more guest devices 120.The IDPS 122 may be provided to protect the operating environment 100 from cyber attacks, which may be perpetrated by exploiting networked end-points within vehicle 1200 or other systems in which the IDPS 122 is deployed. Examples of the end-points include the client device 104 A, the client device 104B, the client device 104C, the network switch 112, the TCU 110, the sensors 106, the ECUs 108, the guest devices 120, and/or other network devices. The operating environment 100 is shown as including one or more communication channels 150 and one or more communication channels 152, which may facilitate network communications between the network devices within the vehicle 1200.The communication channel 152 may correspond to a vehicle bus network, such as Controller Area Network (CAN), Local Interconnect Network (LIN), FlexRay, or an Ethernet Audio Video Bridging (eAVB). The vehicle bus network may be a specialized internal communications network that interconnects components inside the vehicle 1200 that are used for vehicle control (e.g., driving and/or safety systems). The vehicle bus network may accommodate for such special requirements for vehicle control as assurance of message delivery, of non-conflicting messages, of minimum time of delivery, of Electromagnetic Field (EMF) noise resilience, and of redundant routing.As shown, the vehicle bus network may include the ECUs 108, the sensors 106, one or more of the guest devices 120, and the TCU 110. Examples of the ECUs 108 include one or more actuator ECUs, such as those used to control the brake actuators 1248, the steering actuators 1256, and/or other actuators used by any of the various driving and/or safety systems of the vehicle 1200. Examples of the sensors 106 include global navigation satellite systems sensor(s) 1258 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1260, ultrasonic sensor(s) 1262, LIDAR sensor(s) 1264, inertial measurement unit (IMU) sensor(s) 1266 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1296, stereo camera(s) 1268, wide-view camera(s) 1270 (e.g., fisheye cameras), infrared camera(s) 1272, surround camera(s) 1274 (e.g., 360 degree cameras), long-range and/or mid range camera(s) 1298, speed sensor(s) 1244 (e.g., for measuring the speed of the vehicle 1200), vibration sensor(s) 1242, steering sensor(s) 1240, brake sensor(s) (e.g., as part of the brake sensor system 1246), and/or other sensor types used by any of the various driving and/or safety systems of the vehicle 1200.The communication channel 150 may correspond to an internet protocol (IP) network, such as an Ethernet network. The IP network(s) may interconnect components inside the vehicle 1200 that are used for In-Vehicle Infotainment (IVI) systems, Advanced Driver Assistance Systems (ADAS), dashboards, or head units of a vehicle. 
As shown, the IP network may include the client device 104 A, the client device 104B, the client device 104C, one or more of the guest devices 120, the network switch 112, and the TCU 110.The client device 104 A, the client device 104B, and the client device 104C may, for example, be thin clients controlled by one or more of the guest devices 120 over the communication channel 150 and/or by one or more external entities over the external network(s) 114. As an example, the client device 104A may include a cluster client, such as for displaying an electronic instrument cluster and/or digital instrument panel of the vehicle 1200. Examples of information that may be displayed include speed levels, gas levels, power levels, battery levels, notifications, autopilot information, driver assist information, etc. The client device 104B may include an IVI client, such as for audio, video, music, phone, etc. The client device 104C may include a Heads Up Display (HUD) client, such as for displaying autopilot and/or driver assist HUDs. These are just some examples of potential functionalities of the client devices and any number of client devices may be used for any combination of the various functionalities.The TCU 110 may be an ECU that controls communications between the vehicle 1200 and one or more external entities, such as over one or more external networks 114. The TCU 110 may support communications between the vehicle 1200 and the external entities via any of a variety of interconnection interfaces, examples of which include Near-Field Communication (NFC), Vehicle-to-everything (V2X), Car2Car, Cellular, Wireless Fidelity (Wi-Fi), Ethernet, Universal Serial Bus (USB), and Bluetooth (BT). As examples, the TCU may provide for wireless tracking and diagnostics of the vehicle 1200, consumer device integration (e.g., smartphone or tablet), or vehi cl e-to- vehicle communications.In various embodiments, the external networks 114 may comprise an IP network, such as the internet. For example, the TCU 110 may provide internet connectivity to one or more of the client devices 104A, the client devices 104B, the client devices 104C, and/or the guest devices 120 over the communication channels 150.The guest devices 120 may be used to control the IVI systems, ADAS, dashboards, or head units of the vehicle 1200, which may involve controlling one or more of the client devices 104A, 104B, or 104C. As shown, each of the guest devices 120 may include a connection to the communication channel(s) 150 and/or the communication channel(s) 152. For example, a guest device 120 that is used to power Artificial Intelligence (Al)-assisted vehicle control systems used in autonomous vehicle (AV) systems and ADAS of the vehicle 1200 may include connections to both the communication channel(s) 150 and/or the communication channel(s) 152. The communication channel(s) 150 may provide the guest devices 120 with internet connectivity (via the TCU 110), for example, to download High Definition maps used for self- driving features of the vehicle 1200, software updates, media for streaming, and more. The communication channel(s) 152 may provide the guest devices 120 with access to information from the ECUs 108 and the sensors 106, as well as with the ability to control and/or drive the vehicle via the ECUs.As indicated in FIG. 
1, the external networks 114 over the interconnection interfaces of the vehicle 1200 (e.g., via the TCU 110) may provide a malicious party with access to the communication channel 150 and/or the communication channel 152. For example, a malicious party may attempt to breach the TCU 110 and one or more of the guest devices 120 to control devices, attack devices (e.g., via Distributed Denial-of-Service (DDoS) attacks), or otherwise interfere with the operation of the various end-points of the network(s).The IDPS 122 may protect the various components of vehicle 1200 from one another and from external entities. To this effect, the IDPS 122 may include a threat detector 130, a threat manager 132, a packet analyzer 134, a cryptography engine 136, a notifier 138, a mode selector 140, a logger 1342, a filter 144, and an interface manager 146. As an overview, the interface manager 146 may be configured to manage communications between components, such as between the guest devices 120, and/or the guest devices 120 and entities external to the virtualized environment 102. The threat detector 130 may be configured to monitor the communications over one or more of the communication channels 150 and/or the communication channel 152 for potential threats, and the threat manager 132 is configured to implement responses to the monitoring.To monitor communications, the threat detector 130 may use the packet analyzer 134 and/or the cryptography engine 136. The packet analyzer 134 may be configured to analyze data representative of the communications, and the cryptography engine 136 may be used to decrypt the data for the analysis and/or encrypt the communications for transmission. The cryptography engine 136 may also be used to encrypt and/or decrypt other data, such as configuration files, which may include user configurable threat profiles having detection settings on how the threat detector 130 detects threats (e.g., by threat type) and/or response settings (for executing a security response which may include one or more remedial actions) on how the threat manager 132 responds to detected threats (e.g., by threat type).The packet analyzer 134 may employ Transport Layer Security (TLS) inspection for encrypted data traffic. TLS inspection may be used to protect against the improper use of encrypted communications between the virtualized environment 102 (e.g., the embedded system) and the external world. For example, a malicious party may perform an attack using TLS or another type of encrypted connection. Conventional security appliances are unable to examine the data that is encrypted. To enable examination of such encrypted data, the package analyzer 134 may include a TLS inspector (as a TLS middleman) to open up the data so the threat detector 130 may ensure it is not being used as an attack vector. For example, the TLS inspector may allow the threat detector 130 to analyze the encrypted packets and ensure they are correctly formed, as an incorrectly formed packet may be used to exploit vulnerabilities in TLS implementations.Deep Packet Inspection (DPI) and Anti-Malware may be used to protect against anomalies in any of the communication channels shall anomalies overcome the boundary countermeasures. Cryptography may be used to prevent loss of privacy or confidentiality in sensitive data exchanged in the system and between the system and the external world.The threat detector 130 may detect threats using any suitable approach. 
The threat detector 130 may enforce authorization, authentication, and/or entitlement policies over the communication channels. In some examples, the threat detector 130 uses the packet analyzer 134 to apply one or more security models to detect threats. Examples include anomaly detection models, malware detection models, frequency of occurrence models, message pattern models, machine learning models, and/or other security models. For example, the threat detector 130 may use machine learning techniques in order to detect one or more security events including message patterns, message frequency and traffic that are usually encountered over one or more particular communication channels. In some embodiments, on egress, the threat detector may be used to implement a Virtual Local Area Network (VLAN) filter. VLANs may be used in the vehicle 1200 to separate out the internal networks. If there is compromised software in one Guest OS 212, this may be used to prevent the Guest OS 212 from communication with a VLAN it should normally not communicate with. This may reduce the ability of a malicious part to use components to attack other components of the operating environment 100.For example, and without limitation, the machine learning model(s) described herein may include any type of machine learning model, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naive Bayes, k-nearest neighbor (Knn), K means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, Long/Short Term Memory (LSTM), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models. The output(s) of the machine learning model(s) may include a threat detection decision value (e.g., a confidence value or binary value on whether a threat is detected), a security event type or class, representative of or used to derive security description values of the CDR, and/or other output types. The input(s) of the machine learning model(s) may include one or more network communications (e.g., a current message being transmitted received, a sequence of messages, messages of one or more particular types, etc.) and/or data derived therefrom, examples of which are described herein with respect to a Content Data Record (CDR) and the logger 142, which are examples of sources of the information (e.g., the security description). The machine learning models may in some examples be trained using data from the connection channel and/or connection channel type on which it is deployed. For example, anomaly detection over a connection may be based on historical data packets over that connection to capture what is typical traffic for that connection, or the historical data packets may be for all connections that leave the virtualized environment 102 and/or those between the guest OSes 212As some non-limiting examples, the threat detector 130 may perform anomaly detection by packet inspection with a Neural Network based detector (e.g., for IP -based communications). For anti-Malware, the threat detector 130 may also perform malware detection by packet inspection with Neural Network based detector (e.g., for IP -based communications). The threat detector 130 may be used to enforce cryptographic policies according to threat profiles. 
For example, if an entity tries to open an unauthenticated connection to a server the threat detector 130 may detect this security event and the threat manager 132 may block the connection so that only authenticated connections may be permitted. As a further example, the threat detector 130 may determine whether traffic is encrypted when it is supposed to be. The threat detector 130 may ensure that a packet is encrypted using an authorized cryptographic policy. For example, on an internal CAN bus, the threat detector 130 may be used to enforce that for CAN, all messages have to use SecOC. If the threat detector 130 detects that a packet does not have a cryptographic tag attached to it or cryptographic tag does not verify, a security event may be detected and remedial action performed.To respond to detected threats, the threat manager 132 may use the notifier 138, the mode selector 140, the logger 142, and/or the filter 144. The notifier 138 may be configured to notify services in the operating environment 100 of detected threats. The services may be on any combination of the guest devices 120, the IDPS 122, the network switch 112, the TCU 110, the ECUs 108, or the client devices 104A, 104B, 104C. The logger 142 may be configured to log detected threats (e.g., using Common Criteria principles), and/or other data related to the communications. The filter 144 may be configured to filter one or more of the communications based on the detected threats.The IDPS 122 may monitor communications over the communication channel(s) 150 and/or the communication channel(s) 152 on egress from the guest devices 120 and/or on ingress to the guest devices 120. Further, where multiple guest devices are employed, the IDPS 122 may monitor communications over the communication channel(s) 150 and/or the communication channel(s) 152 between different ones of the guest devices 120. Thus, IDPS 122 may act to protect the operating environment 100 as a whole, including the components and interconnections thereof.The IDPS 122 may also be responsible for verifying the run-time integrity of Guest OSes running on the guest devices 120 and for centralized secure collection and secure storage of security events logs for audit purposes (e.g., provided by the logger 142). The IDPS 122 may be responsible for maintaining its own backup so to restore a secure and safe image in case of failure. Further, the IDPS 122 may be configured to connect to a remote server for audit, as well as for updates and maintenance purposes (e.g., over the communication channels 150).Referring now to FIG. 2A, FIG. 2A is a block view of an example layered architecture 200A that may be used to implement the IDPS 122, in accordance with some embodiments of the present disclosure. The layered architecture 200A includes the virtualized environment(s) 102, one or more integrated circuits (ICs) 204, one or more printed circuit boards (PCBs) 206, and one or more hardware communication channels 210. As an example, a single virtualized environment 102, a single IC 204, and a single PCB may be used to implement the IDPS 122. For example, the virtualized environment 102 may be on such an embedded system that may include, but is not limited to, a System-on-a-Chip (SoC). In embodiments, the IDPS 122 may be implemented on one or more SoCs 1204 (FIG. 
12C) and/or GPU(s).The IC 204 may be separate from the network switch 112, the TCU 110, the client devices 104A, 104, and 104C, the sensors 106, the ECUs 108, and/or other physical devices outside of the virtualized environment 102. In other examples, one or more portions of the virtualized environment s) 102 may be at least partially integrated in or distributed across any combination of those devices. For example, the IDPS 122 may be integrated into a TCU 110 in some embodiments. The hardware communication channel(s) 210 may refer to physical portions of the communication channel(s) 150 and/or communication channel(s) 152 that are external to the embedded system and/or the virtualized environment 102 (e.g., an external Ethernet link and an external vehicle bus).As shown, the virtualized environment(s) 102 may be managed by one or more hypervisors 220. The hypervisor 220 may run the IDPS 122, one or more guest OSes 212, and other virtualization services 214. Examples of the guest OSes include deployments of Linux, Android, GENIVI, QNX, etc. Each guest OS 212 may correspond to a respective one of the guest devices 120 of FIG. 1. As a specific example, one of the guest OSes 212 IVI, another to clusters, another to ETUDs, and yet another to ADAS and/or autonomous driving. Functionalities that do not require access to a vehicle bus may be separated out from other functionalities to further protect the operating environment 100. For example, functionalities that do not require access to the vehicle bus may be implemented on guest OSes that do not include a connection to the communication channel 152 (although information corresponding to sensor readings and the like may be received from a guest OS that does). Examples of the other virtualization services 214 include storage virtualization, ETniversal Serial Bus (ETSB) virtualization, etc. The hypervisor 220 may support the virtualized components with isolated execution environments on partitions of the virtualized environment 102. Each partition may correspond to a virtual machine and have a dedicated virtual address space. By implementing components on different partitions, if a component is breached, components on different partitions may still be protected. For example, because of the isolated execution environments, a breached component may not be used to execute malicious code on a component on a different partition.In some embodiments, each guest OS 212 may be on a different partition of the partitions supported by the hypervisor 220. The IDPS 122 may be implemented on one or more partitions that are different than the partitions used for the guest OSes 212. Thus, if a guest OS 212 is breached, the IDPS 122 may still act to protect the operating environment 100. The other virtualization services 214 may be on one or more partitions that are different than the IDPS 122 and the guest OSes 212 or may be integrated into those components. These are some examples of how the IDPS 122, the guest OSes 212, and the other virtualization services may be distributed across partitions supported by the hypervisor 220 and other examples are described herein. According to some embodiments, the IDPS 122 may be implemented as a Virtualized Security Appliance positioned above the guest OSes 212 in the chain of trust of the certificate chain of the virtualized environment 102, such as right after the Hypervisor 220. 
Using this approach may further protect the operational environment from breaches of the potential more vulnerable guest OSes 212.Referring now to FIG. 2B, FIG. 2B is a block view of example layered architectures 200B that may be used to implement the IDPS 122 with a focus on network interface virtualization, in accordance with some embodiments of the present disclosure. As indicated in FIG. 2B, the IDPS may in some embodiments support any number virtualized network interfaces (e.g., of the interface manager 146 of FIG. 1) which may allow the guest OSes 212 to share a hardware network interface. For example, FIG. 2B shows a guest OS 212A and a guest OS 212N, which may be included in the Guest OSes 212 of FIG. 2A. The guest OSes 212A and 212N may include a virtualized network interface 224 A to a hardware network interface 226A, which may be IP network interfaces, such as Ethernet interfaces. The hardware network interface 226 A provides access to a hardware communication channel 210A, which may correspond to physical portions of the communication channel(s) 150 that are external to the embedded system and/or the virtualized environment 102 (e.g., an external Ethernet link). Similarly, the guest OS 212A and the guest OS 212N may include a virtualized network interface 224N to a hardware network interface 226N, which may be vehicle bus network interfaces, such as CAN interfaces. The hardware network interface 226N provides access to a hardware communication channel 21 ON, which may correspond to physical portions of the communication channel(s) 152 that are external to the embedded system and/or the virtualized environment 102 (e.g., an external CAN bus).Each virtualized network interface may include any combination of drivers, virtual network devices, virtual network components, virtual network cards, and/or virtual network links or connections, which may be particular to the type of network being virtualized. Examples include virtual routers, switches, controllers, transceivers, bridges, ports, wires, links, busses, etc.By virtualizing network interfaces, any number of guest OSes may communicate with one another over the network, as if they were separate end-points on the network. Additionally or alternatively, the guest OSes may share the same physical hardware, such as a physical port to the hardware communication channel. For example, the guest OS 212A is shown as including one or more communication channels 250A and the guest OS 212A is shown as including one or more communication channels 250N, which may correspond to virtualized portions of the communication channels 150 of FIG. 1 that are dedicated to the respective guest OSes. The guest OS 212A and the guest OS 212N may communicate over the communication channels 250A and 250N or with external devices using the communication channel(s) 250A and the communication channel(s) 250N respectively. Similarly, the guest OS 212A is shown as including one or more communication channels 252A and the guest OS 212A is shown as including one or more communication channels 252N, which may correspond to virtualized portions of the communication channels 152 of FIG. 1 that are dedicated to the respective guest OSes. 
The guest OS 212A and the guest OS 212N may communicate over the communication channels 252A and 252N or with external devices using the communication channel(s) 252A and the communication channel(s) 252N respectively.Virtualizing the network interfaces may provide for the Guest OSes 212 not being aware of one or more components of the IDPS 122. This may be accomplished by using virtualized drivers that look like normal hardware drivers to the higher layers of the Guest OSes. Moreover, multiple guest OSes may access the same single peripheral without knowledge of each other. In embodiments without virtualization of drivers, the hardware drivers in the Guest OSes may communicate directly to the hardware devices and other peripherals. In embodiment with virtualization, the drivers of the guest OS may not communicate to the hardware directly, but to virtual hardware, which sits in the virtualization domain (managed by the hypervisor 220), and which in turn communicates to the physical hardware. This result in an abstraction level between OS drivers and actual hardware.In some embodiments, the virtual drivers may be para-virtualized drivers that the guest OSes use to communicate with the outside world and other guest OSes. Paravirtualization may introduce an additional layer of abstraction (virtualization). The Guest OS drivers may directly communicate to what they think is the hardware, but the communications may be intercepted by the IDPS 122. The IDPS 122 may in turn use the virtualized network interfaces 224A to interface with the hardware.The virtualized network interfaces may further provide for paravirtualization of peripherals to create a Virtual Local Network (VLN) for interconnection between the Guest OSes and between the Guest OSes and peripherals. Paravirtualization of peripherals may allow each of the interfaces to be isolated, limiting attacks from penetrating to other sub-systems and affecting larger portions of the virtualized environment 102.Referring now to FIG. 2C, FIG. 2C is a block view of an example layered architecture 200C that may be used to implement the IDPS 122 with a focus on a security manager 232 and a communications manager 234 of the IDPS 122, in accordance with some embodiments of the present disclosure. The IDPS 122 may include the security manager 232 and/or the communications manager 234. The communications manager 234 may include the virtualized network interfaces of the IDPS 122, such as the virtualized network interface 224 A and the virtualized network interface 224N of FIG. 2B. Further, the security manager 232 may sit between the guest OSes and the communications manager 234 on the communication channels 250 A, 25 ON, 252A, and/or 252N.The communications manager 234 may be responsible for managing communications between hardware peripherals, and the security manager 232 may be responsible for monitoring the communications for potential threats and to enact the appropriate policy once a threat is detected.The communications manager 234 may be implemented as a VM running (for example) an embedded, secure operating system. The communications that may be managed include communications between SoCs and/or between sub-systems of the same SoC. The communications manager 234 may be able to submit and receive fully formed frames (formed by upper layers) - such as Ethernet and/or CAN frames - to the network (e.g., Ethernet or CAN) infrastructure in the vehicle 1200 used by the embedded system. 
The communications manager 234 may also enforce bandwidth and latency guarantees needed by upper level protocols at the VMs (e.g., Guest OSes), with each VM being responsible for submitting frames from a virtualized driver.The communications manager 234 may provide network perimeter security features, such as (without limitation): distributed denial-of-service (DDoS) resistance, traffic filtering, stateless firewall, and restricted cone connection management. The Communications manager can also be implemented with infrastructure programming features, for example and without limitation: switch configuration, traffic shaping, traffic class arbiter programming, and Virtual Local Area Network (VLAN) filtering.The security manager 232 may inspect all traffic between the communications manager 234 and the Guest OSes 212. To do so, the security manager 232 may include one or more instances of the threat detector 130, the threat manager 132, the packet analyzer 134, the cryptography engine 136, the notifier 138, the mode selector 140, the logger 1342, and the filter 144. A configuration file may be used to configure the detection and policies for the traffic. Once an attack is detected by the threat detector 130, the security manager 232 may be responsible for enacting the configured policy for that attack using the threat manager 132. This may be used, for example, to implement one or more stateful firewalls. The security manager 232 may include, for example, a security Application Programming Interface (API) 240, a driver API 242, and/or a communications processor 244. The security API 240 may provide an interface to security tools and services. The driver API 242 may provide an interface to drivers of the virtualized network interface(s) 224A and/or 224N which may be included in the communications manager 234. Further, the communications processor 244 may manage the processing of communications as they are received and transmitted by the security manager 232, such as by an instance of the threat detector 130. To manage the processing of the communications, the communications processor 244 may use a state machine. The security manager 232 may be implemented using a number of threads. Each thread may be responsible for processing one communications at a time. The security manager 232 may run multiple threads simultaneously and may receive a notification message of a new communication via a FIFO buffer. The notification message may include a header and a pointer to the payload data of the communication. Once the notification message has been collected from the FIFO buffer, the next thread may then be able to poll the FIFO buffer for a notification message. Which thread gets to access the FIFO buffer may be controlled by a MUTual Exclusion object (MUTEX) implementation to prevent multiple threads from accessing the same data. FIG. 3 is used to describe an example of communications processing an instance of the security manager 232 may perform (e.g., in either direction).Like the communications manager 234, the security manager 232 may be implemented as a VM executing a secure, embedded system-specific operating system. In some embodiments the communications manager 234 and the security manager 232 are on separate VMs on separate partitions supported by the hypervisor 220. Thus, if the communications manager 234 becomes compromised, such as via a network stack of the virtualized network interfaces 224A, the security manager 232 may remain uncompromised to secure the operating environment 100. 
In other embodiments, the security manager 232 and/or the communications manager 234 may share a VM and/or partition. As further examples, the security manager 232 or the communications manager 234 could be incorporated into one or more of the guest OSes 212 and/or share a partition with a guest OS 212. For example, functionality of the security manager 232 may be included in one or more of the guest OSes 212 rather than the IDPS 122.The security manager 232 and/or the communications manager 234 may be used to support multiple network protocols, such as Ethernet and CAN, or multiple implementation of the security manager 232 and/or the communications manager 234 may be provided that are dedicated to one or more particular network protocols. For example, one implementation of the communications manager 234 may include the virtualized network interface 224A and another may support the virtualized network interface 224N. Thus, where multiple implementations are used for a component, each component may be customized to the supported network protocol(s) and/or other requirements such as traffic characteristics. Also where multiple implementations are used, each may be on a separate partition and/or VM, or one or more may share a partition with another implementation and/or virtual component described herein.Approaches described herein may provide for separation of domains (Security and Communications), as well as flexibility and testability. For example, in accordance with disclosed embodiments, the security manager 232 may be completely removed from the virtualized environment 102, without having to modify the communications manager 234. Also, in some embodiments, the security manager 232 may be provided without the communications manager 234 (e.g., virtualized network interfaces and/or drivers may be used). Additionally, while the guest OSes 212 have been described as being within the virtualized environment 102, one or more of the guest OSes 212 may be implemented as a separate devices, and may not necessarily be a VM. Thus, for example, one or more of the guest devices 120 of FIG. 1 may be outside of the virtualized environment 102.A typical automotive platform such as the Drive AV software stack developed by NVIDIA Corporation may be used to supports one or more of the Guest OSes 212 using the hypervisor 220. Such a platform also provides a range of virtualized services for common functions. The security manager 232 and the communications manager 234 and may be two collections of these services.Referring now to FIG. 2D, FIG. 2D is a diagram illustrating security engines, which may be used to implement the IDPS 122, in accordance with some embodiments of the present disclosure. FIG. 2D shows that multiple instances of a security engine may be used to implement communication channels to a network 260. In the example shown, a communication channel 260A and 260N may correspond to the communication channel 250A and 250N respectively, with the network 260 being provided via the hardware communication channel 210A, or the communication channel 260 A and 260N may correspond to the communication channel 252A and 252N respectively, with the network 260 being provided via the hardware communication channel 21 ON.As shown, each communication channel for a guest OS may include a dedicated instance of a security engine, with a security engine 232A being provided for the guest OS 212A and a security engine 232N being provided for the guest OS 212A. 
Each security engine 232A through 232N for a communication channel may, for example, include the security API, the driver API, and the communications processor 244. Further, each security engine 232A through 232N for a communications interface type and/or channel may, for example, have a threat profile (e.g., individual and independent), having detection settings on how the threat detector 130 detects threats (e.g., by threat type) and/or response settings on how the threat manager 132 responds to detected threats (e.g., by threat type) using the security engine. The detection setting may specify or define which security models to apply. As indicated above, this information may at least partially be defined in configuration files. For example, the configuration files may be user defined configuration files. A user with respect to the virtualized environment 102 may refer to a user that configured or provided a corresponding guest OS for deployment in an operational environment and/or an Original Equipment Manufacturer (OEM) of the vehicle 1200. ETsing configuration files, the guest OS 212A and the guest OS 212N may have different threat profiles for the same physical interface and/or for different logical channels over the same physical interface.The configuration file(s) may be encrypted and signed at“factory/production time,” then written to the embedded system for deployment of the IDPS 122. The IDPS 122 may use the cryptography engine 136 to decrypt and read the configuration file(s) as part of the boot process to self-configure networking parameters and security policies (e.g., detection settings and response settings) for run-time. The configuration file may be updated while the IDPS 122 is deployed in the vehicle 1200, such as by pushing over-the-air (e.g., via the external network 114) a new file to the embedded system to replace the current one. As a result, the IDPS 122 may update its configuration according to the new configuration file.A threat profile may define parameters for various classes of threats. When the threat detector 130 detects a hard policy threat, the threat manager 132 may use the filter 144 to block the threat by default. The threat policy may also be configured to allow specific cases not to be blocked for a hard policy threat. For example, all incoming traffic may be blocked unless the source of that traffic was already communicated by a guest OS (and detected by the security engine), thus effectively acting as an incoming packet filter for unsolicited traffic. However, for IP -based traffic it is possible in embodiments to configure a specific IP port address to always be open in the configuration file. For threats of a normal threat detection class, a user may configure different policies depending on a type of threat detected. Examples of these policies may include, for example: ‘no action, let through.’A temporary threat detection class, may include detected attacks that happen for a period of time, such as DoS attacks. To this effect, the threat detector 130 may be configured to determine or detect whether a particular attack is ongoing and the threat manager 132 may act or refrain from acting based on that determination in accordance with the threat profile.In any example, the threat manager 132 may take one or more remedial actions based on threats detected by the threat detector 130. Any number of remedial actions may be defined for and/or taken for each detected attack and/or a combination of detected attacks. 
The remedial actions to be taken may be defined in a threat profile for particular threat classes and/or types. One example of a remedial action is to allow a communication that is detected to correspond to a threat. Another example is to block, or filter out the communication. In this case, the threat manager 132 may use the filter 144 to filter the communication. A further example includes logging an event, such as the detection of the communication and/or of an attack that happens for a period of time (e.g., DoS). Here, the threat manager 132 may use the logger 142 to log the detected event, such as with metadata describing the event and/or one or more associated communications. Examples include a timestamp, communication contents, data field values, the source of the communication, the threat type, etc.In some examples, a remedial action may include verifying the integrity of the Guest OS assigned to the communication channel and/or other components of the operating environment 100. For example, for a threat detected using the security engine 232A, the threat manager 132 may initiate an integrity check of the guest OS 212A and/or a source or other conduit of one or more communications from which the threat detector 130 detected the threat (or other network components such as the TCET 110 or network routing devices). For example, the integrity check may be performed on the network interface itself, such as a virtualized or para-virtualized driver, a virtualized network interface, the communications manager 234, etc. A remedial action may also include the threat manager 132 using the mode selector 140 to change or set an operational mode of any combination of the aforementioned components (e.g., guest OSes, interfaces, the virtualized environment 102 itself, etc.). Examples include resetting (or rebooting) the component, disabling the component, blocking the component from communicating (e.g., the particular communication channel), resetting the component, blacklisting one or more components by one or more other components, and/or initiating a preconfigured safe mode of the component s). A further example or a remedial action includes notifying the component(s). For example, the threat manager 132 may notify any combination of the components, in which case the components may take any combination of the various remedial actions (e.g., they may be determined and/or implemented internally at the component level in contrast to at the level of the security manager 232). A notification message to a component may include any of the various information that the logger 142 may log and/or other information.Implementations of the threat detector(s) 130 and/or the threat managers 132 may be used by the security manager 232 and the communications manager 234 to each implement one or more stateless and/or stateful firewalls as well as countermeasures (remedial actions) to DoS attacks. A DoS attack may refer to a cyber-attack where an offender machine(s) makes a target host or network resource unavailable to its intended users by temporary or indefinitely disrupting services (e.g., internet services). These attacks may not always be avoided but can be mitigated with firewall strategies tailed to minimize the service disruptions on the target host. The firewall(s) described herein may be used for detection and recovery from at least the most common DoS attacks. 
For example, each security manager may include a stateful firewall (e.g., block an illegal sequence of messages over an HTTP connection) and the communications manager 234 may include a stateless firewall (e.g., block communications to/or from an unauthorized port). Firewalls and DoS countermeasures of the IDPS 122 may be used to protect the boundaries of each Virtual Network. Using the firewalls, the IDPS 122 may be able to intercept all communications between the guest OSes 212A through 212N and between the guest OSes 212A through 212N and any other peripheral component in the vehicle 1200. The IDPS 122 may be thus equipped with visibility of all data being exchanged within the virtualized environment 102 (e.g., an embedded system) and between the virtualized environment 102 and the external world.Now referring to FIG. 3, each block of a method 300, and other methods described herein, comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The methods may also be embodied as computer-usable instructions stored on computer storage media. The methods may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, the methods are described, by way of example, using particular components. However, these methods may additionally or alternatively be executed by any one system, or any combination of systems, including, but not limited to, those described herein.FIG. 3 is a flow diagram showing the method 300 the IDPS may use to process network communications for security threats, in accordance with some embodiments of the present disclosure. The method 300 may, for example, be performed by the communications processor 244 of FIG. 2C. The communications processor 244 may be implemented using any number of threads and each thread of the security engine 232A may perform the method 300 in parallel. The method 300, at block B302, includes waiting for a new message. For example, the thread may wait for the network communication from the FIFO buffer. This may correspond to an initialization state where each security service thread may wait for a new notification of a message. Each thread may then request a MUTEX of the FIFO buffer and access the new message. If the message retrieval is successful, the thread may then release the MUTEX and move to the block B304.The method 300, at block B304, includes reading an input message. For example, the thread may read the network communication. This may involve checking the integrity of the incoming message and accessing the payload data that the new message is associated with.The method 300, at block B306, includes populating the CDR. For example, the thread may populate a CDR for the network communication. This may include creating the CDR for the new message. The CDR may be kept and updated by the thread for as long as the message is being processed by the security services manager 232. The CDR may contain information that defines a security description of the message being inspected, e.g. 
(the following list is not exhaustive): a record type, a record identifier, a message identifier, a sequence identifier, a current state in a state machine (where the communications processor 244 handles messages using state machines), a data type, a message priority, a TSL/SSL message indicator, a timestamp, a message source, a message destination, a pointer to the message data payload, a size of the message data, an error field, and/or a policy scan result. It is noted that any combination of the information in the CDR may be logged by the logger 142 or used to derive logged information (e.g., as part of a remedial action). The packet analyzer 134 of FIG. 1 may analyze the message to generate one or more of the details in the CDR. For example, the packet analyzer 134 may extract at least some of the information from the message headers of the network communication. The method 300, at block B308, includes checking a threat profile. For example, using the CDR, the threat detector 130 may match the data type to a threat profile. The Threat profile may decide what type of actions should be taken for the specific message type.The method 300, at block B310, includes decrypting the message if needed. For example, the thread may use the cryptography engine 136 to decrypt the network communication if the network communication is encrypted (e.g., using a known key). As one example, a Secure Sockets Layer (SSL) proxy may be implemented to increase the level of messages that may be scanned by the thread, and by extension the overall security protection the security manager 232 may provide. The SSL proxy may allow the security manager 232 to act as a proxy between the user facing guest OS 212A and an SSL server during an SSL connection. This may allow the thread to run security scans on the data and then re-encrypt the data after it has been scanned.The method 300, at block B312, includes analyzing the message using one or more secure services. For example, the thread may use the threat detector 130 to launch one or more secure services. Each secure service may launch one or more scan requests depending on the threat profile. Scan requests may also be run by the threat detector 130 in parallel using multiple threads of secure services. The scans may be performed using the security API 240 of FIG. 2C to call on APIs in security applications running on the security manager 232. Examples of the secure services include the anti-malware, anomaly detection, and firewall filtering. Once the secure services have been completed, the threat detector 130 may collect and merge the data back into the main thread at block B314.The method 300, at block B316, includes processing the results per the threat profile. For example, the thread may use the threat detector 130 to process the results and determine whether the results correspond to a security event, and the threat manager 132 may use the threat profile to take one or more remedial actions based on the security event. As described herein, this may include whether the network communication should be allowed to continue or if other actions should be taken. These decisions may be based on the type of message that was processed and the associated threat profile.The method 300, at block B318, includes updating the CDR and/or logging results. 
For example, the thread may append information to the CDR and/or log results of the security services and/or records of one or more remedial actions taken by the threat manager 132 or security events detected by the thread detector 130. The logs may include statistics and scan results. All of the data may be encrypted by the cryptography engine 136 to prevent un authorized access.The method 300, at block B320, includes encrypting the message if needed. For example, the thread may use the cryptography engine 136 to encrypt the network communication if the network communication is unencrypted (e.g., using a known key).The method 300, at block B322, includes copying message data to transmit (TX) memory. For example, the thread may copy the network communication to transmit memory. This may allow the data to be copied to a location where it can be further accessed by other devices in the system for transmittal from the security engine 232A.The method 300, at block B322, includes writing an output message. For example, the thread may write an output message to the FIFO buffer to tell the driver who should get access to the network communication and what information should be passed on along with the network communication.Referring now to FIG. 4, FIG. 4 is a diagram illustrating an example of a process 400 of the threat manager 132 handling a cyber-attack that may occur over a period of time, in accordance with some embodiments of the present disclosure. The process 400 may, for example, correspond to a temporary threat detection class, such as DoS attacks and similar cyber-attacks.Cyber-attacks may originate internally, from a guest OS 212, or externally, such as via the TCU 110. The scale of a cyber-attack may depend on the type and frequency of the attack, which may lead to a system crash or to the unavailability of the system network services for a short or prolonged period. Countermeasures vary and, in the worst case, may involve the threat manager 132 performing the remedial action of a system reboot using the mode selector 140. The threat detector 130 may detect a cyber-attack based on measurements over a“Sense” period 402, which may be expressed as an integer number of a slot duration. The slot duration may refer to an internal parameter and define a duration for a measurement period (e.g., a minimum duration). The detection/determination of a cyber-attack may be performed at the end of the“Sense” period 402. Measurements and/or detected cyber-attacks may correspond to detected security events, and the process 400 may use the method 300 or other suitable method for this purpose. In case of a positive detection by the threat detector 130, an“Escalate” period 404A, 404B, or 404C may be started, and a recovery process (e.g., including one or more remedial actions) may be triggered for the type of attack using the threat manager 132. As indicated in FIG. 4, a response to a cyber-attack (e.g., a DoS attack or other type of attack that occurs over a period of time) may be performed over multiple, escalation levels Ll, L2, and L3. A three-level implementation is shown in FIG. 4, but any number of levels may be used. The escalation level Ll may be a low level, the escalation level L2 may be a medium level, and the escalation level L3 may be a high level. Each escalation level Ll, L2, or L3 may involve different remedial actions applied over one or more periods. 
At the end of the “Escalate” period 404 A, 404B, or 404C, the threat detector 130 may perform a check to assure the remedial actions for the current level have been successful. In case of failure, the threat detector 130 may raise the escalation level to the next level and further actions may be taken by the threat manager 132. In case of success, a“Wait” period 406A, 406B, or 406C may be started and the current remedial actions in place may be temporary removed by the threat manager 132 so the threat detector 130 may determine whether the system is still under attack. At the end of the“Wait” period 406A, 406B, or 406C, the escalation level Ll, L2, or L3 may be de-escalated by the threat detector 130 if the threat detector 130 determines the system is safe, and otherwise the remedial actions may be reinstated using the threat manager 132.Remedial actions for higher escalation levels (e.g., escalation level L3 or greater) may include the threat manager 132 requiring an action from a driver of the vehicle 1200 such as to stop the vehicle 1200 and/or power OFF or reboot the system. Further examples of remedial actions that may be used in the case of a cyber-attack and/or other detected security event is activating and/or modifying Quality of Service (QoS) on one or more of the communication channels 260A and/or 260N (e.g., to further limit network resources with an increased escalation level). The total bandwidth allocated for a connection-oriented and connectionless protocol may be artificially reduced to avoid any impact on other virtual communication channels. The QoS may be used to limit network resources available to the communication channel under attack once a DoS or other cyber-attack is detected, which may remove the risk of system resources being abused. The limit may be removed on attack de-escalation and the communication channel functionality may be restored back to normal. With this strategy any DoS flood attack may manifest itself as an increased latency of network activities targeted to the affected guest OS 212 without interruption of trusted connections. Meanwhile the full functionality of the other communication channels may be preserved. FIGs. 5D and 6 describe examples of approaches which may be used to implemented QoS-based remedial actions.Examples of the IDPS for IP-based Connections FIGs. 5A-5D and 6 provide examples of aspects which may be incorporated into the IDPS 122 for IP -based Connections. It is noted that not all of these aspects are required and that Ethernet-based IP connections are provided as a particular example, with other types of connections (IP -based or otherwise) being within the scope of the description whenever appropriate.Referring now to FIG. 5A, FIG. 5A is a diagram 500A illustrating examples of networking components that may be used to implement the IDPS 122, in accordance with some embodiments of the present disclosure. As shown, each security engine may include a firewall. For example, the security engine 232A includes the firewall 510A and the security engine 232N includes the firewall 51 ON. 
Further, the communications manager 234 may include a firewall 512, a Network Address Translation (NAT) gateway 514, and an Ethernet interface 516 (e.g., a virtualized interface).One or more implementations of the threat detector 130 and/or the threat manager 132 may correspond to the detection features of the firewalls 510A, 510N, or 512, and the NAT gateway 514, and those components may perform any of the various functionalities of the threat detector 130 and the threat manager 132 described herein. Further, one or more implementations of the threat manager 132 may correspond the response features of the firewalls 510A, 510N, or 512, and the NAT gateway 514. For example, where the security manager 232 and the communications manager 234 are in separate VMs, each VM may have one or more dedicated and/or customized threat detectors 130 and/or threat managers 132.In this example, the communications manager 234 may be implemented as a VM, and may serve as the central gateway for all communication services, having ownership of the Ethernet interface driver of the Ethernet interface 516. The guest OSes 212A through 212N may share a single Ethernet interface via the NAT gateway 514. The NAT gateway 514 may define the boundaries between the internal and external network. A virtualized network interface may be assigned to each of the guest OSes 212A through 212N and communications with external networks may pass through the partitions comprising the security manager 232 and the communications manager 234.According to some embodiments, Ethernet communications in the virtualized environment 102 may operate as a simulated multi-port enterprise class switch (e.g., in the Ethernet interface 516). Each guest OS 212A (e.g., VM) may be provided a port into the emulated switch environment, which is then connected to the physical Ethernet environment hosting the network 260. The Ethernet interface 516 may also enforce traffic bandwidth and latency guarantees as a physical Ethernet switch does. The combined security manager 232 and the communications manager 234 may operate on traffic at Ll - L4 of the Open Systems Interconnection (OSI) networking model.Referring now to FIG. 5B, FIG. 5B is a diagram 500B illustrating an example of the NAT gateway 514 of the IDPS 122, in accordance with some embodiments of the present disclosure. The diagram 500B includes a DHCP server 520, a DNS server 522, and a remote host 524. Each of the guest OSes 212A through 212N may be considered a node of an internal network with a pre-allocated private IP address. Access to the external network may be through the communications manager 234, which dynamically retrieves the IP address for the platform physical Ethernet interface (e.g., corresponding to the hardware network interface 226A of FIG. 2B) from the external DHCP server 520. The NAT gateway 514 may use a map function to translate network address information (e.g., an IP address and port number) between a private and a public domain and may allow nodes in the private network to share the platform physical Ethernet interface. The configuration presented in FIG. 5B may support more complex multi-OS network topology scenarios than what is shown.The NAT gateway 514 may implement restricted cone connection management. For example, the NAT gateway 514 may be used by the communications manager 234 to apply a reverse cone NAT (which may also be referred to as a restricted Cone NAT) to the Ethernet interface 516. 
From the perspective of each of the guest OSes 212A through 212N, its communication channel may appear as a dedicated network connection with the NAT gateway 514 connecting to a dynamic IP address. The reverse cone NAT may only allow an inbound connection to a port to which an outbound connection has already been established. This prevents an inbound connection from an IP address to a guest OS behind the NAT gateway 514 unless the guest OS has first sent a packet to the IP address. Thus, the communications manager 234 may have additional security against inbound communications over the Ethernet interface 516.Referring now to FIG. 5C, FIG. 5C is a diagram 500C illustrating an example of components of a networking subsystem of the IDPS 122, in accordance with some embodiments of the present disclosure. The diagram 500C shows examples of virtual components connecting the guest OSes 212A through 212N to a driver interface 530. The virtual components for a connection may include multiple pairs of inter- VM communication buffers IVC , both inbound and outbound, as well as a bridge BRDG to the driver interface 530. The inter- VM communication buffers JVC may be scheduled by the hypervisor 220, allowing data to move across separate VMs.The driver interface 530 may correspond to the virtualized network interface 224 A of Fig. 2B and provide para-virtualized network interface drivers for the guest OSes 212A through 212N. As described herein, the paravirtualization may add an additional abstraction level to the virtualization, with the drivers of the guest OSes 212A through 212N communicating to what they think is the hardware, but the communications may be intercepted by the IDPS 122. The IDPS 122 may in turn use the driver interface 530 to interface with the hardware.The driver interface 530 may comprise a para- virtualized driver and/or router the security manager 232 and/or the guest OSes 212 use to communicate with the outside world. The driver/router may be responsible for both notifying the security manager 232 that a new message is ready to be scanned and forwarding the message to the next partition once the scan has been completed. In the case a threat is detected during the scan by the threat detector 130, the message may be subject to one or more decisions applied using the threat manager 132 according to the threat profiles (e.g., in a configurable policy table).The communications manager may, as non-limiting examples adhere to the following standards: 802.1 AS initialization and path measurement, 802. IX authorization database, 802.1Q VLAN enforcement for inbound and outbound traffic, 802.1Q traffic classification enforcement, and/or 802.1AE enforcement at chip boundary and platform boundaries.Referring now to FIG. 5D, FIG. 5D is a diagram 500D illustrating an example of a process of the security engine 232A determining Quality of Service (QoS) parameters, in accordance with some embodiments of the present disclosure. While the security engine 232A is shown, any of the various security engines herein may include similar components and be used in a similar process. The security engine 232A is shown with a QoS handler 540, an implementation of the threat manager 132, and a dispatcher engine 542. The dispatcher engine 542 includes a communication delayer 544, communication queues 546, and a communication dispatcher 548.The QoS handler 540 may be used to allocate network resources to the communication channels 250A of the guest OS 212A. 
Examples of network resources may include traffic bandwidth and latency guarantees for particular communication channels (e.g., a channel to the guest OS 212N, a channel to the client device 104A, a channel to an external network 114, etc.). The QoS handler 540 may allocate the network resources according to QoS configuration parameters 566. The QoS configuration parameters 566 may define relative priorities the QoS handler 540 uses for the communication channels 250A. The QoS handler 540 may implement any suitable protocol for QoS, or may use a custom solution. As a non-limiting example, a priority may be defined using a Class of Service (CoS) of 802.1Q.The QoS configuration parameters 566 may be defined by the configuration file for the guest OS 212A and/or security engine 232A described herein. For example, the QoS configuration parameters 566 may be provided by the threat manager 132 based on one or more security events being detected using the threat detector 130. Additionally or alternatively, the QoS configuration parameters 566 may have default settings which may be used when the threat detector 130 does not indicate an attack is occurring over the communication channels 250A. As a further example, the QoS handler 540 may normally be disabled, and activated by the threat manager 132 according to a threat profile. The QoS parameters may be configurable for each dedicated communication channel of a guest OS and/or IP protocol.As an example, the QoS handler 540 may - for each network communication 560 that is received by the security engine 232A - compute an appropriate delay 550 based on the current QoS configuration parameters 566. The delay 550 may be delivered to a dispatcher thread of the dispatch 542 along with the network communication 560, as shown. The communication delayer 544 may use the delay 550 such that the communication queues 546 and the communication dispatcher 548 deliver network communications over the communication channels 250A in a correct time order.Using this approach, the security manager 232 may limit network resources for one or more of the communication channels preserve network resources for other communication channels when the one or more communication channels are compromised. Further, by using a similar approach for each security engine, the guest OSes 212A through 212N may more effectively share a single hardware network interface 226A, even where some of the communication channels are under attack (e.g., a DoS attack).Now referring to FIG. 6, FIG. 6 is a flow diagram showing a method 600 for adjusting network resources of communication channels, in accordance with some embodiments of the present disclosure. The method 600, at block B602, includes receiving a network communication over a communication channel. For example, the security engine 232A may receive the network communication 560 of FIG. 5D over a communication channel of the communication channels 250A.The method 600, at block B604, includes analyzing the network communication to detect a security event. For example, the threat detector 130 may analyze the network communication 560 to detect a security event according to a threat profile. This may be performed in accordance with a process 500 of FIG. 5, by way of example.The method 600, at block B606, includes determining QoS parameters for the communication channel. 
For example, based on the security event being detected and the threat profile, the threat manager 132 may activate the QoS handler 540 and provide the QoS configuration parameters 566 (e.g., defined by the threat profile) to the QoS handler 540.The method 600, at block B608, includes applying the QoS parameters to the communication channel. For example, the QoS handler 540 may apply the QoS parameters to the communication channel, which may result in the delay 550 for the network communication 560 and/or one or more subsequent communications received over the communication channel.In addition to or instead of the QoS handler 540, the communications manager 234 may act on the communication channels 150 as a whole, in contrast to the guest OS dedicated approach of the security engines. In some embodiments, the communications manager 234 may act to consolidate the QoS parameters used by the security engines 232A through 232N to QoS parameters of the network 260, so that the relative network resources of the communication channels 150 flow from the shared hardware network interface 226A to the external infrastructure of the operating environment 100. For example, the communications manager may assign a QoS tag to each outbound network communication based on the guest sending the network communication. The format of the QoS tag may be according to the protocol of the external infrastructure (e.g., the network switch(es) 112), such as using a Class of Service (CoS) of 802.1Q. Thus, the infrastructure may analyze the QoS tag and handle the network communication accordingly.Examples of the IDPS for CAN-based ConnectionsFIGs. 7, 8, and 9 provide examples of aspects which may be incorporated into the IDPS 122 for CAN-based Connections. It is noted that not all of these aspects are required and that CAN-based connections are provided as a particular example, with other types of connections (Vehicle Bus-based or otherwise) being within the scope of the description whenever appropriate.Referring now to FIG. 7, FIG. 7 is a diagram illustrating examples of networking components that may be used to implement the IDPS 122, in accordance with some embodiments of the present disclosure. Communication channels for vehicle bus protocols, such as CAN, may be implemented using one or more internal vehicle buses. The internal vehicle buses may be virtual vehicle buses that appear to the Guest OSes 212A through 212N to be physical vehicle buses. To accomplish this, the security manager 232 and/or the communications manager 234 may include one or more virtual network components, such as controllers, transmitters/receivers, filters, etc.In the example shown, the communication channels 252A and 252N may be implemented using an internal CAN bus 752 A and/or an internal CAN over IP bus 752N. The internal CAN bus 752A and/or the internal CAN over IP bus 752N may be implemented in the virtualized environment 102 using one or more virtual components in the security engine 232A, the security engine 232N, and one or more virtualized network interfaces 724. The virtualized network interface 724 may provide a virtualized connection (e.g., a shared single connection) to one or more physical CAN interfaces 726, which in turn may provide access to a CAN bus(es) 710. The virtualized network interface 724 may correspond to the virtualized network interface 224N of FIG. 2B and the CAN bus(es) 710 may correspond to the hardware communication channel(s) 21 ON of FIG. 2B.In the example of FIG. 
7, the guest OSes 212A and 212N have CAN capability. However, as with other examples, the number of Guest OSes is for explanatory purposes only, as well as the number of guest OS with a particular type of network interface connectivity. The internal CAN bus 752A and the internal CAN over IP bus 752N link the Guest OSes 212A and 212N with the external CAN bus 710 that connects the ECUs 108, the TCU 110, and/or the sensors 106. The internal CAN bus 752A may be used when a CAN controller is shared by the guest OSes 212A and 212N and the internal CAN over IP bus 752N may be used when a “CAN over IP” (CAN/IP) solution is shared by the guest OSes 212A and 212N. For example, CAN over Ethernet may be used (e.g., when the IDPS 122 is connected to a microcontroller device operating as a CAN interface to/from an Ethernet transceiver).Similar to FIG. 5A, the security engine 232A may include a firewall 702A and the security engine 232N may include a firewall 702N. Although, the security engine 232A and the security engine 232N are shown in both FIG. 5A and FIG. 7, these may be represent different security engines that are, for example, dedicated to CAN-based communications in FIG. 7 and IP -based communications in FIG. 5A, or the same security engines may be used (and/or the same firewalls). The firewalls 702N may use implementations of the threat detector 130 and the threat manager 132 to apply threat profiles to the inbound/outbound CAN traffic to and from both the CAN controller interfaces or CAN over IP using approaches described herein (e.g., as in FIG. 3 or FIG. 4). The internal CAN bus 752A may be implemented using a broadcast transmit receive system according to a CAN protocol. The internal CAN over IP bus 752N may be implemented using IP messages, but the messages may be run through the same security engines 232A and 232N. For the internal CAN over IP bus 752N, the CAN interface 726 may include a CAN over IP transceiver that goes to the CAN bus 710, which may be separate from the hardware communication channel 210A and dedicated to CAN communications. For other types of vehicle bus protocols, other types of transceivers may be used, or a single transceiver may be capable of handling multiple vehicle bus protocols and busses.The virtualized network interface 724 may implement CAN virtualization, and as such, may include receive (RX) acceptance filters 704A and 704N, used in conventional CAN hardware only for receiving communications using a safe list of message IDs. Where the RX acceptance filters 704A and 704N are included, they may be disabled as filtering may be performed using the more robust firewalls 702A and 702N. Further, any RX acceptance filter on the CAN interface 726 may be disabled, as it is typically not customizable for each guest OS 212A and 212N.One or more implementations of the threat detector 130 and/or the threat manager 132 may correspond to the detection features of the firewalls 702A and 702N and/or the RX acceptance filters 704A and 704N, and those components may perform any of the various functionalities of the threat detector 130 and the threat manager 132 described herein.The firewalls 702A and 702N may use multi-stage processing where a first stage may filter against a message ID list and may use other mechanisms of analysis, such as message frequencies, patterns, anomaly detection, machine learning, etc. FIG. 8 is used to describe some examples of how the CAN messages may be processed.Now referring to FIG. 8, FIG. 
8 is a flow diagram showing a method 800 the IDPS may use to analyze CAN messages, in accordance with some embodiments of the present disclosure. The method 800 may, for example, be performed in conjunction with the method 300 of FIG. 3. The method 800 may be performed for a CAN message being transmitted by the security engine 232 A from the guest OS 212A to another guest OS or external component. However, the method 800 may similarly be performed on a CAN message being transmitted to the guest OS 212A. Also, while CAN messages are described with respect to FIG. 8, the method 800 may be performed on any suitable type of message, such as IP -based messages and/or other vehicle bus message types. The method 800, at block B802, includes receiving a CAN message. For example, the security engine 232A may receive the CAN message being transmitted from the guest OS 212A. Where the CAN message is being transmitted to the guest OS 212A, block B802 may include the RX acceptance filter 704A receiving the CAN message, or block B802 may include the security engine 232A receiving the CAN message in either case.The method 800, at block B804, includes comparing a message ID of the CAN message to a message ID list. For example, the threat detector 130 of the firewall 702A may compare the message ID of the CAN message to a list of message IDs that are to be allowed for transmittal from the guest OS 212A. This allow list may be implemented using a threat profile with the message IDs being configurable by the user. In contrast, when a message is being transmitted to the guest OS 212A, the RX acceptance filter 704A may compare the message ID of the CAN message to a list of message IDs that are to be blocked from receipt by the guest OS 212A. This block list may also be implemented by allowing the user to configure the RX acceptance filter 704A in CAN virtualization. In addition to or instead of the block list being implemented by the RX acceptance filter 704A, the block list may be implemented by the security engine 232A similar to the allow list. Further in some embodiments an allow list may instead be used for received messages and/or a block list may instead be used for transmitted messages.If, at block B804, the CAN man message is unpermitted according to the list(s), the method 800 may proceed to block B806, which includes performing a remedial action(s). For example, the threat manager 132 of the firewall 702A or the RX acceptance filter 704A may perform any suitable combination of the remedial actions described herein (e.g., according to the threat profile). This may include using the logger 142 to create an entry for the suspect CAN message. This may also include notifying the guest OS 212A about the detected policy violation using the notifier 138 (e.g., that the message ID was not allowed when the CAN message is blocked on the TX/RX path). Notifications that are provided may also be available in the logs. Further, remedial actions may include using the mode selector 140 to select one or more modes. This may include the safe mode of system operation to ensure rogue CAN messages do not impact the overall functioning of the system. In some examples, the safe mode may be activated on repeated detection of security threats, such as according to the process 400 of FIG. 4. If the CAN message is determined to be unpermitted, the message may be blocked. The blocking of the message may be performed using a message blocking circuit 902 of FIG. 
9 to block the CAN message on the CAN bus 710 and/or internally within the virtualized environment 102.If, at block B804, the CAN message is permitted according to the list(s), the method 800 may proceed to block B808, which includes analyzing the CAN message using a security model(s). For example, the threat detector 130 of the firewall 702A may analyze the CAN message using one or more security models. The security models may include anomaly detection models, frequency of occurrence models, message pattern models, machine learning models, and/or other security models described herein. In other examples, if, at block B804, the CAN message is unpermitted according to the list(s), the method 800 may still proceed to block B808, such as where the remedial actions do not include blocking the CAN message. In some embodiments, block B08 using machine learning techniques to match the Message ID with message patterns, message frequency and traffic that are usually encountered over the communication channel. This analysis may take into account the direction of the traffic.If, at block B808, the CAN message is permitted according to the analysis using the security models (e.g., the threat detector 130 determines the CAN message does not represent or is not part of a security threat/event), the method 800 may proceed to block B810, which includes allowing the CAN message. For example, the security engine 232A may allow the CAN message to proceed to its destination.If, at block B808, the CAN message is unpermitted according to the analysis using the security model(s) (e.g., the threat detector 130 determines the CAN message does represent or is part of a security threat/event), the method 800 may proceed to block B806, where the threat manager 132 may perform one or more remedial actions according to the threat profile (which may include blocking or allowing the CAN message).In non-limiting examples of the method 800, the threat detector 130 may check the message ID of each incoming/outgoing CAN message against pre-configured Message ID lists. The CAN message may be in a Normal or Extended Frame format with an 11 or 29 bit identifier. On a TX path, block B804 may be performed by the firewall 702A of the security manager 232 where the message ID is checked against a pre-configured allow list before being allowed to pass through to CAN Virtualization on the communications manager 234. This allow list may be used to block all messages by default unless specified in the allow list. If the message ID check fails, the method 800 may proceed to block B806, where one or more remedial actions may be performed using the threat manager 132, such as blocking the message. On the RX path, block B804 may be performed by the RX acceptance filter 704A of the communications manager 234 with the block list and an option to configure the block list using CAN Virtualization. The default behavior of the RX acceptance filter 704A may be to allow all traffic (e.g., to proceed to the security engine 232A) unless specified and the block list may be configured as an acceptance filtering mechanism that resides on the virtualized CAN controller and performed the block B806.In any example, a block list used by the firewall 702A and/or the RX acceptance filter 704A on receipt of a message may include the message ID of the guest OS 212A. 
Using this approach, each CAN Controller may be able to monitor its own message ID on the internal CAN bus 752A (and/or internal CAN over IP bus 752N) and perform remedial actions at block B806 if it finds an unauthorized instance of the message ID on the bus, which may indicate spoofing. For example, the remedial action may include using the message blocking circuit 902 of FIG. 9 to raise arbitration thereby corrupting the CRC on the undesired CAN message being currently transmitted by an attacker on the CAN bus 710. This may raise an error on the CAN bus 710 causing the frame to be ignored by the CAN devices on the CAN bus 710.Examples of Blocking CAN Messages on CAN BusesThe present disclosure provides for filtering a CAN message from a CAN bus, such that devices on the CAN bus do not act upon the CAN message. Interconnectedness and connectivity of the vehicle 1200 to the outside world result in significant security concerns. The CAN protocol was originally designed at a time when vehicles did not include the interconnect interfaces of modern vehicles, and as a result, CAN networks are known to be vulnerable to attacks. Once access is gained to a conventional CAN network, any unsolicited message may be sent on the CAN bus. Once this happens, not much can be done in conventional implementations to stop the rogue message from reaching the target ECU and causing mayhem. Since a CAN bus may carry actuation signals in vehicles, the severity of risks associated with hacking the CAN bus escalates dramatically in the case of a self-driving vehicle. Prior to the present disclosure, there were no known methods that allow for the ability to reject a packet from the CAN bus in real-time. In contrast, the present disclosure provides approaches to inspect a CAN message and block reception of the CAN message in real-time.Disclosed approaches may be used to enable a CAN device to detect a malicious communication over the CAN bus 710 and protect itself and other devices on the CAN bus 710 - a process that is not possible using conventional systems. Generally, disclosed approaches involve corrupting the CAN message on the CAN bus 710 so that the CAN message will be ignored by other components. This may be accomplished by raising arbitration when an invalid message ID is found on the CAN bus and corrupting the CRC field of the CAN message to raise an Error Flag on the CAN Bus.Disclosed approaches may be used with the IDPS 122, but are more generally applicable to any device and software connected to a CAN bus. In disclosed approaches, during the transmission of a CAN message on the CAN bus, and to filter the CAN message from the CAN bus, a message blocking circuit (e.g., the message blocking circuit 902 of FIG. 9) may analyze the message identifier (ID) of the CAN message to determine whether an unwanted CAN message is being transmitted. Using an interference subcircuit, the message blocking circuit may corrupt the CAN message on the CAN bus to prevent the CAN message from being used by other devices on the CAN bus. To do so, the interference circuit may raise arbitration during transmission of the CAN message, thereby corrupting the Cyclic Redundancy Check (CRC) on the CAN message. This prevents the devices from successfully reading the payload of the CAN message (e.g., because the devices will no longer recognize the CAN message as being valid).Referring now to FIG. 9, FIG. 
9 is a diagram illustrating an example of the message blocking circuit 902, in accordance with some embodiments of the present disclosure. The message blocking circuit 902 may include, for example, a bus message ID register 904, one or more reference message ID registers 906, one or more logic gates 908, and an interference subcircuit 910. The message blocking circuit 902 may also include additional components that are not shown.The bus message ID register 904 is configured to receive a message ID of a CAN message as it is being transmitted on the CAN bus 710. FIG. 9 shows CAN HI and CAN LO waveforms that correspond to a typical CAN message that may be transmitted on the CAN bus 710. FIG. 9 also shows CAN bus data 920 that may correspond to the CAN HI and CAN LO waveforms. The message ID may be received by the bus message ID register 904 from the arbitration field of the CAN bus data 920 as it is being transmitted on the CAN bus 710.The reference message ID register(s) 906 include a list of message IDs the message blocking circuit 902 may use to determine whether to block the CAN message from the CAN bus 710. The reference message ID register(s) 906 may be configurable by a host CPU 912 (e.g., 1206 of FIG. 12C), which may host the hypervisor 220 and/or the IDPS 122. However, the host CPU 912 may generally be any CPU as the message blocking circuit 902 may be implemented without implementing the IDPS 122 and/or independent or separate from the IDPS 122. As examples, the reference message ID register may include either a list of message IDs to allow on the CAN bus 710 or a list of message IDs to block from the CAN bus 710. More generally, the data stored in the reference message ID register(s) 906 may refer to data the message blocking circuit 902 may analyze (e.g., compare to the current message ID) to determine whether to block one or more CAN messages from the CAN bus 710. This analysis may be performed using the logic gate(s) 908.The logic gate 908 may receive and compare the message IDs from the reference message ID register(s) 906 and the bus message ID register 904. The logic gate 908 may further generate an output signal indicative of a result of the comparison. For example, one output signal (e.g., Hi or Low) may indicate the CAN message is to be blocked, and another output signal (e.g., Hi or Low) may indicate the CAN message is to be allowed. Where the reference message ID register(s) 906 corresponds to an allow list, the logic gate 908 may be configured to generate an output signal indicating the CAN message is to be blocked when the CAN message matches a message ID in the reference message ID register(s) 906, otherwise the output signal may indicate the CAN message is to be allowed. As another example, where the reference message ID register(s) 906 corresponds to a block list, the logic gate 908 may be configured to generate an output signal indicating the CAN message is to be blocked when the CAN message does not match any message ID in the reference message ID register(s) 906, otherwise the output signal may indicate the CAN message is to be allowed.The message blocking circuit 902 may be enabled or disabled using an Enable/Disable signal, such as by the host CPU 912. 
When the message blocking circuit 902 is disabled, the output signal may indicate otherwise the output signal may indicate CAN messages are to be allowed regardless of contents of the reference message ID registers 906 and the bus message ID register 904.The interference subcircuit 910 may be configured to, responsive to the output signal, perform corruption of the CAN message on the CAN bus 710. To do so, the interference subcircuit 910 may raise arbitration during transmission of the CAN message, thereby corrupting the Cyclic Redundancy Check (CRC) on the CAN message. This prevents the TCU 110, the ECUs 108, or other devices that may be on the CAN bus 710 from successfully reading the payload of the CAN message (e.g., because the devices will no longer recognize the CAN message as being valid).The message ID of the CAN message may be read from the CAN bus at time TID. Between time TID and time Tsstart, the logic gate 908 may generate the output signal used by the interference subcircuit 910 to trigger corruption of the CAN message. The corruption of the CAN message may be performed by the interference subcircuit 910 in a window of time from the time Ts start and time Tsstop. This window of time may be programmable, such as by the host CPU 912. The window of time and the time TID may be time synced to the CAN controller data frame in order to properly read the message ID and corrupt the CAN bus data 920. Generally, the interference subcircuit 910 may corrupt a CAN message by altering the CAN HI and/or CAN LO waveforms. This may include, for example, holding the CAN bus 710 high, holding the CAN bus 710 low, and/or alternating between high and low on the CAN bus 710, which may be programmable, such as by the host CPU 912. The corruption may be timed to corrupt the CRC field. While in other examples, the interference subcircuit 910 may corrupt the CAN message by altering the control field, the CRC field occurs later in a CAN message, providing more time for processing the message ID from the CAN bus 710.In some embodiments, the interference subcircuit 910 uses an arbitration mechanism (e.g., available in standard CAN interface implementations) when the output signal indicates an invalid CAN message is being transmitted on the CAN bus 710. Typically, implementations of CAN interfaces use the arbitration mechanism to determine the order in which components get to transmit data during a given period. Frames with the highest assigned identifier (lowest message ID) may get access to the CAN bus 710 without delay, and the other components having lower priority wait for their turn. Raising the arbitration mechanism when the output signal indicates an invalid CAN message is being transmitted on the CAN bus 710 may effectively corrupts the CRC field. This will raise an Error Flag on the CAN Bus 710 and the CAN message will hence be ignored by the ECUs 108, the TSU 110, and/or other devices on the CAN bus 710, preventing them from being affected by a potential attack.By using the message blocking circuit 902, the IDPS 122 or other software security solution may not only raise an alarm or take other software based actions, but also take preventive/corrective action when an anomaly is found on the CAN bus 710. In some embodiments, the IC(s) 204 of FIG. 2A may include a CAN hardware controller (e.g., the CAN interface 726), which may implement the message blocking circuit 902. 
For example, the CAN hardware controller may be in a partition of the communications manager 234 and configured with an open hardware filter that monitors the CAN messages on the CAN bus 710. The message blocking circuit 902 may be implemented as a filter that is configurable on the communications manager 234 with a complete list of all the message IDs that are allowed/possible for the CAN Ring. The list of message IDs may be configured by the host CPU 912 loading the message IDs into the reference message ID registers 906. When the first 11 bits (or 29 bits for Extended Frame) are read and an invalid message ID is detected using the logic gates 908, the CAN Controller may raise its arbitration and corrupt the CRC field of that frame. This will cause the target ECU of the CAN message to ignore the CAN frame, thereby protecting it from attacks. While particular examples are provided, the IDPS 122 may configure and implement the message blocking circuit 902 in other ways, such as with a block list and/or using the security manager 232 (e.g., a security engine may configure the reference message ID registers 906).Referring now to FIG. 10, FIG. 10 is a flow diagram showing a method 1000 for the message blocking circuit 902 to block a CAN message on the CAN bus 710, in accordance with some embodiments of the present disclosure. The method 1000, at block B 1002, includes receiving a message ID of a CAN message from a CAN bus. For example, the bus message ID register 904 may receive the message ID of a CAN message from the CAN bus 710.The method 1000, at block B1004, includes comparing the message ID of the CAN message to at least one reference message ID. For example, the logic gates 908 may compare the message ID of the CAN message from the bus message ID register 904 to one or more reference message IDs from the reference message ID register(s) 906.The method 1000, at block B 1006, includes generating an output signal indicative of a result of the comparison. For example, the logic gates 908 may provide an output signal to the interference subcircuit based on the comparison, which indicates a result of the comparison.The method 1000, at block B 1008, includes corrupting the CAN message on the CAN bus responsive to the output signal. For example, the interference subcircuit 910 may corrupt the CAN message on the CAN bus 710 when the output signal indicates the CAN message is an invalid CAN message. This may include raising arbitration and corrupting the CRC field from the time Tsstart to the time Tsstop.Referring now to FIG. 11, FIG. 11 is a flow diagram showing a method 1100 for using the message blocking circuit 902 to block a CAN message on the CAN bus 710, in accordance with some embodiments of the present disclosure.The method 1100, at block Bl 102, includes receiving at least a message ID of a CAN message from a CAN bus. For example, the RX acceptance filter 704A, the firewall 702A, and/or other software component of the IDPS 122 may receive a message ID of a CAN message from the CAN bus 710.The method 1100, at block B1104, includes analyzing at least a portion of the CAN message. For example, the RX acceptance filter 704A, the firewall 702A, and/or other software component of the IDPS 122 may analyze at least the message ID of the CAN message from the CAN bus 710. 
This may include using the threat detector 130 and/or the block list and/or allow list described herein to determine whether the CAN message corresponds to a security threat or event.The method 1100, at block B1106, includes determining to block one or more CAN message on the CAN bus based on the analyzing. For example, the component(s) of the IDPS 122 may determine to block the CAN message. Additionally or alternatively, the IDPS 122 may determine to block one or more messages that have the message ID. Additionally or alternatively, the IDPS 122 may determine a set of message IDs to block from the CAN bus 710 (e.g., those on a block list or those not on an allow list). Where multiple CAN messages are to be blocked, they may be blocked for a specified period of time or until some event occurs (e.g., system rest, safe mode deactivation, etc.). In some examples, the CAN message(s) to be blocked and/or the duration of blocking message may be defined by the threat profile, and the threat manager 132 may implement the blocking.The method 1100, at block B1108, includes transmitting data causing a message blocking circuit to corrupt the one or more message on the CAN bus. For example, the IDPS 122 may configure the message blocking circuit 902 to block one or more messages using the reference message ID register(s) 906 and/or the Enable/Disable signal. This may include using the host CPU 912 to add the message ID(s) determined from the block Bl 106 to the reference message ID register(s) 906 and/or removing the message ID(s) from the reference message ID register(s) 906 to cause the determined message(s) to be blocked. Where the blocking is to occur for a period of time or until an event occurs, the IDPS 122 may control the Enable/Disable signal so the blocking is deactivated after the period of time or event is detected. As another example, the period of time may be provided to and implemented on the message blocking circuit 902 (e.g., as a value). It is noted that the method 1100 may be performed using a message blocking circuit that is configured different than the message blocking circuit 902. Further, the comparison performed by the logic gate(s) 908 may be performed by the host CPU 912 or otherwise in software if it may be accomplished fast enough to block the designed message(s) using the interference subcircuit 910.FIG. 12A is an illustration of an example autonomous vehicle 1200, in accordance with some embodiments of the present disclosure. The autonomous vehicle 1200 (alternatively referred to herein as the“vehicle 1200”) may include a passenger vehicle, such as a car, a truck, a bus, and/or another type of vehicle that accommodates one or more passengers. Autonomous vehicles are generally described in terms of automation levels, defined by the National Highway Traffic Safety Administration (NHTSA), a division of the US Department of Transportation, and the Society of Automotive Engineers (SAE) "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (Standard No. J3016-201806, published on June 15, 2018, Standard No. J3016-201609, published on September 30, 2016, and previous and future versions of this standard). The vehicle 1200 may be capable of functionality in accordance with one or more of Level 3 - Level 5 of the autonomous driving levels. 
For example, the vehicle 1200 may be capable of conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on the embodiment.The vehicle 1200 may include components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. The vehicle 1200 may include a propulsion system 1250, such as an internal combustion engine, hybrid electric power plant, an all-electric engine, and/or another propulsion system type. The propulsion system 1250 may be connected to a drive train of the vehicle 1200, which may include a transmission, to enable the propulsion of the vehicle 1200. The propulsion system 1250 may be controlled in response to receiving signals from the throttle/accelerator 1252.A steering system 1254, which may include a steering wheel, may be used to steer the vehicle 1200 (e.g., along a desired path or route) when the propulsion system 1250 is operating (e.g., when the vehicle is in motion). The steering system 1254 may receive signals from a steering actuator 1256. The steering wheel may be optional for full automation (Level 5) functionality.The brake sensor system 1246 may be used to operate the vehicle brakes in response to receiving signals from the brake actuators 1248 and/or brake sensors.Controlled s) 1236, which may include one or more system on chips (SoCs) 1204 (FIG. 12C) and/or GPU(s), may provide signals (e.g., representative of commands) to one or more components and/or systems of the vehicle 1200. For example, the controller(s) may send signals to operate the vehicle brakes via one or more brake actuators 1248, to operate the steering system 1254 via one or more steering actuators 1256, to operate the propulsion system 1250 via one or more throttle/accelerators 1252. The controller(s) 1236 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving the vehicle 1200. The controller^ s) 1236 may include a first controller 1236 for autonomous driving functions, a second controller 1236 for functional safety functions, a third controller 1236 for artificial intelligence functionality (e.g., computer vision), a fourth controller 1236 for infotainment functionality, a fifth controller 1236 for redundancy in emergency conditions, and/or other controllers. In some examples, a single controller 1236 may handle two or more of the above functionalities, two or more controllers 1236 may handle a single functionality, and/or any combination thereof.The controlled s) 1236 may provide the signals for controlling one or more components and/or systems of the vehicle 1200 in response to sensor data received from one or more sensors (e.g., sensor inputs). 
The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 1258 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1260, ultrasonic sensor(s) 1262, LIDAR sensor(s) 1264, inertial measurement unit (IMU) sensor(s) 1266 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1296, stereo camera(s) 1268, wide-view camera(s) 1270 (e.g., fisheye cameras), infrared camera(s) 1272, surround camera(s) 1274 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1298, speed sensor(s) 1244 (e.g., for measuring the speed of the vehicle 1200), vibration sensor(s) 1242, steering sensor(s) 1240, brake sensor(s) (e.g., as part of the brake sensor system 1246), and/or other sensor types.One or more of the controller(s) 1236 may receive inputs (e.g., represented by input data) from an instrument cluster 1232 of the vehicle 1200 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (HMI) display 1234, an audible annunciator, a loudspeaker, and/or via other components of the vehicle 1200. The outputs may include information such as vehicle velocity, speed, time, map data (e.g., the HD map 1222 of FIG. 12C), location data (e.g., the vehicle’s 1200 location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by the controller(s) 1236, etc. For example, the HMI display 1234 may display information about the presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.), and/or information about driving maneuvers the vehicle has made, is making, or will make (e.g., changing lanes now, taking exit 34B in two miles, etc.).The vehicle 1200 further includes a network interface 1224 which may use one or more wireless antenna(s) 1226 and/or modem(s) to communicate over one or more networks. For example, the network interface 1224 may be capable of communication over LTE, WCDMA, UMTS, GSM, CDMA2000, etc. The wireless antenna(s) 1226 may also enable communication between objects in the environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth LE, Z-Wave, ZigBee, etc., and/or low power wide- area network(s) (LPWANs), such as LoRaWAN, SigFox, etc.FIG. 12B is an example of camera locations and fields of view for the example autonomous vehicle 1200 of FIG. 12A, in accordance with some embodiments of the present disclosure. The cameras and respective fields of view are one example embodiment and are not intended to be limiting. For example, additional and/or alternative cameras may be included and/or the cameras may be located at different locations on the vehicle 1200.The camera types for the cameras may include, but are not limited to, digital cameras that may be adapted for use with the components and/or systems of the vehicle 1200. The camera(s) may operate at automotive safety integrity level (ASIL) B and/or at another ASIL. The camera types may be capable of any image capture rate, such as 60 frames per second (fps), 1220 fps, 240 fps, etc., depending on the embodiment. The cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. 
In some examples, the color filter array may include a red clear clear clear (RCCC) color filter array, a red clear clear blue (RCCB) color filter array, a red blue green clear (RBGC) color filter array, a Foveon X3 color filter array, a Bayer sensors (RGGB) color filter array, a monochrome sensor color filter array, and/or another type of color filter array. In some embodiments, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity.In some examples, one or more of the camera(s) may be used to perform advanced driver assistance systems (ADAS) functions (e.g., as part of a redundant or fail-safe design). For example, a Multi -Function Mono Camera may be installed to provide functions including lane departure warning, traffic sign assist and intelligent headlamp control. One or more of the camera(s) (e.g., all of the cameras) may record and provide image data (e.g., video) simultaneously.One or more of the cameras may be mounted in a mounting assembly, such as a custom designed (3-D printed) assembly, in order to cut out stray light and reflections from within the car (e.g., reflections from the dashboard reflected in the windshield mirrors) which may interfere with the camera’s image data capture abilities. With reference to wing-mirror mounting assemblies, the wing-mirror assemblies may be custom 3-D printed so that the camera mounting plate matches the shape of the wing-mirror. In some examples, the camera(s) may be integrated into the wing-mirror. For side-view cameras, the camera(s) may also be integrated within the four pillars at each corner of the cabin.Cameras with a field of view that include portions of the environment in front of the vehicle 1200 (e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well aid in, with the help of one or more controllers 1236 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining the preferred vehicle paths. Front-facing cameras may be used to perform many of the same ADAS functions as LIDAR, including emergency braking, pedestrian detection, and collision avoidance. Front-facing cameras may also be used for ADAS functions and systems including Lane Departure Warnings (“LDW”), Autonomous Cruise Control (“ACC”), and/or other functions such as traffic sign recognition.A variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a CMOS (complementary metal oxide semiconductor) color imager. Another example may be a wide-view camera(s) 1270 that may be used to perceive objects coming into view from the periphery (e.g., pedestrians, crossing traffic or bicycles). Although only one wide-view camera is illustrated in FIG. 12B, there may any number of wide-view cameras 1270 on the vehicle 1200. In addition, long-range camera(s) 1298 (e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. The long-range camera(s) 1298 may also be used for object detection and classification, as well as basic object tracking.One or more stereo cameras 1268 may also be included in a front-facing configuration. 
The stereo camera(s) 1268 may include an integrated control unit comprising a scalable processing unit, which may provide a programmable logic (FPGA) and a multi-core micro processor with an integrated CAN or Ethernet interface on a single chip. Such a unit may be used to generate a 3-D map of the vehicle’s environment, including a distance estimate for all the points in the image. An alternative stereo camera(s) 1268 may include a compact stereo vision sensor(s) that may include two camera lenses (one each on the left and right) and an image processing chip that may measure the distance from the vehicle to the target object and use the generated information (e.g., metadata) to activate the autonomous emergency braking and lane departure warning functions. Other types of stereo camera(s) 1268 may be used in addition to, or alternatively from, those described herein. Cameras with a field of view that include portions of the environment to the side of the vehicle 1200 (e.g., side-view cameras) may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings. For example, surround camera(s) 1274 (e.g., four surround cameras 1274 as illustrated in FIG. 12B) may be positioned to on the vehicle 1200. The surround camera(s) 1274 may include wide-view camera(s) 1270, fisheye camera(s), 360 degree camera(s), and/or the like. Four example, four fisheye cameras may be positioned on the vehicle’s front, rear, and sides. In an alternative arrangement, the vehicle may use three surround camera(s) 1274 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround view camera.Cameras with a field of view that include portions of the environment to the rear of the vehicle 1200 (e.g., rear- view cameras) may be used for park assistance, surround view, rear collision warnings, and creating and updating the occupancy grid. A wide variety of cameras may be used including, but not limited to, cameras that are also suitable as a front-facing camera(s) (e.g., long-range and/or mid-range camera(s) 1298, stereo camera(s) 1268), infrared camera(s) 1272, etc.), as described herein.FIG. 12C is a block diagram of an example system architecture for the example autonomous vehicle 1200 of FIG. 12A, in accordance with some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory.Each of the components, features, and systems of the vehicle 1200 in FIG. 12C are illustrated as being connected via bus 1202. The bus 1202 may include a Controller Area Network (CAN) data interface (alternatively referred to herein as a“CAN bus”). 
A CAN may be a network inside the vehicle 1200 used to aid in control of various features and functionality of the vehicle 1200, such as actuation of brakes, acceleration, braking, steering, windshield wipers, etc. A CAN bus may be configured to have dozens or even hundreds of nodes, each with its own unique identifier (e.g., a CAN ID). The CAN bus may be read to find steering wheel angle, ground speed, engine revolutions per minute (RPMs), button positions, and/or other vehicle status indicators. The CAN bus may be ASIL B compliant.Although the bus 1202 is described herein as being a CAN bus, this is not intended to be limiting. For example, in addition to, or alternatively from, the CAN bus, FlexRay and/or Ethernet may be used. Additionally, although a single line is used to represent the bus 1202, this is not intended to be limiting. For example, there may be any number of busses 1202, which may include one or more CAN busses, one or more FlexRay busses, one or more Ethernet busses, and/or one or more other types of busses using a different protocol. In some examples, two or more busses 1202 may be used to perform different functions, and/or may be used for redundancy. For example, a first bus 1202 may be used for collision avoidance functionality and a second bus 1202 may be used for actuation control. In any example, each bus 1202 may communicate with any of the components of the vehicle 1200, and two or more busses 1202 may communicate with the same components. In some examples, each SoC 1204, each controller 1236, and/or each computer within the vehicle may have access to the same input data (e.g., inputs from sensors of the vehicle 1200), and may be connected to a common bus, such the CAN bus.The vehicle 1200 may include one or more controlled s) 1236, such as those described herein with respect to FIG. 12A. The controlled s) 1236 may be used for a variety of functions. The controller(s) 1236 may be coupled to any of the various other components and systems of the vehicle 1200, and may be used for control of the vehicle 1200, artificial intelligence of the vehicle 1200, infotainment for the vehicle 1200, and/or the like.The vehicle 1200 may include a system(s) on a chip (SoC) 1204. The SoC 1204 may include CPET(s) 1206, GPET(s) 1208, processor(s) 1210, cache(s) 1212, accelerator(s) 1214, data store(s) 1216, and/or other components and features not illustrated. The SoC(s) 1204 may be used to control the vehicle 1200 in a variety of platforms and systems. For example, the SoC(s) 1204 may be combined in a system (e.g., the system of the vehicle 1200) with an HD map 1222 which may obtain map refreshes and/or updates via a network interface 1224 from one or more servers (e.g., server(s) 1278 of FIG. 12D).The CPET(s) 1206 may include a CPET cluster or CPET complex (alternatively referred to herein as a“CCPLEX”). The CPET(s) 1206 may include multiple cores and/or L2 caches. For example, in some embodiments, the CPET(s) 1206 may include eight cores in a coherent multi-processor configuration. In some embodiments, the CPET(s) 1206 may include four dual- core clusters where each cluster has a dedicated L2 cache (e.g., a 2 MB L2 cache). 
The CPU(s) 1206 (e.g., the CCPLEX) may be configured to support simultaneous cluster operation enabling any combination of the clusters of the CPU(s) 1206 to be active at any given time.The CPU(s) 1206 may implement power management capabilities that include one or more of the following features: individual hardware blocks may be clock-gated automatically when idle to save dynamic power; each core clock may be gated when the core is not actively executing instructions due to execution of WFI/WFE instructions; each core may be independently power-gated; each core cluster may be independently clock-gated when all cores are clock-gated or power-gated; and/or each core cluster may be independently power-gated when all cores are power-gated. The CPET(s) 1206 may further implement an enhanced algorithm for managing power states, where allowed power states and expected wakeup times are specified, and the hardware/microcode determines the best power state to enter for the core, cluster, and CCPLEX. The processing cores may support simplified power state entry sequences in software with the work offloaded to microcode.The GPET(s) 1208 may include an integrated GPET (alternatively referred to herein as an “iGPU”). The GPET(s) 1208 may be programmable and may be efficient for parallel workloads. The GPET(s) 1208, in some examples, may use an enhanced tensor instruction set. The GPET(s) 1208 may include one or more streaming microprocessors, where each streaming microprocessor may include an Ll cache (e.g., an Ll cache with at least 96KB storage capacity), and two or more of the streaming microprocessors may share an L2 cache (e.g., an L2 cache with a 512 KB storage capacity). In some embodiments, the GPET(s) 1208 may include at least eight streaming microprocessors. The GPET(s) 1208 may use compute application programming interface(s) (API(s)). In addition, the GPET(s) 1208 may use one or more parallel computing platforms and/or programming models (e.g., NVIDIA’ s CUD A).The GPU(s) 1208 may be power-optimized for best performance in automotive and embedded use cases. For example, the GPU(s) 1208 may be fabricated on a Fin field-effect transistor (FinFET). However, this is not intended to be limiting and the GPU(s) 1208 may be fabricated using other semiconductor manufacturing processes. Each streaming microprocessor may incorporate a number of mixed-precision processing cores partitioned into multiple blocks. For example, and without limitation, 64 PF32 cores and 32 PF64 cores may be partitioned into four processing blocks. In such an example, each processing block may be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA TENSOR COREs for deep learning matrix arithmetic, an L0 instruction cache, a warp scheduler, a dispatch unit, and/or a 64 KB register file. In addition, the streaming microprocessors may include independent parallel integer and floating-point data paths to provide for efficient execution of workloads with a mix of computation and addressing calculations. The streaming microprocessors may include independent thread scheduling capability to enable finer-grain synchronization and cooperation between parallel threads. The streaming microprocessors may include a combined Ll data cache and shared memory unit in order to improve performance while simplifying programming.The GPU(s) 1208 may include a high bandwidth memory (HBM) and/or a 16 GB HBM2 memory subsystem to provide, in some examples, about 900 GB/second peak memory bandwidth. 
In some examples, in addition to, or alternatively from, the HBM memory, a synchronous graphics random-access memory (SGRAM) may be used, such as a graphics double data rate type five synchronous random-access memory (GDDR5).The GPU(s) 1208 may include unified memory technology including access counters to allow for more accurate migration of memory pages to the processor that accesses them most frequently, thereby improving efficiency for memory ranges shared between processors. In some examples, address translation services (ATS) support may be used to allow the GPU(s) 1208 to access the CPU(s) 1206 page tables directly. In such examples, when the GPU(s) 1208 memory management unit (MMU) experiences a miss, an address translation request may be transmitted to the CPU(s) 1206. In response, the CPU(s) 1206 may look in its page tables for the virtual-to-physical mapping for the address and transmits the translation back to the GPU(s) 1208. As such, unified memory technology may allow a single unified virtual address space for memory of both the CPU(s) 1206 and the GPU(s) 1208, thereby simplifying the GPU(s) 1208 programming and porting of applications to the GPU(s) 1208.In addition, the GPU(s) 1208 may include an access counter that may keep track of the frequency of access of the GPU(s) 1208 to memory of other processors. The access counter may help ensure that memory pages are moved to the physical memory of the processor that is accessing the pages most frequently.The SoC(s) 1204 may include any number of cache(s) 1212, including those described herein. For example, the cache(s) 1212 may include an L3 cache that is available to both the CPU(s) 1206 and the GPU(s) 1208 (e.g., that is connected both the CPU(s) 1206 and the GPU(s) 1208). The cache(s) 1212 may include a write-back cache that may keep track of states of lines, such as by using a cache coherence protocol (e.g., MEI, MESI, MSI, etc.). The L3 cache may include 4 MB or more, depending on the embodiment, although smaller cache sizes may be used.The SoC(s) 1204 may include one or more accelerators 1214 (e.g., hardware accelerators, software accelerators, or a combination thereof). For example, the SoC(s) 1204 may include a hardware acceleration cluster that may include optimized hardware accelerators and/or large on-chip memory. The large on-chip memory (e.g., 4MB of SRAM), may enable the hardware acceleration cluster to accelerate neural networks and other calculations. The hardware acceleration cluster may be used to complement the GPU(s) 1208 and to off-load some of the tasks of the GPU(s) 1208 (e.g., to free up more cycles of the GPU(s) 1208 for performing other tasks). As an example, the accelerator s) 1214 may be used for targeted workloads (e.g., perception, convolutional neural networks (CNNs), etc.) that are stable enough to be amenable to acceleration. The term“CNN,” as used herein, may include all types of CNNs, including region-based or regional convolutional neural networks (RCNNs) and Fast RCNNs (e.g., as used for object detection).The accelerator(s) 1214 (e.g., the hardware acceleration cluster) may include a deep learning accelerator(s) (DLA). The DLA(s) may include one or more Tensor processing units (TPUs) that may be configured to provide an additional ten trillion operations per second for deep learning applications and inferencing. The TPUs may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.). 
The DLA(s) may further be optimized for a specific set of neural network types and floating point operations, as well as inferencing. The design of the DLA(s) may provide more performance per millimeter than a general-purpose GPU, and vastly exceeds the performance of a CPU. The TPU(s) may perform several functions, including a single-instance convolution function, supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions.The DLA(s) may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification and detection using data from microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events. The DLA(s) may perform any function of the GPU(s) 1208, and by using an inference accelerator, for example, a designer may target either the DLA(s) or the GPU(s) 1208 for any function. For example, the designer may focus processing of CNNs and floating point operations on the DLA(s) and leave other functions to the GPU(s) 1208 and/or other accelerator s) 1214.The accelerator( s) 1214 (e.g., the hardware acceleration cluster) may include a programmable vision accelerator s) (PVA), which may alternatively be referred to herein as a computer vision accelerator. The PVA(s) may be designed and configured to accelerate computer vision algorithms for the advanced driver assistance systems (ADAS), autonomous driving, and/or augmented reality (AR) and/or virtual reality (VR) applications. The PVA(s) may provide a balance between performance and flexibility. For example, each PVA(s) may include, for example and without limitation, any number of reduced instruction set computer (RISC) cores, direct memory access (DMA), and/or any number of vector processors.The RISC cores may interact with image sensors (e.g., the image sensors of any of the cameras described herein), image signal processor(s), and/or the like. Each of the RISC cores may include any amount of memory. The RISC cores may use any of a number of protocols, depending on the embodiment. In some examples, the RISC cores may execute a real-time operating system (RTOS). The RISC cores may be implemented using one or more integrated circuit devices, application specific integrated circuits (ASICs), and/or memory devices. For example, the RISC cores may include an instruction cache and/or a tightly coupled RAM.The DMA may enable components of the PVA(s) to access the system memory independently of the CPU(s) 1206. The DMA may support any number of features used to provide optimization to the PVA including, but not limited to, supporting multi-dimensional addressing and/or circular addressing. In some examples, the DMA may support up to six or more dimensions of addressing, which may include block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping.The vector processors may be programmable processors that may be designed to efficiently and flexibly execute programming for computer vision algorithms and provide signal processing capabilities. 
In some examples, the PVA may include a PVA core and two vector processing subsystem partitions. The PVA core may include a processor subsystem, DMA engine(s) (e.g., two DMA engines), and/or other peripherals. The vector processing subsystem may operate as the primary processing engine of the PVA, and may include a vector processing unit (VPU), an instruction cache, and/or vector memory (e.g., VMEM). A VPU core may include a digital signal processor such as, for example, a single instruction, multiple data (SIMD), very long instruction word (VLIW) digital signal processor. The combination of the SIMD and VLIW may enhance throughput and speed.Each of the vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in some examples, each of the vector processors may be configured to execute independently of the other vector processors. In other examples, the vector processors that are included in a particular PVA may be configured to employ data parallelism. For example, in some embodiments, the plurality of vector processors included in a single PVA may execute the same computer vision algorithm, but on different regions of an image. In other examples, the vector processors included in a particular PVA may simultaneously execute different computer vision algorithms, on the same image, or even execute different algorithms on sequential images or portions of an image. Among other things, any number of PVAs may be included in the hardware acceleration cluster and any number of vector processors may be included in each of the PVAs. In addition, the PVA(s) may include additional error correcting code (ECC) memory, to enhance overall system safety.The accelerator s) 1214 (e.g., the hardware acceleration cluster) may include a computer vision network on-chip and SRAM, for providing a high-bandwidth, low latency SRAM for the accelerator(s) 1214. In some examples, the on-chip memory may include at least 4MB SRAM, consisting of, for example and without limitation, eight field-configurable memory blocks, that may be accessible by both the PVA and the DLA. Each pair of memory blocks may include an advanced peripheral bus (APB) interface, configuration circuitry, a controller, and a multiplexer. Any type of memory may be used. The PVA and DLA may access the memory via a backbone that provides the PVA and DLA with high-speed access to memory. The backbone may include a computer vision network on-chip that interconnects the PVA and the DLA to the memory (e.g., using the APB).The computer vision network on-chip may include an interface that determines, before transmission of any control signal/address/data, that both the PVA and the DLA provide ready and valid signals. Such an interface may provide for separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transfer. This type of interface may comply with ISO 26262 or IEC 61508 standards, although other standards and protocols may be used.In some examples, the SoC(s) 1204 may include a real-time ray -tracing hardware accelerator, such as described in U.S. Patent Application No. 16/101,232, filed on August 10, 2018. 
The real-time ray -tracing hardware accelerator may be used to quickly and efficiently determine the positions and extents of objects (e.g., within a world model), to generate realOtime visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses.The accelerator(s) 1214 (e.g., the hardware accelerator cluster) have a wide array of uses for autonomous driving. The PVA may be a programmable vision accelerator that may be used for key processing stages in ADAS and autonomous vehicles. The PVA’s capabilities are a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, the PVA performs well on semi-dense or dense regular computation, even on small data sets, which need predictable run-times with low latency and low power. Thus, in the context of platforms for autonomous vehicles, the PVAs are designed to run classic computer vision algorithms, as they are efficient at object detection and operating on integer math.For example, according to one embodiment of the technology, the PVA is used to perform computer stereo vision. A semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. Many applications for Level 3-5 autonomous driving require motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). The PVA may perform computer stereo vision function on inputs from two monocular cameras.In some examples, the PVA may be used to perform dense optical flow. According to process raw RADAR data (e.g., using a 4D Fast Fourier Transform) to provide Processed RADAR. In other examples, the PVA is used for time of flight depth processing, by processing raw time of flight data to provide processed time of flight data, for example.The DLA may be used to run any type of network to enhance control and driving safety, including for example, a neural network that outputs a measure of confidence for each object detection. Such a confidence value may be interpreted as a probability, or as providing a relative“weight” of each detection compared to other detections. This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. For example, the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections. In an automatic emergency braking (AEB) system, false positive detections would cause the vehicle to automatically perform emergency braking, which is obviously undesirable. Therefore, only the most confident detections should be considered as triggers for AEB. The DLA may run a neural network for regressing the confidence value. The neural network may take as its input at least some subset of parameters, such as bounding box dimensions, ground plane estimate obtained (e.g. from another subsystem), inertial measurement unit (IMU) sensor 1266 output that correlates with the vehicle 1200 orientation, distance, 3D location estimates of the object obtained from the neural network and/or other sensors (e.g., LIDAR sensor(s) 1264 or RADAR sensor(s) 1260), among others.The SoC(s) 1204 may include data store(s) 1216 (e.g., memory). 
The data store(s) 1216 may be on-chip memory of the SoC(s) 1204, which may store neural networks to be executed on the GPU and/or the DLA. In some examples, the data store(s) 1216 may be large enough in capacity to store multiple instances of neural networks for redundancy and safety. The data store(s) 1212 may comprise L2 or L3 cache(s) 1212. Reference to the data store(s) 1216 may include reference to the memory associated with the PVA, DLA, and/or other accelerator s) 1214, as described herein.The SoC(s) 1204 may include one or more processor(s) 1210 (e.g., embedded processors). The processor(s) 1210 may include a boot and power management processor that may be a dedicated processor and subsystem to handle boot power and management functions and related security enforcement. The boot and power management processor may be a part of the SoC(s) 1204 boot sequence and may provide runtime power management services. The boot power and management processor may provide clock and voltage programming, assistance in system low power state transitions, management of SoC(s) 1204 thermals and temperature sensors, and/or management of the SoC(s) 1204 power states. Each temperature sensor may be implemented as a ring-oscillator whose output frequency is proportional to temperature, and the SoC(s) 1204 may use the ring-oscillators to detect temperatures of the CPU(s) 1206, GPU(s) 1208, and/or accelerator(s) 1214. If temperatures are determined to exceed a threshold, the boot and power management processor may enter a temperature fault routine and put the SoC(s) 1204 into a lower power state and/or put the vehicle 1200 into a chauffeur to safe stop mode (e.g., bring the vehicle 1200 to a safe stop).The processor(s) 1210 may further include a set of embedded processors that may serve as an audio processing engine. The audio processing engine may be an audio subsystem that enables full hardware support for multi-channel audio over multiple interfaces, and a broad and flexible range of audio I/O interfaces. In some examples, the audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM.The processor(s) 1210 may further include an always on processor engine that may provide necessary hardware features to support low power sensor management and wake use cases. The always on processor engine may include a processor core, a tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic.The processor(s) 1210 may further include a safety cluster engine that includes a dedicated processor subsystem to handle safety management for automotive applications. The safety cluster engine may include two or more processor cores, a tightly coupled RAM, support peripherals (e.g., timers, an interrupt controller, etc.), and/or routing logic. 
In a safety mode, the two or more cores may operate in a lockstep mode and function as a single core with comparison logic to detect any differences between their operations.The processor(s) 1210 may further include a real-time camera engine that may include a dedicated processor subsystem for handling real-time camera management.The processor(s) 1210 may further include a high-dynamic range signal processor that may include an image signal processor that is a hardware engine that is part of the camera processing pipeline.The processor(s) 1210 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce the final image for the player window. The video image compositor may perform lens distortion correction on wide-view camera(s) 1270, surround camera(s) 1274, and/or on in-cabin monitoring camera sensors. In cabin monitoring camera sensor is preferably monitored by a neural network running on another instance of the Advanced SoC, configured to identify in cabin events and respond accordingly. An in-cabin system may perform lip reading to activate cellular service and place a phone call, dictate emails, change the vehicle’s destination, activate or change the vehicle’s infotainment system and settings, or provide voice-activated web surfing. Certain functions are available to the driver only when the vehicle is operating in an autonomous mode, and are disabled otherwise.The video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction. For example, where motion occurs in a video, the noise reduction weights spatial information appropriately, decreasing the weight of information provided by adjacent frames. Where an image or portion of an image does not include motion, the temporal noise reduction performed by the video image compositor may use information from the previous image to reduce noise in the current image.The video image compositor may also be configured to perform stereo rectification on input stereo lens frames. The video image compositor may further be used for user interface composition when the operating system desktop is in use, and the GPU(s) 1208 is not required to continuously render new surfaces. Even when the GPU(s) 1208 is powered on and active doing 3D rendering, the video image compositor may be used to offload the GPU(s) 1208 to improve performance and responsiveness.The SoC(s) 1204 may further include a mobile industry processor interface (MIPI) camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for camera and related pixel input functions. The SoC(s) 1204 may further include an input/output controller(s) that may be controlled by software and may be used for receiving EO signals that are uncommitted to a specific role.The SoC(s) 1204 may further include a broad range of peripheral interfaces to enable communication with peripherals, audio codecs, power management, and/or other devices. The SoC(s) 1204 may be used to process data from cameras (e.g., connected over Gigabit Multimedia Serial Link and Ethernet), sensors (e.g., LIDAR sensor(s) 1264, RADAR sensor(s) 1260, etc. that may be connected over Ethernet), data from bus 1202 (e.g., speed of vehicle 1200, steering wheel position, etc.), data from GNSS sensor(s) 1258 (e.g., connected over Ethernet or CAN bus). 
The SoC(s) 1204 may further include dedicated high-performance mass storage controllers that may include their own DMA engines, and that may be used to free the CPU(s) 1206 from routine data management tasks.The SoC(s) 1204 may be an end-to-end platform with a flexible architecture that spans automation levels 3-5, thereby providing a comprehensive functional safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, provides a platform for a flexible, reliable driving software stack, along with deep learning tools. The SoC(s) 1204 may be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, the accelerator(s) 1214, when combined with the CPU(s) 1206, the GPU(s) 1208, and the data store(s) 1216, may provide for a fast, efficient platform for level 3-5 autonomous vehicles.The technology thus provides capabilities and functionality that cannot be achieved by conventional systems. For example, computer vision algorithms may be executed on CPUs, which may be configured using high-level programming language, such as the C programming language, to execute a wide variety of processing algorithms across a wide variety of visual data. However, CPUs are oftentimes unable to meet the performance requirements of many computer vision applications, such as those related to execution time and power consumption, for example. In particular, many CPUs are unable to execute complex object detection algorithms in real-time, which is a requirement of in-vehicle ADAS applications, and a requirement for practical Level 3-5 autonomous vehicles.In contrast to conventional systems, by providing a CPU complex, GPU complex, and a hardware acceleration cluster, the technology described herein allows for multiple neural networks to be performed simultaneously and/or sequentially, and for the results to be combined together to enable Level 3-5 autonomous driving functionality. For example, a CNN executing on the DLA or dGPU (e.g., the GPU(s) 1220) may include a text and word recognition, allowing the supercomputer to read and understand traffic signs, including signs for which the neural network has not been specifically trained. The DLA may further include a neural network that is able to identify, interpret, and provides semantic understanding of the sign, and to pass that semantic understanding to the path planning modules running on the CPU Complex.As another example, multiple neural networks may be run simultaneously, as is required for Level 3, 4, or 5 driving. For example, a warning sign consisting of“Caution: flashing lights indicate icy conditions,” along with an electric light, may be independently or collectively interpreted by several neural networks. The sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a neural network that has been trained), the text “Flashing lights indicate icy conditions” may be interpreted by a second deployed neural network, which informs the vehicle’s path planning software (preferably executing on the CPU Complex) that when flashing lights are detected, icy conditions exist. The flashing light may be identified by operating a third deployed neural network over multiple frames, informing the vehicle’s path-planning software of the presence (or absence) of flashing lights. 
All three neural networks may run simultaneously, such as within the DLA and/or on the GPU(s) 1208.In some examples, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify the presence of an authorized driver and/or owner of the vehicle 1200. The always on sensor processing engine may be used to unlock the vehicle when the owner approaches the driver door and turn on the lights, and, in security mode, to disable the vehicle when the owner leaves the vehicle. In this way, the SoC(s) 1204 provide for security against theft and/or carjacking.In another example, a CNN for emergency vehicle detection and identification may use data from microphones 1296 to detect and identify emergency vehicle sirens. In contrast to conventional systems, that use general classifiers to detect sirens and manually extract features, the SoC(s) 1204 use the CNN for classifying environmental and urban sounds, as well as classifying visual data. In a preferred embodiment, the CNN running on the DLA is trained to identify the relative closing speed of the emergency vehicle (e.g., by using the Doppler effect). The CNN may also be trained to identify emergency vehicles specific to the local area in which the vehicle is operating, as identified by GNSS sensor(s) 1258. Thus, for example, when operating in Europe the CNN will seek to detect European sirens, and when in the ETnited States the CNN will seek to identify only North American sirens. Once an emergency vehicle is detected, a control program may be used to execute an emergency vehicle safety routine, slowing the vehicle, pulling over to the side of the road, parking the vehicle, and/or idling the vehicle, with the assistance of ultrasonic sensors 1262, until the emergency vehicle(s) passes.The vehicle may include a CPU(s) 1218 (e.g., discrete CPU(s), or dCPU(s)), that may be coupled to the SoC(s) 1204 via a high-speed interconnect (e.g., PCIe). The CPU(s) 1218 may include an X86 processor, for example. The CPU(s) 1218 may be used to perform any of a variety of functions, including arbitrating potentially inconsistent results between ADAS sensors and the SoC(s) 1204, and/or monitoring the status and health of the controller(s) 1236 and/or infotainment SoC 1230, for example.The vehicle 1200 may include a GPU(s) 1220 (e.g., discrete GPU(s), or dGPU(s)), that may be coupled to the SoC(s) 1204 via a high-speed interconnect (e.g., NVIDIA’ s NVLINK). The GPU(s) 1220 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based on input (e.g., sensor data) from sensors of the vehicle 1200.The vehicle 1200 may further include the network interface 1224 which may include one or more wireless antennas 1226 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). The network interface 1224 may be used to enable wireless connectivity over the Internet with the cloud (e.g., with the server(s) 1278 and/or other network devices), with other vehicles, and/or with computing devices (e.g., client devices of passengers). To communicate with other vehicles, a direct link may be established between the two vehicles and/or an indirect link may be established (e.g., across networks and over the Internet). Direct links may be provided using a vehicle-to-vehicle communication link. 
The vehicle-to-vehicle communication link may provide the vehicle 1200 information about vehicles in proximity to the vehicle 1200 (e.g., vehicles in front of, on the side of, and/or behind the vehicle 1200). This functionality may be part of a cooperative adaptive cruise control functionality of the vehicle 1200.The network interface 1224 may include a SoC that provides modulation and demodulation functionality and enables the controlled s) 1236 to communicate over wireless networks. The network interface 1224 may include a radio frequency front-end for up- conversion from baseband to radio frequency, and down conversion from radio frequency to baseband. The frequency conversions may be performed through well-known processes, and/or may be performed using super-heterodyne processes. In some examples, the radio frequency front end functionality may be provided by a separate chip. The network interface may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols.The vehicle 1200 may further include data store(s) 1228 which may include off-chip (e.g., off the SoC(s) 1204) storage. The data store(s) 1228 may include one or more storage elements including RAM, SRAM, DRAM, VRAM, Flash, hard disks, and/or other components and/or devices that may store at least one bit of data.The vehicle 1200 may further include GNSS sensor(s) 1258. The GNSS sensor(s) 1258 (e.g., GPS and/or assisted GPS sensors), to assist in mapping, perception, occupancy grid generation, and/or path planning functions. Any number of GNSS sensor(s) 1258 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet to Serial (RS-232) bridge.The vehicle 1200 may further include RADAR sensor(s) 1260. The RADAR sensor(s) 1260 may be used by the vehicle 1200 for long-range vehicle detection, even in darkness and/or severe weather conditions. RADAR functional safety levels may be ASIL B. The RADAR sensor(s) 1260 may use the CAN and/or the bus 1202 (e.g., to transmit data generated by the RADAR sensor(s) 1260) for control and to access object tracking data, with access to Ethernet to access raw data in some examples. A wide variety of RADAR sensor types may be used. For example, and without limitation, the RADAR sensor(s) 1260 may be suitable for front, rear, and side RADAR use. In some example, Pulse Doppler RADAR sensor(s) are used. The RADAR sensor(s) 1260 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc. In some examples, long-range RADAR may be used for adaptive cruise control functionality. The long-range RADAR systems may provide a broad field of view realized by two or more independent scans, such as within a 250m range. The RADAR sensor(s) 1260 may help in distinguishing between static and moving objects, and may be used by ADAS systems for emergency brake assist and forward collision warning. Long-range RADAR sensors may include monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennae and a high-speed CAN and FlexRay interface. In an example with six antennae, the central four antennae may create a focused beam pattern, designed to record the vehicle’s 1200 surroundings at higher speeds with minimal interference from traffic in adjacent lanes. 
The other two antennae may expand the field of view, making it possible to quickly detect vehicles entering or leaving the vehicle’s 1200 lane.Mid-range RADAR systems may include, as an example, a range of up to l260m (front) or 80m (rear), and a field of view of up to 42 degrees (front) or 1250 degrees (rear). Short- range RADAR systems may include, without limitation, RADAR sensors designed to be installed at both ends of the rear bumper. When installed at both ends of the rear bumper, such a RADAR sensor systems may create two beams that constantly monitor the blind spot in the rear and next to the vehicle.Short-range RADAR systems may be used in an ADAS system for blind spot detection and/or lane change assist.The vehicle 1200 may further include ultrasonic sensor(s) 1262. The ultrasonic sensor(s) 1262, which may be positioned at the front, back, and/or the sides of the vehicle 1200, may be used for park assist and/or to create and update an occupancy grid. A wide variety of ultrasonic sensor(s) 1262 may be used, and different ultrasonic sensor(s) 1262 may be used for different ranges of detection (e.g., 2.5m, 4m). The ultrasonic sensor(s) 1262 may operate at functional safety levels of ASIL B.The vehicle 1200 may include LIDAR sensor(s) 1264. The LIDAR sensor(s) 1264 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions. The LIDAR sensor(s) 1264 may be functional safety level ASIL B. In some examples, the vehicle 1200 may include multiple LIDAR sensors 1264 (e.g., two, four, six, etc.) that may use Ethernet (e.g., to provide data to a Gigabit Ethernet switch). In some examples, the LIDAR sensor(s) 1264 may be capable of providing a list of objects and their distances for a 360-degree field of view. Commercially available LIDAR sensor(s) 1264 may have an advertised range of approximately l200m, with an accuracy of 2cm-3cm, and with support for a 1200Mbps Ethernet connection, for example. In some examples, one or more non-protruding LIDAR sensors 1264 may be used. In such examples, the LIDAR sensor(s) 1264 may be implemented as a small device that may be embedded into the front, rear, sides, and/or corners of the vehicle 1200. The LIDAR sensor(s) 1264, in such examples, may provide up to a l220-degree horizontal and 35-degree vertical field-of-view, with a 200m range even for low-reflectivity objects. Front-mounted LIDAR sensor(s) 1264 may be configured for a horizontal field of view between 45 degrees and 135 degrees.In some examples, LIDAR technologies, such as 3D flash LIDAR, may also be used. 3D Flash LIDAR uses a flash of a laser as a transmission source, to illuminate vehicle surroundings up to approximately 200m. A flash LIDAR unit includes a receptor, which records the laser pulse transit time and the reflected light on each pixel, which in turn corresponds to the range from the vehicle to the objects. Flash LIDAR may allow for highly accurate and distortion-free images of the surroundings to be generated with every laser flash. In some examples, four flash LIDAR sensors may be deployed, one at each side of the vehicle 1200. Available 3D flash LIDAR systems include a solid-state 3D staring array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). The flash LIDAR device may use a 5 nanosecond class I (eye-safe) laser pulse per frame and may capture the reflected laser light in the form of 3D range point clouds and co-registered intensity data. 
By using flash LIDAR, and because flash LIDAR is a solid-state device with no moving parts, the LIDAR sensor(s) 1264 may be less susceptible to motion blur, vibration, and/or shock.The vehicle may further include IMU sensor(s) 1266. The IMU sensor(s) 1266 may be located at a center of the rear axle of the vehicle 1200, in some examples. The IMU sensor(s) 1266 may include, for example and without limitation, an accelerometer(s), a magnetometer(s), a gyroscope(s), a magnetic compass(es), and/or other sensor types. In some examples, such as in six-axis applications, the IMU sensor(s) 1266 may include accelerometers and gyroscopes, while in nine-axis applications, the IMU sensor(s) 1266 may include accelerometers, gyroscopes, and magnetometers.In some embodiments, the IMU sensor(s) 1266 may be implemented as a miniature, high performance GPS-Aided Inertial Navigation System (GPS/INS) that combines micro- electro-mechanical systems (MEMS) inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. As such, in some examples, the IMU sensor(s) 1266 may enable the vehicle 1200 to estimate heading without requiring input from a magnetic sensor by directly observing and correlating the changes in velocity from GPS to the IMU sensor(s) 1266. In some examples, the IMU sensor(s) 1266 and the GNSS sensor(s) 1258 may be combined in a single integrated unit.The vehicle may include microphone(s) 1296 placed in and/or around the vehicle 1200. The microphone(s) 1296 may be used for emergency vehicle detection and identification, among other things.The vehicle may further include any number of camera types, including stereo camera(s) 1268, wide-view camera(s) 1270, infrared camera(s) 1272, surround camera(s) 1274, long-range and/or mid-range camera(s) 1298, and/or other camera types. The cameras may be used to capture image data around an entire periphery of the vehicle 1200. The types of cameras used depends on the embodiments and requirements for the vehicle 1200, and any combination of camera types may be used to provide the necessary coverage around the vehicle 1200. In addition, the number of cameras may differ depending on the embodiment. For example, the vehicle may include six cameras, seven cameras, ten cameras, twelve cameras, and/or another number of cameras. The cameras may support, as an example and without limitation, Gigabit Multimedia Serial Link (GMSL) and/or Gigabit Ethernet. Each of the camera(s) is described with more detail herein with respect to FIG. 12A and FIG. 12B.The vehicle 1200 may further include vibration sensor(s) 1242. The vibration sensor(s) 1242 may measure vibrations of components of the vehicle, such as the axle(s). For example, changes in vibrations may indicate a change in road surfaces. In another example, when two or more vibration sensors 1242 are used, the differences between the vibrations may be used to determine friction or slippage of the road surface (e.g., when the difference in vibration is between a power-driven axle and a freely rotating axle).The vehicle 1200 may include an ADAS system 1238. The ADAS system 1238 may include a SoC, in some examples. 
The ADAS system 1238 may include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB), lane departure warnings (LDW), lane keep assist (LKA), blind spot warning (BSW), rear cross-traffic warning (RCTW), collision warning systems (CWS), lane centering (LC), and/or other features and functionality. The ACC systems may use RADAR sensor(s) 1260, LIDAR sensor(s) 1264, and/or a camera(s). The ACC systems may include longitudinal ACC and/or lateral ACC. Longitudinal ACC monitors and controls the distance to the vehicle immediately ahead of the vehicle 1200 and automatically adjust the vehicle speed to maintain a safe distance from vehicles ahead. Lateral ACC performs distance keeping, and advises the vehicle 1200 to change lanes when necessary. Lateral ACC is related to other ADAS applications such as LCA and CWS.CACC uses information from other vehicles that may be received via the network interface 1224 and/or the wireless antenna(s) 1226 from other vehicles via a wireless link, or indirectly, over a network connection (e.g., over the Internet). Direct links may be provided by a vehicle-to-vehicle (V2V) communication link, while indirect links may be infrastructure- to-vehicle (I2V) communication link. In general, the V2V communication concept provides information about the immediately preceding vehicles (e.g., vehicles immediately ahead of and in the same lane as the vehicle 1200), while the 12 V communication concept provides information about traffic further ahead. CACC systems may include either or both I2V and V2V information sources. Given the information of the vehicles ahead of the vehicle 1200, CACC may be more reliable and it has potential to improve traffic flow smoothness and reduce congestion on the road.FCW systems are designed to alert the driver to a hazard, so that the driver may take corrective action. FCW systems use a front-facing camera and/or RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component. FCW systems may provide a warning, such as in the form of a sound, visual warning, vibration and/or a quick brake pulse.AEB systems detect an impending forward collision with another vehicle or other object, and may automatically apply the brakes if the driver does not take corrective action within a specified time or distance parameter. AEB systems may use front-facing camera(s) and/or RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC. When the AEB system detects a hazard, it typically first alerts the driver to take corrective action to avoid the collision and, if the driver does not take corrective action, the AEB system may automatically apply the brakes in an effort to prevent, or at least mitigate, the impact of the predicted collision. AEB systems, may include techniques such as dynamic brake support and/or crash imminent braking. LDW systems provide visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert the driver when the vehicle 1200 crosses lane markings. A LDW system does not activate when the driver indicates an intentional lane departure, by activating a turn signal. 
LDW systems may use front-side facing cameras, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.LKA systems are a variation of LDW systems. LKA systems provide steering input or braking to correct the vehicle 1200 if the vehicle 1200 starts to exit the lane.BSW systems detects and warn the driver of vehicles in an automobile’s blind spot. BSW systems may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. The system may provide an additional warning when the driver uses a turn signal. BSW systems may use rear-side facing camera(s) and/or RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.RCTW systems may provide visual, audible, and/or tactile notification when an object is detected outside the rear-camera range when the vehicle 1200 is backing up. Some RCTW systems include AEB to ensure that the vehicle brakes are applied to avoid a crash. RCTW systems may use one or more rear-facing RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.Conventional ADAS systems may be prone to false positive results which may be annoying and distracting to a driver, but typically are not catastrophic, because the ADAS systems alert the driver and allow the driver to decide whether a safety condition truly exists and act accordingly. However, in an autonomous vehicle 1200, the vehicle 1200 itself must, in the case of conflicting results, decide whether to heed the result from a primary computer or a secondary computer (e.g., a first controller 1236 or a second controller 1236). For example, in some embodiments, the ADAS system 1238 may be a backup and/or secondary computer for providing perception information to a backup computer rationality module. The backup computer rationality monitor may run a redundant diverse software on hardware components to detect faults in perception and dynamic driving tasks. Outputs from the ADAS system 1238 may be provided to a supervisory MCU. If outputs from the primary computer and the secondary computer conflict, the supervisory MCU must determine how to reconcile the conflict to ensure safe operation. In some examples, the primary computer may be configured to provide the supervisory MCU with a confidence score, indicating the primary computer’s confidence in the chosen result. If the confidence score exceeds a threshold, the supervisory MCU may follow the primary computer’s direction, regardless of whether the secondary computer provides a conflicting or inconsistent result. Where the confidence score does not meet the threshold, and where the primary and secondary computer indicate different results (e.g., the conflict), the supervisory MCU may arbitrate between the computers to determine the appropriate outcome.The supervisory MCU may be configured to run a neural network(s) that is trained and configured to determine, based on outputs from the primary computer and the secondary computer, conditions under which the secondary computer provides false alarms. Thus, the neural network(s) in the supervisory MCU may learn when the secondary computer’s output may be trusted, and when it cannot. 
For example, when the secondary computer is a RADAR- based FCW system, a neural network(s) in the supervisory MCU may learn when the FCW system is identifying metallic objects that are not, in fact, hazards, such as a drainage grate or manhole cover that triggers an alarm. Similarly, when the secondary computer is a camera- based LDW system, a neural network in the supervisory MCU may learn to override the LDW when bicyclists or pedestrians are present and a lane departure is, in fact, the safest maneuver. In embodiments that include a neural network(s) running on the supervisory MCU, the supervisory MCU may include at least one of a DLA or GPU suitable for running the neural network(s) with associated memory. In preferred embodiments, the supervisory MCU may comprise and/or be included as a component of the SoC(s) 1204.In other examples, ADAS system 1238 may include a secondary computer that performs ADAS functionality using traditional rules of computer vision. As such, the secondary computer may use classic computer vision rules (if-then), and the presence of a neural network(s) in the supervisory MCU may improve reliability, safety and performance. For example, the diverse implementation and intentional non-identity makes the overall system more fault-tolerant, especially to faults caused by software (or software-hardware interface) functionality. For example, if there is a software bug or error in the software running on the primary computer, and the non-identical software code running on the secondary computer provides the same overall result, the supervisory MCU may have greater confidence that the overall result is correct, and the bug in software or hardware on primary computer is not causing material error. In some examples, the output of the ADAS system 1238 may be fed into the primary computer’s perception block and/or the primary computer’s dynamic driving task block. For example, if the ADAS system 1238 indicates a forward crash warning due to an object immediately ahead, the perception block may use this information when identifying objects. In other examples, the secondary computer may have its own neural network which is trained and thus reduces the risk of false positives, as described herein.The vehicle 1200 may further include the infotainment SoC 1230 (e.g., an in-vehicle infotainment system (I VI)) . Although illustrated and described as a SoC, the infotainment system may not be a SoC, and may include two or more discrete components. The infotainment SoC 1230 may include a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.), video (e.g., TV, movies, streaming, etc.), phone (e.g., hands-free calling), network connectivity (e.g., LTE, Wi-Fi, etc.), and/or information services (e.g., navigation systems, rear-parking assistance, a radio data system, vehicle related information such as fuel level, total distance covered, brake fuel level, oil level, door open/close, air filter information, etc.) to the vehicle 1200. For example, the infotainment SoC 1230 may radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, carputers, in-car entertainment, Wi-Fi, steering wheel audio controls, hands free voice control, a heads-up display (HUD), an HMI display 1234, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. 
The infotainment SoC 1230 may further be used to provide information (e.g., visual and/or audible) to a user(s) of the vehicle, such as information from the ADAS system 1238, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information.The infotainment SoC 1230 may include GPU functionality. The infotainment SoC 1230 may communicate over the bus 1202 (e.g., CAN bus, Ethernet, etc.) with other devices, systems, and/or components of the vehicle 1200. In some examples, the infotainment SoC 1230 may be coupled to a supervisory MCU such that the GPU of the infotainment system may perform some self-driving functions in the event that the primary controlled s) 1236 (e.g., the primary and/or backup computers of the vehicle 1200) fail. In such an example, the infotainment SoC 1230 may put the vehicle 1200 into a chauffeur to safe stop mode, as described herein. The vehicle 1200 may further include an instrument cluster 1232 (e.g., a digital dash, an electronic instrument cluster, a digital instrument panel, etc.). The instrument cluster 1232 may include a controller and/or supercomputer (e.g., a discrete controller or supercomputer). The instrument cluster 1232 may include a set of instrumentation such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn indicators, gearshift position indicator, seat belt warning light(s), parking-brake warning light(s), engine-malfunction light(s), airbag (SRS) system information, lighting controls, safety system controls, navigation information, etc. In some examples, information may be displayed and/or shared among the infotainment SoC 1230 and the instrument cluster 1232. In other words, the instrument cluster 1232 may be included as part of the infotainment SoC 1230, or vice versa.FIG. 12D is a system diagram for communication between cloud-based server(s) and the example autonomous vehicle 1200 of FIG. 12A, in accordance with some embodiments of the present disclosure. The system 1276 may include server(s) 1278, network(s) 1290, and vehicles, including the vehicle 1200. The server(s) 1278 may include a plurality of GPUs 1284(A)- 1284(H) (collectively referred to herein as GPUs 1284), PCIe switches 1282(A)- 1282(H) (collectively referred to herein as PCIe switches 1282), and/or CPUs 1280(A)- 1280(B) (collectively referred to herein as CPUs 1280). The GPUs 1284, the CPUs 1280, and the PCIe switches may be interconnected with high-speed interconnects such as, for example and without limitation, NVLink interfaces 1288 developed by NVIDIA and/or PCIe connections 1286. In some examples, the GPUs 1284 are connected via NVLink and/or NVSwitch SoC and the GPUs 1284 and the PCIe switches 1282 are connected via PCIe interconnects. Although eight GPUs 1284, two CPUs 1280, and two PCIe switches are illustrated, this is not intended to be limiting. Depending on the embodiment, each of the server(s) 1278 may include any number of GPUs 1284, CPUs 1280, and/or PCIe switches. For example, the server(s) 1278 may each include eight, sixteen, thirty-two, and/or more GPUs 1284.The server(s) 1278 may receive, over the network(s) 1290 and from the vehicles, image data representative of images showing unexpected or changed road conditions, such as recently commenced road-work. 
The server(s) 1278 may transmit, over the network(s) 1290 and to the vehicles, neural networks 1292, updated neural networks 1292, and/or map information 1294, including information regarding traffic and road conditions. The updates to the map information 1294 may include updates for the HD map 1222, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions. In some examples, the neural networks 1292, the updated neural networks 1292, and/or the map information 1294 may have resulted from new training and/or experiences represented in data received from any number of vehicles in the environment, and/or based on training performed at a datacenter (e.g., using the server(s) 1278 and/or other servers).The server(s) 1278 may be used to train machine learning models (e.g., neural networks) based on training data. The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning). Once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the network(s) 1290, and/or the machine learning models may be used by the server(s) 1278 to remotely monitor the vehicles.In some examples, the server(s) 1278 may receive data from the vehicles and apply the data to up-to-date real-time neural networks for real-time intelligent inferencing. The server(s) 1278 may include deep-learning supercomputers and/or dedicated AI computers powered by GPU(s) 1284, such as a DGX and DGX Station machines developed by NVIDIA. However, in some examples, the server(s) 1278 may include deep learning infrastructure that use only CPU-powered datacenters.The deep-learning infrastructure of the server(s) 1278 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of the processors, software, and/or associated hardware in the vehicle 1200. For example, the deep-learning infrastructure may receive periodic updates from the vehicle 1200, such as a sequence of images and/or objects that the vehicle 1200 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). The deep- learning infrastructure may run its own neural network to identify the objects and compare them with the objects identified by the vehicle 1200 and, if the results do not match and the infrastructure concludes that the AI in the vehicle 1200 is malfunctioning, the server(s) 1278 may transmit a signal to the vehicle 1200 instructing a fail-safe computer of the vehicle 1200 to assume control, notify the passengers, and complete a safe parking maneuver.For inferencing, the server(s) 1278 may include the GPU(s) 1284 and one or more programmable inference accelerators (e.g., NVIDIA’s TensorRT 3). The combination of GPU- powered servers and inference acceleration may make real-time responsiveness possible. In other examples, such as where performance is less critical, servers powered by CPUs, FPGAs, and other processors may be used for inferencing.FIG. 
13 is a block diagram of an example computing device 1300 suitable for use in implementing some embodiments of the present disclosure. Computing device 1300 may include a bus 1302 that directly or indirectly couples the following devices: memory 1304, one or more central processing units (CPUs) 1306, one or more graphics processing units (GPUs) 1308, a communication interface 1310, input/output (I/O) ports 1312, input/output components 1314, a power supply 1316, and one or more presentation components 1318 (e.g., display(s)).Although the various blocks of FIG. 13 are shown as connected via the bus 1302 with lines, this is not intended to be limiting and is for clarity only. For example, in some embodiments, a presentation component 1318, such as a display device, may be considered an I/O component 1314 (e.g., if the display is a touch screen). As another example, the CPUs 1306 and/or GPUs 1308 may include memory (e.g., the memory 1304 may be representative of a storage device in addition to the memory of the GPUs 1308, the CPUs 1306, and/or other components). In other words, the computing device of FIG. 13 is merely illustrative. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “desktop,”“tablet,”“client device,”“mobile device,”“hand-held device,”“game console,” “electronic control unit (ECU),”“virtual reality system,” and/or other device or system types, as all are contemplated within the scope of the computing device of FIG. 13.The bus 1302 may represent one or more busses, such as an address bus, a data bus, a control bus, or a combination thereof. The bus 1302 may include one or more bus types, such as an industry standard architecture (ISA) bus, an extended industry standard architecture (EISA) bus, a video electronics standards association (VESA) bus, a peripheral component interconnect (PCI) bus, a peripheral component interconnect express (PCIe) bus, and/or another type of bus.The memory 1304 may include any of a variety of computer-readable media. The computer-readable media may be any available media that may be accessed by the computing device 1300. The computer-readable media may include both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, the computer-readable media may comprise computer- storage media and communication media.The computer-storage media may include both volatile and nonvolatile media and/or removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, and/or other data types. For example, the memory 1304 may store computer-readable instructions (e.g., that represent a program(s) and/or a program element(s), such as an operating system. Computer-storage media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 1300. As used herein, computer storage media does not comprise signals per se.The communication media may embody computer-readable instructions, data structures, program modules, and/or other datatypes in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. 
The term “modulated data signal” may refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, the communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.The CPET(s) 1306 may be configured to execute the computer-readable instructions to control one or more components of the computing device 1300 to perform one or more of the methods and/or processes described herein. The CPET(s) 1306 may each include one or more cores (e.g., one, two, four, eight, twenty-eight, seventy-two, etc.) that are capable of handling a multitude of software threads simultaneously. The CPET(s) 1306 may include any type of processor, and may include different types of processors depending on the type of computing device 1300 implemented (e.g., processors with fewer cores for mobile devices and processors with more cores for servers). For example, depending on the type of computing device 1300, the processor may be an ARM processor implemented using Reduced Instruction Set Computing (RISC) or an x86 processor implemented using Complex Instruction Set Computing (CISC). The computing device 1300 may include one or more CPUs 1306 in addition to one or more microprocessors or supplementary co-processors, such as math co processors.The GPU(s) 1308 may be used by the computing device 1300 to render graphics (e.g., 3D graphics). The GPU(s) 1308 may include hundreds or thousands of cores that are capable of handling hundreds or thousands of software threads simultaneously. The GPU(s) 1308 may generate pixel data for output images in response to rendering commands (e.g., rendering commands from the CPU(s) 1306 received via a host interface). The GPU(s) 1308 may include graphics memory, such as display memory, for storing pixel data. The display memory may be included as part of the memory 1304. The GPU(s) 708 may include two or more GPUs operating in parallel (e.g., via a link). When combined together, each GPU 1308 may generate pixel data for different portions of an output image or for different output images (e.g., a first GPU for a first image and a second GPU for a second image). Each GPU may include its own memory, or may share memory with other GPUs.In examples where the computing device 1300 does not include the GPU(s) 1308, the CPU(s) 1306 may be used to render graphics.The communication interface 1310 may include one or more receivers, transmitters, and/or transceivers that enable the computing device 700 to communicate with other computing devices via an electronic communication network, included wired and/or wireless communications. The communication interface 1310 may include components and functionality to enable communication over any of a number of different networks, such as wireless networks (e.g., Wi-Fi, Z-Wave, Bluetooth, Bluetooth LE, ZigBee, etc.), wired networks (e.g., communicating over Ethernet), low-power wide-area networks (e.g., LoRaWAN, SigFox, etc.), and/or the Internet.The I/O ports 1312 may enable the computing device 1300 to be logically coupled to other devices including the I/O components 1314, the presentation component(s) 1318, and/or other components, some of which may be built in to (e.g., integrated in) the computing device 1300. 
Illustrative I/O components 1314 include a microphone, mouse, keyboard, joystick, game pad, game controller, satellite dish, scanner, printer, wireless device, etc. The EO components 1314 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition (as described in more detail below) associated with a display of the computing device 1300. The computing device 1300 may be include depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, touchscreen technology, and combinations of these, for gesture detection and recognition. Additionally, the computing device 1300 may include accelerometers or gyroscopes (e.g., as part of an inertia measurement unit (IMU)) that enable detection of motion. In some examples, the output of the accelerometers or gyroscopes may be used by the computing device 1300 to render immersive augmented reality or virtual reality.The power supply 1316 may include a hard-wired power supply, a battery power supply, or a combination thereof. The power supply 1316 may provide power to the computing device 1300 to enable the components of the computing device 1300 to operate.The presentation component s) 1318 may include a display (e.g., a monitor, a touch screen, a television screen, a heads-up-display (HUD), other display types, or a combination thereof), speakers, and/or other presentation components. The presentation component(s) 1318 may receive data from other components (e.g., the GPU(s) 1308, the CPU(s) 1306, etc.), and output the data (e.g., as an image, video, sound, etc.).The disclosure may be described in the general context of computer code or machine- useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules including routines, programs, objects, components, data structures, etc., refer to code that perform particular tasks or implement particular abstract data types. The disclosure may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.As used herein, a recitation of“and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example,“element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elementsA, B, and C. In addition,“at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of elementB. 
Further,“at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B.The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms“step” and/or“block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. |
Ascertaining command completion in flash memories is disclosed. An exemplary aspect includes eliminating the software lock and the outstanding requests variable and replacing them with a transfer request completion register. The transfer request completion register may be mapped to the universal flash storage (UFS) Transfer Protocol (UTP) Transfer Request List (UTRL) slots. The controller of the host - a hardware component - may set the bit in the transfer request completion register on transfer request completion at the same time the doorbell register is cleared. After this bit has been read, the bit in the transfer request completion register is cleared. |
A universal flash storage (UFS) system comprising:a doorbell register having a number of bits corresponding to a UFS transfer protocol (UTP) Transfer Request List (UTRL);a completion notification register having a same number of bits; anda control system operatively coupled to the doorbell register and the completion notification register and configured to:set a doorbell bit in the doorbell register for a send request start;set a completion bit in the completion notification register on transfer request completion; andclear the doorbell bit on transfer request completion, wherein the completion bit is set at the same time the doorbell bit is cleared.The UFS system of claim 1, wherein the control system is further configured to issue a transfer request to a device, wherein the send request start is associated with the transfer request.The UFS system of claim 1, further comprising a communication interface configured to couple a host to a device.The UFS system of claim 2, wherein the control system is further configured to clear the completion bit after processing completion of the transfer request.The UFS system of claim 4, wherein the control system is further configured to reuse a slot associated with the completion bit after clearing the completion bit.The UFS system of claim 2, wherein the transfer request includes a write command to write data to the device, or wherein the transfer request includes a read command to read data from the device.A method of controlling a memory system, comprising:generating a transfer request in a host;setting a bit in a doorbell register in the host identifying the transfer request;passing the transfer request to a device through a communication interface;completing a transfer associated with the transfer request;clearing the bit in the doorbell register; andsetting a completion bit in a completion register, wherein the completion bit is set at the same time the doorbell bit is cleared.The method of claim 7, wherein generating the transfer request comprises generating a read command to read data from the device, or wherein generating the transfer request comprises generating a write command to write data to the device.The method of claim 7, further comprising starting the transfer.The method of claim 7, further comprising handling interrupts to the transfer request without need for a software lock.The method of claim 7, further comprising clearing the completion bit after processing completion of the transfer request, and reusing a slot associated with the completion bit after clearing the completion bit.An embedded Multi-Media Controller (eMMC) memory system comprising:a doorbell register having a number of bits corresponding to an eMMC Task Descriptor List (TDL);a completion notification register having a same number of bits; anda control system operatively coupled to the doorbell register and the completion notification register and configured to:set a doorbell bit in the doorbell register for a send request start;set a completion bit in the completion notification register on transfer request completion; andclear the doorbell bit on transfer request completion, wherein the completion bit is set at the same time the doorbell bit is cleared.The eMMC system of claim 12, wherein the control system is further configured to issue a transfer request to a device, wherein the send request start is associated with the transfer request.The eMMC system of claim 12, further comprising a communication interface configured to couple a host to the device, wherein the 
control system is further configured to reuse a slot associated with the completion bit after clearing the completion bit.The eMMC system of claim 13, wherein the transfer request includes a write command to write data to the device, or wherein the transfer request includes a read command to read data from the device. |
PRIORITY CLAIMThe present application claims priority to U.S. Provisional Patent Application Serial No. 61/875,907 filed on September 10, 2013 , and entitled "SYSTEMS AND METHODS FOR ASCERTAINING COMMAND COMPLETION IN FLASH MEMORY," which is incorporated herein by reference in its entirety.The present application also claims priority to U.S. Patent Application Serial No. 14/467,404 filed on August 25, 2014 , and entitled "ASCERTAINING COMMAND COMPLETION IN FLASH MEMORIES," which is incorporated herein by reference in its entirety.BACKGROUNDI. Field of the DisclosureThe technology of the disclosure relates generally to flash memory and processing commands for flash memory.II. BackgroundFlash memory is common in many sorts of computing devices including mobile terminals such as cameras, audio players, smart phones, tablets, and the like. Flash memory may be one of two general types - removable or embedded - and several standards exist for both general types. One standard initially designed for embedded situations is the Universal Flash Storage (UFS) standard set forth by the Joint Electron Device Engineering Council (JEDEC). Another common standard is the embedded Multi-Media Controller (eMMC) standard.In the UFS standard, a host communicates with a device that holds the memory elements. The host issues commands to the device to execute "transfer request" tasks such as writing data into the memory elements, reading data from the memory elements, and synchronize cache. By design, LTFS supports multiple concurrent transfer requests. The transfer requests are software driven at the controller of the host and use a register called a doorbell register and a software variable referred to (at least within a LINUX implementation) as an outstanding requests variable. While the term "outstanding requests variable" is specific to LINUX, other operating systems use similar variables and all are referred to herein as outstanding requests variables. Each transfer request occupies a slot and a corresponding bit in the doorbell register and the outstanding requests variable. When sending a new transfer request, software sets a bit corresponding to the slot in the register and the variable. Setting the bit in the register notifies the controller that a new transfer request is ready. When a transfer request is completed, the hardware clears the bit corresponding to the slot in the register, and software then compares the bit in the register to the bits in the outstanding requests variable to find completed requests. Note that eMMC is similar, although the particular elements may have different names.If the host receives an interrupt before setting the doorbell register and after updating the outstanding requests variable, the host may recognize that the request is completed before the request was sent. In such a situation, the software may complete the request, but with an error. Alternatively, if the host receives an interrupt after setting the register and the request was completed before updating the outstanding requests variable, the request may be lost. Still another situation may delay requests until another transfer request completion interrupt arrives. Such situation either delays the request, thereby causing performance degradation, causes the delay to last indefinitely, or until an error occurs which aborts the command. Currently, such situations are avoided through the use of a software lock. However, such software locks are slow and may exclude other transfer requests. 
Further, such software locks or exclusions generally increase latency resulting in a degradation of performance, especially in multi-core processors.SUMMARY OF THE DISCLOSUREAspects disclosed in the detailed description include ascertaining command completion in flash memories. An exemplary aspect includes eliminating the software lock and the outstanding requests variable and replacing them with a transfer request completion register. The transfer request completion register may be mapped to the universal flash storage (UFS) Transfer Protocol (UTP) Transfer Request List (UTRL) slots. The controller of the host - a hardware component - may set the bit in the transfer request completion register on transfer request completion at the same time the doorbell register is cleared. After this bit has been read, the bit in the transfer request completion register is cleared. While UFS is specifically contemplated, other flash memory standards such as embedded Multi-Media Controller (eMMC) also may benefit from aspects of the present disclosure (e.g., eMMC has a Task Descriptor List (TDL) that is functionally equivalent to the UTRL). Replacing the software lock and the outstanding requests variable improves performance by reducing latency and eliminating the transfer request exclusions that may occur with the use of such software locks. In particular, completion and issuing contexts can work simultaneously. Transfer requests may be issued from multiple contexts at the same time. The use of these multiple contexts improves performance, especially in multi-core devices such as smart phones.In this regard in one aspect, a UFS system is disclosed. The UFS system includes a doorbell register having a number of bits corresponding to a UTRL. The UFS system also comprises a completion register having a same number of bits. The UFS system further comprises a control system operatively coupled to the doorbell register and the completion register. The control system is configured to set a doorbell bit in the doorbell register for a send request start. Stated another way, when a bit in the doorbell register is raised, it signals the controller that a transfer request is ready and can be processed (i.e., start transferring the data). The control system is also configured to set a completion bit in the completion register on transfer request completion. The control system is also configured to clear the doorbell bit on transfer request completion.In another aspect, a memory system is disclosed. The memory system includes a doorbell register having a number of bits. The memory system also includes a completion register having a same number of bits. The memory system also includes a control system operatively coupled to the doorbell register and the completion register. The control system is configured to set a doorbell bit in the doorbell register for a send request start. The control system is also configured to set a completion bit in the completion register on transfer request completion. The control system is also configured to clear the doorbell bit on transfer request completion.In another aspect, a method of controlling a memory system is disclosed. The method includes generating a transfer request in a host. The method also includes setting a bit in a doorbell register in the host identifying the transfer request. The method also includes passing the transfer request to a device through a communications interface. The method also includes completing a transfer associated with the transfer request. 
The method also includes clearing the bit in the doorbell register. The method also includes setting a completion bit in a completion register.In another aspect, an embedded Multi-Media Controller (eMMC) memory system is disclosed. The memory system includes a doorbell register having a number of bits corresponding to an eMMC task descriptor list. The memory system also includes a completion notification register having a same number of bits. The memory system also includes a control system operatively coupled to the doorbell register and the completion notification register and configured to set a doorbell bit in the doorbell register for a send request start. The control system is also configured to set a completion bit in the completion notification register on transfer request completion. The control system is also configured to clear the doorbell bit on transfer request completion.BRIEF DESCRIPTION OF THE FIGURESFigure 1 is a block diagram of an exemplary connection between a host and a device without exemplary aspects of the present disclosure;Figure 2A illustrates a first race condition that may arise in a memory system without synchronization locks or aspects of the present disclosure;Figure 2B illustrates a second race condition that may arise in a memory system without synchronization locks or aspects of the present disclosure;Figure 3 illustrates a flow chart of a conventional data flow process using a lock to prevent race conditions such as those illustrated in Figures 2A and 2B ;Figure 4 is a block diagram of an exemplary connection between a host and a device with host registers according to exemplary aspects of the present disclosure;Figure 5 is a flowchart illustrating an exemplary process of data flow between the host and device of Figure 4 ; andFigure 6 is a block diagram of an exemplary processor-based system that can employ the host and device illustrated in Figure 4 .DETAILED DESCRIPTIONWith reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.Aspects disclosed in the detailed description include ascertaining command completion in flash memories. An exemplary aspect includes eliminating the software lock and the outstanding requests variable and replacing them with a transfer request completion register. The transfer request completion register may be mapped to the universal flash storage (UPS) Transfer Protocol (UTP) Transfer Request List (UTRL) slots. The controller of the host - a hardware component - may set the bit in the transfer request completion register on transfer request completion at the same time the doorbell register is cleared. After this bit has been read, the bit in the transfer request completion register is cleared. While UFS is specifically contemplated, other flash memory standards such as embedded Multi-Media Controller (eMMC) also may benefit from aspects of the present disclosure (e.g., eMMC has a Task Descriptor List (TDL) that is functionally equivalent to the UTRL). Replacing the software lock and the outstanding requests variable improves performance by reducing latency and eliminating the transfer request exclusions that may occur with the use of such software locks. In particular, completion and issuing contexts can work simultaneously. 
Transfer requests may be issued from multiple contexts at the same time. The use of these multiple contexts improves performance, especially in multi-core devices such as smart phones.Before addressing aspects of the present disclosure, an overview of conventional systems and issues that arise therewith presented with reference to Figures 1-3 . Exemplary aspects of the present disclosure begin below with reference to Figure 4 .In this regard, Figure 1 is block diagram of a host 10 coupled to a device 12 via conductors 14. The communications between host 10 and device 12 conform to the UFS v2.0 standard published September 2013. While the present discussion focuses on UFS, other flash standards may also benefit from aspects of the present disclosure including embedded Multi-Media Controller (eMMC). The host 10 includes a host controller 16 that is a hardware based system operatively coupled to an appropriate communication interface 18. Host controller 16 interoperates with host software 20. Collectively, the host controller 16 and host software 20 are a control system.With continued reference to Figure 1 , the device 12 includes a controller 22 that is a hardware based system operatively coupled to an appropriate communication interface 24. The device 12 further includes a memory unit 26 (e.g., a Negated AND or NOT AND (NAND) Flash storage device). The device 12 further includes a task queue 28. Collectively, the controller 22 and any software associated with the operation of the controller 22 are a control system,Host 10 further includes a doorbell register 30 (UTRLDBR). The doorbell register 30 is a hardware based component with a number of bits equal to a number of transfer request slots handled by the host controller 16. That is, the doorbell register 30 has a number of bits corresponding to a UFS standard Protocol Transfer Request list.With continued reference to Figure 1 , in a conventional UFS system, the computing element incorporating the host 10 may need to read or write data to the memory unit 26. Accordingly, a transfer request that outlines the data transfer requested may be sent to the host controller 16. The host software 20 then assigns a slot to the transfer request. The host controller 16 may have multiple slots (not shown) to handle multiple transfer requests. Multiple transfer requests are common, especially in multicore processors. When the host software 20 has prepared the transfer request for the device 12, the host software 20 sets a bit in the doorbell register 30 corresponding to the slot with which the transfer request is associated. Setting the bit in the doorbell register 30 signals to the host controller 16 to send the transfer request to the device 12 through the communication interface 18.The device 12 handles the transfer request according to well documented rules within the UFS standard. The data transfer occurs, and once the data transfer is completed, the host controller 16 notifies the host software 20 by clearing the bit in the doorbell register 30. In operation, the host 10 may receive a transfer request interrupt. The host software 20 checks the doorbell register 30 to see which tasks are finished and which slots are already assigned. However, absent more information, the host software 20 cannot discriminate between bits set to zero for completed tasks and bits set to zero for a request that has not yet been sent. 
Accordingly, the host software 20 maintains an outstanding requests variable (not shown), which indicates which slots have been assigned.The outstanding requests variable is updated once preparations to send a transfer request have begun and cleared once the response for transfer request is received from the device 12. The host software 20 compares the outstanding requests variable with the doorbell register 30 to know which slots have completed requests. Absent further control, the UFS system may have race conditions which cause errors, delays, aborted commands, or the loss of commands. Two such race conditions are illustrated in Figures 2A and 213.In this regard, Figure 2A illustrates, through a process 34, what happens when a send request stops running before the outstanding requests variable is updated. It should be appreciated that process 34 may be implemented by different elements including software and hardware and may be separate and distinct components (e.g., different sub-routines, different software modules, different IC, or the like). In particular, and as stated above, when the host software 20 of Figure 1 has prepared the transfer request for the device 12, the host software 20 sets a bit in the doorbell register 30 (block 36) corresponding to the slot with which the transfer request is associated. The context of the host to changes (block 38) corresponding to the host 10 processing some other transfer request or processing some incoming data. Device 12 processes the transfer request (block 40). The device 12 may need some time to process the transfer request. While the device 12 is processing the transfer request, a context switch, sending the *send command process' to sleep may occur. When the device 12 completes the transfer request, the device 12 sends a completed task notification. The host 10 then raises a completion interrupt (block 42). At this point, because the context changed, the outstanding requests variable was never updated. Thus, at the completion interrupt, the host 10 checks the doorbell register 30 (block 44) and reads the outstanding requests variable (block 46). However, as noted above, the outstanding requests variable was not updated and thus, the completed request is not recognized (block 48) and the command is aborted or timed out (block 50).Similarly, Figure 2B illustrates a process 52 where the updating of the outstanding requests variable occurs before updating the doorbell register 30 (the opposite of the order described above and done to avoid the race condition set forth in process 34). It should be appreciated that process 52 may be implemented by different elements including software and hardware and may be separate and distinct components (e.g., different sub-routines, different software modules, different IC, or the like). However, process 52 gives rise to another race condition (i.e., two processes are competing for the same resource) where the command is completed, but with errors. In particular, the process 52 begins at the point in time where the outstanding requests variable is updated (block 54). A completion interrupt for another transfer request is raised (block 56). However, the interrupt occurs before updating the doorbell register 30. Thus, when the doorbell register 30 is read (block 58), the bit is not set. However, when the outstanding requests variable is read (block 60), the host software 20 sees the transfer request and recognizes a completed request (block 62). 
Thus, the host software 20 will complete the request, but with an error (block 64).Conventional systems prevent these race conditions through the use of a software lock. Software locks increase latency. In the interest of completeness, Figure 3 illustrates the flow processes associated with a send request context 66 and a request completion context 68. The process associated with send request context 66 begins with a send request context start (block 70). The host 10 prepares the transaction data (block 72). The host software 20 then sets a lock and disables interrupts (block 74). The software sets the outstanding requests variable (block 76) and then the doorbell register 30 is set (block 78). After the doorbell register 30 is set, the lock is disabled and interrupts enabled (block 80). After the lock is removed, the send request context ends (block 82).With continued reference to Figure 3 , the request completion context 68 starts (block 84). The host controller 16 clears the bit(s) in the doorbell register 30 (block 85). The request completion interrupt, occurs and a lock is created by the host software 20 (block 86). The host 10 reads the outstanding requests variable (block 88) in the host software 20. The host 10 then reads the doorbell register 30 (block 90) and determines completed requests with reference to the doorbell register 30 and the outstanding requests variable (block 92). For each completed request (block 94), a subroutine is performed wherein the response code is read (block 96), any errors are handled (block 98) and an upper layer (e.g., the software that issued the request in the first instance) is notified of the request completion (block 100). When all completed requests have been processed at block 94, the outstanding requests variable's corresponding bits are cleared (block 101), and then the host software 20 removes the lock and exits (block 102) resulting in the end of the request completion context (block 104). The existence of the locks in both send request context 66 and request completion context 68 is highlighted by the designation locked sequence (block 106).In contrast to the processes of send request context 66 and request completion context 68, aspects of the present disclosure allow the elimination of the lock, and the attendant disadvantages are alleviated. In this regard, Figure 4 illustrates a host 10' that includes a command completion register (UTRLCNR) 32 (also sometimes referred to as a completion notification register). Note that in most other requests host 10' has elements identical to host 10 of Figure 1 . As with doorbell register 30, the command completion register 32 is hardware based and has a number of bits equal to a number of slots handled by the host controller 16. That is, use of the command completion register 32 allows a hardware solution instead of the locks. By use of the hardware solution, sending and completion of requests can start at any point. The ability to have multiple contexts operating concurrently improves the operating efficiencies, especially for multi-core processors.In this regard, Figure 5 provides send request context 108 and request completion context 110. Send request context 108 starts (block 112) and the host software 20 prepares the transaction data (block 114). The host software 20 sets the doorbell register 30 (block 116) and the send request ends (block 118). 
Because there is no need to set the software variable for the outstanding requests, there is no concern about an interrupt occurring.With continued reference to Figure 5 , the request completion context 110 starts (block 120). Initially, the hardware clears the doorbell register 30 and sets the command completion register 32 (block 122). An interrupt occurs (block 124). The host software 20 reads the command completion register 32 (block 126) to ascertain what tasks are completed. For each completed task, a subroutine begins (block 128) where the response code is read (block 130), any errors are handled (block 132) and the host software 20 clears the command completion register 32 (block 134). After the host software 20 clears the command completion register 32, an upper layer (e.g., the software that issued the request) is notified of the request completion (block 136). After clearing and notification, the slot in the command completion register 32 corresponding to the bit may be reversed as needed or defined. When all completed requests have been processed (block 128), the request completion context 110 ends (block 138). In contrast to the time period when the lock disables interrupts (highlighted by 106 in Figure 3 ), block 139 highlights that the interrupts can occur at any point, and in particular may occur during the times that the conventional systems impose the lock. As noted above, elimination of the lock improves performance and the addition of the new hardware (i.e., the command completion register 32) is viewed as an acceptable tradeoff for the improved performance.Ascertaining command completion in flash memories according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.In this regard, Figure 6 illustrates an example of a processor-based system 140 that can employ the host 10' and device 12 illustrated in Figure 4 . In this example, the processor-based system 140 includes one or more central processing units (CPUs) 142, each including one or more processors 144. The CPU(s) 142 may be a master device and include the host 10'. The CPU(s) 142 may have cache memory 146 coupled to the processor(s) 144 for rapid access to temporarily stored data. The CPU(s) 142 is coupled to a system bus 148. As is well known, the CPU(s) 142 communicates with these other devices by exchanging address, control, and data information over the system bus 148. For example, the CPU(s) 142 can communicate bus transaction requests to a memory system 150 that may include the device 12. Although not illustrated in Figure 6 , multiple system buses 148 could be provided, wherein each system bus 148 constitutes a different fabric.Other master and slave devices can be connected to the system bus 148. 
As illustrated in Figure 6 , these devices can include the memory system 150, which may have multiple memory units (not specifically illustrated), one or more input devices 152, one or more output devices 154, one or more network interface devices 156, and one or more display controllers 158, as examples. The input device(s) 152 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 154 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 156 can be any devices configured to allow exchange of data to and from a network 160. The network 160 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wide local area network (WLAN), and the Internet. The network interface device(s) 156 can be configured to support any type of communication protocol desired.The CPU(s) 142 may also be configured to access the display controller(s) 158 over the system bus 148 to control information sent to one or more displays 162. The display controller(s) 158 sends information to the display(s) 162 to be displayed via one or more video processors 164, which process the information to be displayed into a format suitable for the display(s) 162. The display(s) 162 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.FURTHER SUMMARY OF THE INVENTION1. A universal flash storage (UFS) system comprising:a doorbell register having a number of bits corresponding to a UFS transfer protocol (UTP) Transfer Request List (UTRL);a completion notification register having a same number of bits; anda control system operatively coupled to the doorbell register and the completion notification register and configured to:set a doorbell bit in the doorbell register for a send request start;set a completion bit in the completion notification register on transfer request completion; andclear the doorbell bit on transfer request completion.2. The UFS system of 1, wherein the control system is further configured to issue a transfer request to a device.3. 
The UFS system of 2, wherein the send request start is associated with the transfer request.4. The UFS system of 1, further comprising a communication interface configured to couple a host to a device.5. The UFS system of 2, wherein the control system is further configured to clear the completion bit after processing completion of the transfer request.6. The UFS system of 5, wherein the control system is further configured to reuse a slot associated with the completion bit after clearing the completion bit.7. The UFS system of 2, wherein the transfer request includes a write command to write data to the device.8. The UFS system of 2, wherein the transfer request includes a read command to read data from the device.9. The UFS system of 1 integrated into a device selected from the group consisting of a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.10. A memory system comprising:a doorbell register having a number of bits;a completion register having a same number of bits; anda control system operatively coupled to the doorbell register and the completion register and configured to:set a doorbell bit in the doorbell register for a send request start;set a completion bit in the completion register on transfer request completion; andclear the doorbell bit on transfer request completion.11. A method of controlling a memory system, comprising:generating a transfer request in a host;setting a bit in a doorbell register in the host identifying the transfer request;passing the transfer request to a device through a communication interface;completing a transfer associated with the transfer request;clearing the bit in the doorbell register; andsetting a completion bit in a completion register.12. The method of 11, wherein generating the transfer request comprises generating a read command to read data from the device.13. The method of 11, wherein generating the transfer request comprises generating a write command to write data to the device.14. The method of 11, further comprising starting the transfer.15. The method of 11, further comprising handling interrupts to the transfer request without need for a software lock.16. The method of 11, further comprising receiving an interrupt generated by a second transfer request.17. The method of 11, further comprising clearing the completion bit after processing completion of the transfer request.18. The method of 17, further comprising reusing a slot associated with the completion bit after clearing the completion bit.19. An embedded Multi-Media Controller (eMMC) memory system comprising:a doorbell register having a number of bits corresponding to an eMMC Task Descriptor List (TDL);a completion notification register having a same number of bits; anda control system operatively coupled to the doorbell register and the completion notification register and configured to:set a doorbell bit in the doorbell register for a send request start;set a completion bit in the completion notification register on transfer request completion; andclear the doorbell bit on transfer request completion.20. 
The eMMC system of 19, wherein the control system is further configured to issue a transfer request to a device.21. The eMMC system of 20, wherein the send request start is associated with the transfer request.22. The eMMC system of 19, further comprising a communication interface configured to couple a host to the device.23. The eMMC system of 20, wherein the control system is further configured to clear the completion bit after processing completion of the transfer request.24. The eMMC system of 23, wherein the control system is further configured to reuse a slot associated with the completion bit after clearing the completion bit.25. The eMMC system of 20, wherein the transfer request includes a write command to write data to the device.26. The eMMC system of 20, wherein the transfer request includes a read command to read data from the device. |
A substrate for solder ball assembling a semiconductor device substantially parallel onto said substrate, said device having a plurality of terminals arrayed on a warped surface, comprising an electrically insulating surface including a plurality of discrete metallic areas; said areas having locations matching the locations of said device terminals, and further being suitable for solder ball attachment in surface mount reflow operation; and said areas further having at least one characteristic suitable for accommodating said device warping in solder reflow operation, whereby areas having higher amounts of said characteristic cause said solder balls to become thinner during reflow, resulting in lower solder joint heights, relative to the heights of the remaining solder joints. |
We claim: 1. A method for assembling a semiconductor device having a plurality of terminals on a warped surface onto a substrate having an electrically insulating surface including a plurality of discrete metallic areas in locations matching the locations of said device terminals, said areas further having at least one characteristic suitable for accommodating said device warping, comprising the steps of: forming an array of solder balls attached to said device terminals so that each terminal is contacted by one of said solder balls; mechanically aligning said solder balls so that each solder ball is placed into alignment with one of said metallic areas on said substrate; contacting said solder balls and said metallic substrate areas, as completely as permitted by said device surface warping; applying energy such that said substrate and said device increase in temperature and transfer heat to said solder balls, causing said solder balls to reach a liquid state; dwelling for metallurgical interaction, thereby activating said substrate area characteristic to cause said liquid solder balls to become thinner, resulting in lower solder joint heights, and to accommodate said device warping, resulting in device alignment substantially parallel to said substrate; and removing said energy such that said solder balls cool and harden, forming physical bonds between said solder balls and said metallic areas. 2. The method according to claim 1 wherein said energy is supplied uniformly while said substrate and said device move through an oven. 3. The method according to claim 1 wherein said substrate area characteristic is the size of said areas, causing said liquid solder to spread wider laterally by wetting and surface tension. 4. The method according to claim 1 wherein said substrate area characteristic is the metallic thickness of said areas, causing said liquid solder to penetrate deeper by metallic mixing. |
FIELD OF THE INVENTION The present invention is related in general to the field of semiconductor devices and processes and more specifically to structures, materials and fabrication of substrates to be used in surface mount assembly of semiconductor devices. DESCRIPTION OF THE RELATED ART In order to successfully attach a leaded semiconductor surface mount device onto a board by soldering, all the device leads have to touch the board surface simultaneously, or at least be within a certain small distance from that surface; they have to exhibit "coplanarity". The coplanarity is a function of the lead pitch. As an example of industry practice, for a lead pitch of 0.65 mm, the acceptable coplanarity is 0.1 mm, representing a tolerance window of .±.0.05 mm. For a lead pitch of 0.3 mm, the acceptable coplanarity is only 0.05 mm (for comparison, the diameter of a human hair falls into the 0.1 to 0.3 mm range). With the advent of the Ball Grid Array (BGA) package for semiconductor devices, the coplanarity of leaded devices is no longer an issue, since the leads are replaced by solder "balls" for surface mount assembly. However, in plastic BGA's, the overall packages are usually somewhat flexible, since they are composed of flexible materials. There is typically a significant difference in the coefficients of thermal expansion between the silicon chip, the plastic substrate used for chip mounting, and the encapsulation material (commonly a plastic molding compound). Consequently, in processes at elevated temperatures, the package may slightly warp and then represent, as a whole package, coplanarity problems in subsequent assembly processes. Due to the warping and coplanarity problem, only a limited number of solder balls attached to the warped package surface will contact the board in assembly, while a substantial number of solder balls will not contact the board surface and will not be able to form solder joints in solder reflow attach processes. Typically, assembly and packaging processes at elevated temperatures include: Transfer molding at 175 DEG C. in less than 1 minute. Polymerization of the molded device at 175 DEG C. for up to six hours in "cure" ovens. Reflow of attached solder balls or bumps. Typically, solder bumps are reflowed in chain type furnaces at temperatures dependent on the melting of the solder mixture (typically between about 150 and 250 DEG C.). After these temperature treatments, plastic BGA packages may exhibit warping to the extent that uniform solder ball attachment onto substrates is difficult, if not outright impossible. The resultant coplanarity problems are particularly pronounced for BGA packages using plastic films or other thin plastic materials as supporting parts. As a consequence, serious yield losses and reliability problems have been encountered in board attach processes of plastic BGA's. The proposal to remedy the coplanarity problem by using an array of solder balls having different diameters dependent on the location on the package, is completely impractical; in addition, the degree of warping varies with device type, size and materials of packages, thermal process history, and so on. As an example, in BGA's having a convex warping, this proposal would require smaller diameter solder balls for the center portion of the package and larger diameter balls for the peripheral portions--a proposition which mass manufacturing would have great difficulties in handling. 
An urgent need has therefore arisen for a low-cost and reliable approach, involving both package and board structures and the assembly fabrication method, to provide uniform board attachment of warped plastic Ball Grid Array packages, with the goal of assembling the plastic BGA device substantially parallel to the substrate. The structure of the substrate and the assembly method should be flexible enough to be applied for different semiconductor product families and a wide spectrum of design and assembly variations, and should achieve improvements towards the goals of enhanced process yields and device reliability. Preferably, these innovations should be accomplished using the installed equipment base so that no investment in new manufacturing machines is needed. SUMMARY OF THE INVENTION According to the present invention for a semiconductor integrated circuit (IC) assembly, a Ball Grid Array (BGA) package with the solder balls arrayed on a warped surface can be attached substantially parallel onto a flat substrate when the substrate has contact areas featuring at least one distributed characteristic to cause the solder balls to become thinner during reflow. One such distributed characteristic may be the size of the contact areas. Another such feature may be the metallic thickness of the areas. Most frequently, the warped surface has an outward concave contour, but the invention applies also to convex BGA surface contours. The present invention is related to high density ICs packaged as plastic BGA's, especially those having high numbers of inputs/outputs, and also to low end, low cost devices. These ICs can be found in many semiconductor device families such as standard linear and logic products, digital signal processors, microprocessors, digital and analog devices, high frequency and high power devices, and both large and small area chip categories. It is an aspect of the present invention to provide a substrate with solder contact areas having at least one characteristic which exploits the wetting of solder on metallic surfaces, the dissolving strength of liquid solder, and the self-aligning feature of liquid solder surfaces based on surface tension. Another aspect of the present invention is to design these characteristics so that certain categories of substrates match certain BGA package types having their known statistical degree of warping. Another aspect of the invention is to reach these goals without cost of equipment changes and new capital investment and using the installed fabrication equipment base. Another aspect of the invention is to teach guidelines for designing the substrate contact areas in order to match the corrective characteristic to the statistical coplanarity distribution of the BGA-to-be-assembled. These aspects have been achieved by the teachings of the invention concerning the structure, geometries and material selection of the substrates, and the assembly methods suitable for mass production. Various modifications have been successfully employed. In the first embodiment of the invention, the sizes of the metallic substrate areas are modified to accommodate a BGA package with concave warping of its solder ball surface. The resultant lowering of the solder joint heights and resolved coplanarity problem are illustrated. In the second embodiment of the invention, the metallic thickness and solder-dissolving characteristic of the substrate contact areas are modified to accommodate a BGA package with concave warping of its solder ball surface. 
The resultant lowering of the solder joint heights and resolved coplanarity problem are illustrated. The technical advances represented by the invention, as well as the aspects thereof will become apparent from the following description of the preferred embodiments of the invention, when considered in conjunction with the accompanying drawings and the novel features set forth in the appended claims. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic and simplified cross section of a BGA device with coplanarity problem, to be attached to a substrate. FIG. 2 is a schematic and simplified cross section of a BGA device with coplanarity problem and its assembly, substantially parallel, onto a substrate. FIG. 3 is a mathematical description of the conditions for assembly of a BGA with coplanarity problem, substantially parallel to a flat substrate. FIG. 4 lists mathematical equations expressing the interrelation of the parameters in FIG. 3. FIGS. 5 to 7 illustrate schematically the first embodiment of the invention. FIG. 5 is a simplified cross section through a portion of a substrate characterized by metallic contact areas of various sizes. FIGS. 6A and 6B are simplified cross sections through the contact areas of FIG. 5 before and after solder ball reflow. FIG. 7 lists numerical examples of BGAs featuring the first embodiment of the invention. FIGS. 8 to 10 illustrate schematically the second embodiment of the invention. FIG. 8 is a simplified cross section through a portion of a substrate characterized by metallic contact areas of various thicknesses. FIGS. 9A and 9B are simplified cross sections through the contact areas of FIG. 8 before and after solder ball reflow. FIG. 10 lists numerical examples of BGAs featuring the second the second embodiment of the invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS As illustrated in FIG. 1, the invention relates to the assembly of an integrated circuit (IC), packaged in a plastic Ball Grid Array (BGA) package 100, onto a flat substrate 110 in solder reflow process. Because of the mismatch of the coefficients of thermal expansion of the semiconductor chip and the mostly plastic parts (including carrier 101 and encapsulant 102) of the package, the BGA package frequently deviates from a flat shape and may, for instance, exhibit a convex surface curvature 103. As defined herein, the difference in elevation between BGA solder "balls" touching the substrate surface and BGA "balls" not touching the surface due to package warping, is called BGA "coplanarity". In FIG. 1, solder balls 105 touch the substrate surface at places 111; however, solder balls 106 do not touch the substrate surface. In devices having a warping problem with a convex surface, the coplanarity reaches a maximum value C, designated 104 in FIG. 1, for solder balls positioned at the BGA perimeter. As defined herein, the term "ball" is used to refer to a finite body of material. In addition, it may often but by no means always have the additional connotation of approximately spherical shape. When used in conjunction with solder material after reflow, this finite body of material may rather have the shape of a half-dome, a truncated cone, or a cylinder with straight, concave or convex outlines. It is still referred to as a "ball". The present invention relates to methods of assembling by solder reflow a warped BGA 200 in FIG. 2, having the maximum coplanarity 204, onto a flat substrate 210. 
During assembly, all solder balls have to undergo reflow and will have to result in different solder joint heights dependent on the position of the solder ball on the warped BGA surface. In the warped BGA of FIG. 2 having a convex surface, balls 205 show a lower solder joint height compared to balls 206 around the perimeter of the package. By way of example, BGAs with small outlines (for instance, .mu.*BGA.TM.), have square shape of 12.times.12 mm and solder balls numbering 100, 128, 144, or 180; square shape of 15.times.15 mm and solder balls numbering 176 or 196; solder ball diameter after reflow typically 450 .mu.m. In other devices, solder ball diameters may be smaller, as in the diameter range of about 100 to 120 .mu.m, or may be considerably larger. Using the warped BGA after solder reflow of FIG. 2 as a guideline, FIG. 3 derives the mathematical relations between coplanarity C and the extreme cases of solder ball distribution in order to accommodate the device warping. Solder ball cases with tall solder joint height (designated 206 in FIG. 2), are indicated in FIG. 3 by subscripts "o": Solder ball of height Ho and radius Ro for contact length Lo over contact depth Do. Solder ball cases with low solder joint height (designated 205 in FIG. 2), are indicated by subscripts "1": Solder ball of height H1 and radius R1 for contact length L1 over contact depth D1. In FIG. 3, the warped surface is indicated by heavy line 301. The results of the mathematical model are summarized in FIG. 4 based on two relations which characterize BGA solder reflow operations: First, solder volumes Vo and V1 are identical, since no material is lost or created in the assembly process (equation (1)). Second, the taller solder height Ho is the sum of the smaller solder height H1 plus the coplanarity C (equations (3a) and (3b)), expressing the goal of substantially parallel assembly of BGA and substrate. Further, the volume of any solder ball is expressed in equation (2) in terms of solder ball radius and contact length and depth. For ease of calculations, the tacit assumption is made that the solder contacts on the substrate surface are identical. In the actual assembly process, the solder contacts at the package joints are identical and the contacts on the substrate surface are variable. This does not affect the modeling results. As a consequence, the embodiments of the invention start with the design of the minimum length and minimum depth of the substrate contact areas. Based on typical fabrication practice, it is reasonable to let these minima be Lo and Do, as is also suggested by FIGS. 2 and 3. Fixing the minima and using equation (1) delivers equation (4). Consequently, the remaining variables are R1, L1, and D1. When L1 and D2 are designed, R2 will be determined as a consequence of equation (1). Designing L1 leads to the first embodiment of the invention; designing D1 leads to the second embodiment. First Embodiment of the Invention: Discrete Lengths L1, L2, . . . , Ln FIGS. 5 to 7 illustrate structure, materials and processes of the first embodiment of the present invention. The embodiment is based on variable sizes of the discrete metallic contact areas 500 of the substrate 510. The substrate is preferably made of electrically insulating organic material selected from a group consisting of polyimide, polymer strengthened by glass fibers, FR-4, FR-5, and BT resin. The substrate has a generally flat surface. 
Another option is a thermally conductive substrate (for instance, metal such as copper) with an insulating layer on top. Deposited on the surface, or inset in the surface, are contact areas 500, usually consisting of copper with a flash of gold. However, if metal interdiffusion with the solder is to be kept at a minimum, a thin layer of refractory metal (titanium or titanium-tungsten alloy, 40 to 700 nm thick, preferred 50 nm) may be deposited over the copper layer, followed by a layer of platinum or platinum-rich alloy (200 to 800 nm thick, preferred 500 nm). Other materials for the contact areas may be selected from a group consisting of aluminum, tungsten, or alloys thereof, overlaid by palladium or gold. FIG. 5 schematically shows a portion of substrate 510 with three discrete metallic substrate areas of lengths L1, L2, and L3, respectively. Length L1 has the smallest value, length L3 the largest. The number of areas and the actual lengths are designed based on the model of FIGS. 3 and 4 in relation to the number of BGA solder balls and the degree of surface warping of the BGA to be assembled (see examples in FIG. 7 for a typical values L1). FIGS. 6A and 6B display schematically the attachment of solder balls 601 (of approximately equal diameter) to the discrete substrate areas on substrate 510, the reflow of the solder balls 601 and the effect of the invention on the heights of the resulting solder joints. Solder balls 601 are selected from a group consisting of tin/lead, tin/indium, tin/silver, tin/bismuth, solder pastes, and conductive (for instance, silver-filled) adhesives. The solder alloy is selected based on its melting temperature convenient for the device application, and its capability to wet the contact surface completely. As FIG. 6A shows, the solder balls 601 are preferably of identical size, with the diameter varying widely dependent on device type and application; typical diameters are about 250 to 500 .mu.m, other examples are quoted above. FIG. 6B illustrates the fact that the solder balls of originally equal size spread at the reflow temperatures across the surface of the substrate contact areas and create solder joints of unequal heights H1, H2, H3. The tallest height H1 is related to the smallest length L1, the smallest height H3 to the longest length L3. Since the length of the substrate contact area is the characteristic variable in the first embodiment of the invention, this result indicates that higher amounts of the characteristic cause the solder balls to become thinner during solder reflow, relative to the thickness of the remaining solder joints. This, in turn, causes lower solder joint heights relative to the heights of the remaining solder joints. FIG. 7 tabulates typical results based on two actual .mu.*BGA.TM. geometrical data. The quoted values for L1 are averages over many L1, L2, . . . , Ln. The heights H1, H2, . . . , Hn have been structured according to the empirical warping of the plastic BGA to be assembled on the board. Using the invention for the characteristics of substrate and reflow solder balls, warped semiconductor BGA packages can be accommodated. Second Embodiment of the Invention: Discrete Depths D1, D2, . . . , Dn FIGS. 6 to 10 illustrate structure, materials and processes of the second embodiment of the present invention. The embodiment is based on variable depths of the discrete metallic contact areas 800 of the substrate 810. 
The substrate is preferably made of electrically insulating organic material selected from a group consisting of polyimide, polymer strengthened by glass fibers, FR-4, FR-5, and BT resin. The substrate has a generally flat surface. Another option is a thermally conductive substrate (for instance, metal such as copper) with an insulating layer on top. Inset into the surface are contact areas 800 of various depths D1, D2, D3, . . . , Dn. FIG. 8 shows these contact areas filled flat to the substrate surface with a material somewhat spongy (such as gold-clad aluminum sponge) or containing voids intended to be filled with solder. FIG. 9A shows these contact areas recessed to various depths, with a metal layer deposited in each recess (such as copper with a flash of gold). Other materials for the contact areas may be selected from a group consisting of aluminum, tungsten, or alloys thereof, overlaid by palladium, gold, platinum, or platinum-rich alloy. FIGS. 8 and 9A schematically show a portion of substrate 810 and 910, respectively, with three discrete metallic substrate areas of depth D1, D2, and D3, respectively. D1 has the smallest value, D3 the largest. The number of areas and the actual depths are designed based on the model in FIGS. 3 and 4 in relation to the number of BGA solder balls and the degree of surface warping of the BGA to be assembled (see examples in FIG. 10 for typical values D1). FIGS. 9A and 9B display schematically the attachment of solder balls 901 (of approximately equal diameter) to the discrete substrate areas on substrate 810 and 901, respectively, the reflow of the solder balls 901 and the effect of the invention on the heights of the resulting solder joints. The solder alloy is selected based on its melting temperature convenient for the device application, and its capability to penetrate the contact depths fully. Solder balls 901 are selected from a group consisting of tin/lead, tin/indium, tin/silver, tin/bismuth, solder pastes, and conductive (for instance, silver-filled) adhesives. The diameter of solder balls 901 may vary widely dependent on device type and application; typical diameters are about 250 to 500 .mu.m. FIG. 9B illustrates the fact that the solder balls of originally equal size penetrate at the reflow temperatures into the depth of the contact areas and create solder joints of unequal heights H1, H2, H3. The tallest height H1 is related to the shallowest depth D1, the smallest height H3 to the deepest depth D3. Since the depth of the substrate contact area is the characteristic variable in the second embodiment of the invention, this results indicates that the higher amounts of the characteristic cause the solder balls to become thinner during solder reflow, relative to the thickness of the remaining solder joints. This, in turn, causes lower solder joint heights relative to the heights of the remaining solder joints. FIG. 10 tabulates typical results based on two actual .mu.*BGA.TM. geometrical data. The quoted values for D1 are averages over many D1, D2, . . . , Dn. The heights H1, H2, . . . , Hn have been structured according to the empirical warping of the plastic BGA to be assembled on the board. Using the invention for the characteristics of substrate and reflow solder balls, warped semiconductor BGA packages can be accommodated. While this invention has been described in reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. 
Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. As an example, the material of the semiconductor chip may comprise silicon, silicon germanium, gallium arsenide, or any other semiconductor material used in manufacturing. As another example, the BGA may have an encapsulation made by overmolding or another technique, or may have no encapsulation at all. The IC chip may be wire bonded or solder flip processed. It is therefore intended that the appended claims encompass any such modifications or embodiments. |
A computing device may include a touch-sensitive display. Display logic may control a size of an active area of the touch-sensitive display and a size of an inactive area of the touch-sensitive display. The display logic to set the active area to a first size when the touch-sensitive display is in a first mode. Display logic to set the active area to a second size when the touch-sensitive display is in a second mode. In one embodiment, the computing device may be a convertible computer system. In another embodiment, the computing device may be a tablet computer system. |
WHAT IS CLAIMED IS: 1. A computing device comprising: a touch-sensitive display; and display logic at least a portion of which is hardware, the display logic to: control a size of an active area of the touch-sensitive display and a size of an inactive area of the touch-sensitive display; set the active area to a first size when the touch-sensitive display is in a first mode; and set the active area to a second size when the touch-sensitive display is in a second mode. 2. The computing device of claim 1, further comprising: a base including: a keyboard; and a first housing, and the touch-sensitive display further including a second housing, wherein at least an edge of the first housing and an edge of the second housing are detachably coupled. 3. The computing device of claim 2, wherein the first mode comprises the first housing and the second housing in a detached configuration, and the second mode comprises the first housing and the second housing in a coupled configuration. 4. The computing device of claim 1, wherein the first size of the active area is greater than the second size of the active area. 5. The computing device of claim 1, wherein the inactive area of the touch-sensitive display is a virtual bezel. 6. The computing device of claim 1, wherein the display logic to control the size of the active area based at least in part on an orientation of the touch-sensitive display. 7. The computing device of claim 1, wherein the display logic to further control the size of the active area based at least in part on a touch input to the active area of the touch- sensitive display. 8. The computing device of claim 7, wherein the display logic to: cause the touch-sensitive display to display an active button, and change the size of the active area in response to a user input to the touch-sensitive display at a location of the active button. 9. The computing device of claim 7, wherein the display logic to cause the touch- sensitive display to change a location of the active button based on an orientation of the touch- sensitive display. 10. The computing device of claim 1, wherein the display logic to cause the touch- sensitive display to display an active button surrounded by the inactive area, the display logic to further cause the touch-sensitive display to perform an operation in response to a touch input to the active button. 11. The computing device of claim 1, wherein the display logic to cause the touch- sensitive display to change a size of the inactive area of the touch-sensitive display based at least in part on execution of an application. 12. The computing device of claim 1, wherein the display logic to cause the touch- sensitive display to change the size of the active area based on an input from the group consisting of a keypad, a touchpad, a key and a touchpoint nub. 13. A method of displaying on a tablet, comprising: displaying an active area having a first size on a touch-sensitive display of the tablet; and changing a size of the active area to a second size when the touch-sensitive display changes from a first mode to a second mode. 14. The method of claim 13, wherein the first mode comprises a first housing of a base and a second housing of the touch-sensitive display in a detached configuration, and the second mode comprises the first housing and the second housing in a coupled configuration, and the first size of the active area is greater than the second size of the active area. 15. 
The method of claim 13, further comprising changing a size of the active area in response to on a touch input to a displayed active button that is surrounded by the inactive area. 16. The method of claim 15, further comprising changing a location of the active button based on an orientation of the touch-sensitive display. 17. A machine readable medium having stored thereon machine readable instructions that, when executed, implement operations to: display an active area having a first size on a touch-sensitive display of a tablet; and change the active area to a second size when the touch-sensitive display changes from the first mode to a second mode. 18. The machine readable medium of claim 17, wherein the first mode comprises a first housing of a base and a second housing of the touch-sensitive display in a detached configuration, and the second mode comprises the first housing and the second housing in a coupled configuration. 19. The machine readable medium of claim 18, wherein the first size of the active area is greater than the second size of the active area. 20. The machine readable medium of claim 17, wherein the inactive area of the touch-sensitive display is a virtual bezel. 21. The machine readable medium of claim 17, wherein the operations further to change the size of the active area based at least in part on an orientation of the touch-sensitive display. 22. The machine readable medium of claim 17, wherein the operations further to change the size of the active area based at least in part on a touch input to the active area of the touchscreen. 23. The machine readable medium of claim 17, further comprising changing a size of the active area in response to a touch input to a displayed active button that is surrounded by the inactive area. 24. The machine readable medium of claim 23, wherein the operations further to change a location of the active button based on an orientation of the touch-sensitive display. 25. The machine readable medium of claim 17, wherein the operations further to perform an operation in response to the touch-sensitive display receiving a touch input at a button that is surrounded by the inactive area. 26. The machine readable medium of claim 17, wherein the operations further to change a size of the inactive area of the touch-sensitive display based at least in part on execution of an application. 27. The machine readable medium of claim 17, wherein the operations further to change the size of the active area based on an input from the group consisting of a keypad, a touchpad and a touchpoint nub. |
DISPLAY DEVICE HAVING MULTI-MODE VIRTUAL BEZEL BACKGROUND 1. Field Embodiments may relate to a display device having a multi-mode virtual bezel (or inactive area). 2. Background Electronic devices include tablet-type computer system (or computing device) in which tablet may couple to a base, and may detach from the base. Electronic devices may also include convertible computing device that may convert from a clamshell mode to a tablet mode. BRIEF DESCRIPTION OF THE DRAWINGS Arrangements and embodiments may be described in detail with reference to the following drawings in which like reference numerals refer to like elements and wherein: FIG. 1 is a front perspective view of an electronic device according to example embodiment; FIG. 2 is a front perspective view of an electronic device according to an example embodiment; FIG. 3 is a front perspective view of an electronic device according to an example embodiment; FIGs. 4A-4B are front views of a tablet according to an example embodiment; FIG. 5 is a block diagram of a tablet according to an example embodiment; FIG. 6 is a flowchart showing virtual bezel transition logic; and FIG. 7 is a flowchart showing a bezel touch. DETAILED DESCRIPTION In the following detailed description, like numerals and characters may be used to designate identical, corresponding and/or similar components in differing figure drawings. Further, in the detailed description to follow, example sizes/models/values/ ranges may be given although embodiments are not limited to the same. Where specific details are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments may be practiced without these specific details. Embodiments may relate to a tablet type computer system in which a tablet may be separated or detached from a base of the tablet type computer system. The tablet may operate in different modes depending on whether the tablet is physically coupled to the base or not. For example, the tablet may operate in a clamshell mode when the tablet is physically coupled to the base. On the other hand, the tablet may operate in a tablet mode when the tablet is not physically coupled to the base. The tablet may include a display having a touchscreen or a touch-sensitive display. The touch-sensitive display (or touchscreen) may include an active area and an inactive area. The active area may be an area that receives a touch input and a component of the tablet responds to the touch input (such as via the display). The inactive area may be an area of the touch-sensitive display (or touchscreen) that does not respond to a touch input. In other words, even though the touch input is provided to the inactive area, the tablet may not change the display or perform any other action. The tablet may appear as if the touch input (to the inactive area) is not recognized by the tablet. Embodiments may also be applicable to a convertible computing device that may convert between a clamshell mode and a tablet mode. In the convertible computing device, the lid or display may be called a tablet or tablet display (that includes a touch-sensitive display). However, in the convertible computing device the tablet display (or lid) may not detach from the base. Embodiments may relate to controlling an active area (or active display area) of the touch- sensitive display (or touchscreen), such as based on the operational mode of the tablet or based on a user input. 
For example, in the clamshell mode, the touch-sensitive display (or touchscreen) may have a large active area (or large display area) as compared to when in the tablet mode in which the touch-sensitive display (or touchscreen) may have a small active area (or small display area). A size of the inactive area (or virtual bezel) of the touch-sensitive display (or touchscreen) may also change based on the changed size of the active area. This may allow a user to more easily hold the tablet while accidently touching the active area. The inactive area may be called a virtual bezel, which is a bezel area that decreases or increases in size by changing the active display area of the device. A user may hold a tablet in their hands. As such, when using a touch enabled tablet, a large bezel area may be desired so that the user may not block the display area (or active display area) or cause inadvertent touch events while holding the tablet. However, when in the clamshell mode, the bezel may no longer be needed and it may be desirable to have as small a bezel as possible to maximize the active display area. In an electronic device that is operates in both a tablet mode and a clamshell mode (e.g. a detachable tablet or convertible laptop), a virtual bezel may provide an optimal display area depending on how the electronic device is being used. The virtual bezel may be an adjustable color border around the outer edge of the display. By increasing or decreasing the border around the display, the display may appear to change sizes. The change in pixel size may be performed by displays hardware and/or operating system (OS) driver, so the OS may not be affected by the physical change in display area size. Embodiments may provide a computing device that include a touch-sensitive display and display logic at least a portion of which is hardware. This display logic may control a size of the active area of the touch-sensitive display and a size of an inactive area of the touch-sensitive display. This display logic may further set the active area to a first size when the touch-sensitive display is in a first mode. The display logic may still further set the active area to a second size when the touch-sensitive display is in a second mode. Additionally, the first mode may include a first housing of a base and a second housing of the touch-sensitive display in a detached configuration. The second mode may include the first housing and the second housing in a coupled configuration. As discussed hereinafter, various operations may be performed by a touchscreen display, a touch-sensitive display or a tablet. These operations may be performed by display logic at least a portion of which is hardware. FIG. 1 is a front perspective view of an electronic device according to an example embodiment. Other embodiments and configurations may also be provided. The electronic device may be a tablet type computer system in which a tablet may be detachable from other components of the device, such as a base. FIG. 1 shows an electronic device 10 (such as a tablet computer system) that may include a base 20 and a tablet 30. The tablet 30 may be called a detachable tablet. The tablet 30 may include a touch-sensitive display and display logic at least a portion of which is hardware. The tablet 30 may also be called a lid, which may be coupled to or detached from the base 20 via a coupling device. In the convertible computing device, the lid (or tablet display) may not detach from the base. 
The base 20 may include a first housing, a keyboard 60 and other electronic components. The base 20 may include an area 22 of enhanced thickness at a rear portion thereof. The area 22 of enhanced thickness may be configured to house electronics therein. The tablet 30 may include at least a second housing, a display 40, a processor, a graphic driver, a Wi-Fi component, a memory and a battery, for example. Other components may also be provided on or at the tablet 30. FIG. 1 shows the base 20 and the tablet 30 in a clamshell mode in which at least an edge of the second housing of the tablet 30 and an edge of the first housing of the base 20 may be physically coupled together. The tablet 30 may be attached to the base 20 by a coupling device. More specifically, at least an edge of the first housing may be coupled to an edge of the second housing by the coupling device. The coupling device may include a power connector, a data connector, a mechanical connector and/or a docking connector. The coupling device may also be a mechanical hinge. The tablet 30 may therefore be able to obtain power and/or data from the base 20 via the connectors. In at least one embodiment, the coupling device may be provided on the base 20. In at least another embodiment, the coupling device may include connectors on both the tablet 30 and the base 20. The tablet 30 may detach from the base 20. Upon detecting the detachment, the tablet 30 may enter a tablet mode in which the tablet 30 is self-sufficient. The battery of the tablet 30 may supply power to the tablet 30 when the tablet 30 is not electrically connected to a power source (i.e., in the tablet mode). For a convertible computing device, sensors may be provided on the lid (or tablet display) and/or the base, and the sensors may sense the system operational mode (i.e., tablet mode or clamshell mode). This information may allow the system to set the virtual bezel to be on or off. The display 40 may be a liquid crystal display (LCD), for example, provided within an edge 35 of the tablet 30. The display 40 may include a touch-sensitive display or touchscreen having an active area and an inactive area. The active area may also be considered an active display area. The pixels corresponding to the active area may emit light or may be turned on. The inactive area may be considered a virtual bezel in which a single color, such as black or tan, may be displayed. The non-display area may be an area that does not display objects or widgets, for example. The single color of the virtual bezel may help distinguish the inactive area from the active area. In at least one embodiment, pixels corresponding to the inactive area may be black by the pixels being turned off. A size of the active area and a size of the inactive area may depend on an operation mode of the tablet 30, for example. For example, while in a clamshell mode as shown in FIG. 1, a first active area 42 of the touch-sensitive display (of the display 40) may correspond to substantially all of the display 40. The first active area 42 of the touch-sensitive display may extend up to the edge 35 of the tablet 30, although other dimensions of the first active area 42 may be provided. FIG. 1 shows the first active area 42 of the display 40 covering almost all of a front surface of the display 40, and thus an inactive area is not easily seen. The active area is an area of the display 40 that will accept a touch input and the tablet 30 will respond with some action or operation, such as a change of display. 
The inactive area is an area of the display 40 that will not respond with some action or operation based on the touch input directly in the inactive area. FIG. 2 is a front perspective view of an electronic device according to an example embodiment. Other embodiments and configurations may also be provided. More specifically, FIG. 2 shows the tablet 30 separated from the base 20. FIG. 2 also shows a coupling device 25 provided on a housing of the base 20. The coupling device 25 may be a groove that allows an edge of the housing of the lid 30 to be coupled to an edge of the housing of the base 20. The coupling device may also include a power connector, a data connector, a mechanical connector, a docking connector, a docking latch mechanism and/or a hinge. This may allow components in the base to be interfaced to the lid or tablet display. Components of the base 20 may include a first housing, a keyboard, a secondary battery, a direct current (DC) in jack and/or USB connectors on the base. FIG. 2 shows the tablet 30 in the tablet mode in which the housing of the tablet 30 is detached from the housing of the base 20. In the tablet mode, display logic of the tablet 30 may change a size of the active area and/or a size of the inactive area. For example, while in a tablet mode, the display 40 may include a second active area 43 (or display area) and an inactive area 46 (or virtual bezel). The second active area 43 of the display 40 may be smaller than the first active area 42 of the display 40. The inactive area 46 may be provided on the display 40 between the second active area 43 and the edge 35 of the tablet 30. The inactive area 46 may create a virtual bezel that surrounds the second active area 43. FIG. 3 is a front perspective view of an electronic device according to an example embodiment. Other embodiments and configurations may also be provided. More specifically, FIG. 3 shows the tablet 30 separated from the base 20. FIG. 3 shows the tablet 30 in a tablet mode in which the tablet 30 is detached from the base 20. In this embodiment, the display logic may display a third active area 44 (or display area) without an inactive area or with a small inactive bezel. The third active area 44 of the display 40 may be the same as or smaller than the first active area 42 of the display 40, and the third active area 44 may be larger than the second active area 43. In another embodiment, an inactive area may be provided between the second active area 44 and the edge 35 of the tablet 30. Display logic may also display a dynamic button (or virtual key) on the touch-sensitive display in order to perform an operation. The button may correspond to a specific area of the touch-sensitive display that may recognize a touch input to the button. The button may correspond to a specific function such as changing an orientation of the display (such as from portrait to landscape), changing a size of the active area, and/or other functions of the display. The button may appear to be displayed in the virtual bezel, although it is an active component of the touch-sensitive display. The active button may be surrounded by the inactive area. FIGs. 4A-4B are front perspective views of an electronic device according to an example embodiment. Other embodiments and configurations may also be provided. More specifically, FIG. 4A shows the display logic of the tablet 30 (in a tablet mode) provides the display 40 in a portrait view. The portrait view may be a view of an image having a height that is greater than a width. FIG. 
4B shows the display logic of the tablet 30 (in the tablet mode) provides the display 40 in a landscape view. The landscape view may be a view having a width that is greater than a height. The view may be based on an orientation of the tablet and/or based on a touch input to a button (or virtual button) displayed on the display 40. FIG. 4A shows the display logic displays a button 52 (or key) provided at a specific area of the display 40. For example, the button 52 may be provided in a lower center area of the display 40 when the tablet 30 is displaying an image in the portrait view. The button 52 may be to perform any of a number of operations of the tablet 30 based on a touch input. The button 52 may appear to be in the inactive area 46; however, since an operation may occur based on a touch input to the button 52, the button 52 may be considered as being in an active area of the touchscreen. FIG. 4B shows the display logic displays a button 54 (or key) provided at a specific area of the display 40. For example, the button 54 may be provided in a lower center area of the display 40 when the tablet 30 is displaying an image in the landscape view. The button 54 may be to perform any of a number of operations of the tablet 30 based on a touch input. The operations may be performed by the display logic. The button 54 may appear to be in the inactive area 46; however, since an operation may occur based on a touch input to the button 54, the button 54 may be considered as being displayed in an active area of the touchscreen. The computing device may include sensors to determine if the display is in portrait or landscape mode. The buttons 52, 54 may become active depending on an orientation of the display. This may be performed by display logic. Further, backlights under the buttons 52, 54 may be illuminated accordingly to indicate to the user when the corresponding button is active and can support a user input. In at least one embodiment, display logic may center justify the active area with a bottom of the display. In at least one embodiment, display logic may cause the touch-sensitive display to change the size of the active area based on an input from the group consisting of a keypad, a touchpad, a key and a touch point nub. FIG. 5 illustrates a block diagram of a tablet according to an embodiment. Other embodiments and configurations may also be provided. Display logic may include components with the tablet shown in FIG. 5. The tablet 30 may include a processor 102 coupled to a bus 104. While the tablet 30 is shown with a single processor 102, the tablet 30 may have multiple processors or the processor 102 may have multiple cores. A memory 106 may also be coupled to the bus 104 and may store data and sequences of instructions that are executed by the processor 102 or any other device included in the tablet 30. The memory 106 may include random access memory (RAM), read only memory (ROM), and/or other type of memory. A data storage device 108 may also be coupled to the bus 104 to store information and instructions. The data storage device 108 may include a magnetic disk (e.g., a hard disk), an optical disc (e.g., a CD-ROM) and/or a digital versatile disc (DVD), etc. The tablet 30 may further include a chipset 110. The chipset 110 may include a graphics controller and an input/output (I/O) controller. The graphics controller may manage information to be displayed on the display 40 of the tablet 30. The I/O controller may manage I/O devices (e.g., game controller, mouse, etc.) 
that may be connected to the tablet 30. An antenna 112 and/or a network interface may also be connected to the bus 104 to provide via wireless and/or wireless connections, respectively, access to a network, such as a personal area network, local area network and/or wide area network. Instructions executed by the processor 102 may be provided to the memory 106 from a machine-accessible medium, or an external storage device accessible via a remote connection (e.g., over a network via the antenna 112 and/or the network interface 115) providing access to one or more electronically-accessible media, etc. A machine-accessible medium may include any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine (e.g., a computer or a processor). For example, a machine-accessible medium may include RAM, ROM, magnetic or optical storage medium, flash memory devices, electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals); etc. In an alternative embodiment, hard-wired circuitry may be used in place of or in combination with the instructions, and thus embodiments are not limited to any specific combination of hardware circuitry and software instructions. The tablet 30 may include a sensor(s) 116, such as an accelerometer, a proximity sensor and/or a depth sensor. The sensor(s) 116 may sense a touch input to the display 40 or a proximate touch to the display 40. The sensor(s) may also sense orientation of the display 40 and activate the buttons 52 or 54 and associated backlights for the buttons 52, 54. The tablet 30 may also include a battery 118 and/or a power supply to power the components of the tablet 30. The memory 106 may store data, such as programs, application, software, etc. The memory 106 may store an algorithm or instructions to control the display 40. For example, the memory 106 may store an algorithm or instructions to control a size of the active area and a size of the inactive area. Various functions of embodiments as described herein may be implemented using one or more of these hardware systems, and/or instructions or routines that may be executed by one or more executions units, such as the processor 102, within the hardware systems. Machine executable instructions may be stored using any machine readable storage medium including internal memory, such as the memory 106, as well as various external and internal memories, such as a hard drive, diskette, CD-ROM, magnetic tape, digital video or versatile disk (DVD), laser disk, Flash memory, a server on a network, etc. In an alternative embodiment, various functions of embodiments may be implemented in discrete hardware or firmware (or hardware logic). For example, one or more application specific integrated circuits (ASICs) may be programmed with one or more of the above described functions. In another example, one or more functions may be implemented in one or more ASICs on additional circuit boards and the circuit boards may be inserted into the tablet 30 described above. In another example, one or more programmable gate arrays (PGAs) may be used to implement one or more functions. In yet another example, a combination of hardware and software may be used to implement one or more functions of embodiments. Embodiments may provide a tablet in which a size of a virtual bezel may change based on display logic at least a portion of which is hardware. 
In at least one embodiment, when the tablet 30 enters the tablet mode (as shown in FIG. 2), then the display logic may decrease a size of the active area of the display 40 as compared to the size of the active area of the display 40 in the clamshell mode (as shown in FIG. 1). Embodiments may allow a user to control a size of the active area and the inactive area by changing a setting of the tablet 30. The settings may also be pre-set to the tablet 30. As one example, when the housing of the tablet 30 is physically connected to the housing of the base 20, then the active area (or the active display area) may extend to the edge 35 of the tablet 30, and thereby the inactive area (or the virtual bezel) may disappear or be minimized. This may allow use of a maximum size of the display 40. When the user removes the tablet 30 from the base 20 (or when the tablet 30 enters the tablet mode), then the inactive area (or the virtual bezel) may appear and the active area (or the active display area) may be reduced to allow the user to hold the tablet 30 at the inactive area (i.e., the virtual bezel) without an accidental touch input. Display logic may provide that the inactive area (or the virtual bezel) to disappear automatically during specific applications. For example, if a user desires to watch a video on the tablet 30 while in the tablet mode, then the display logic may expand the active area (or the active display area), such as shown in FIG. 3. The sensor 116 (or the sensors), such as an accelerometer, may detect when a user is holding the tablet 30. Upon sensing a holding of the tablet 30, the controller 104 of the tablet 30 may switch the display 40 to a different mode or view. Additionally, a proximity sensor or a depth sensor may detect when a user's hand is approaching the tablet 30. Based on the sensed detection, the controller 104 (or display logic) may automatically provide the inactive area (or the virtual bezel) to allow the user to adjust a tablet angle relative to the base 20 or to remove the tablet 30 from the base 20, for example. In at least one embodiment, the active area (or active display area) of the display 40 may shrink for the tablet mode (i.e., larger bezel), and touch over the display pixels that make up the virtual bezel (or inactive area) may not be sent to the operating system (OS). This may be performed by display logic. Touches over the smaller display area or active area may be remapped to new touch coordinates since the active display area has changed underneath the touch screen. The change in size of the virtual bezel may be triggered physically (i.e., when the tablet 30 is removed from the base or the dock or the convertible changes to the tablet mode) or based on sensors (certain rotations of the device or when the tablet 30 is grabbed a certain way). This trigger may involve the user grabbing the entire display 40 in order to manipulate it, and a touch may be momentarily disabled on the entire device while changing bezel sizes to avoid inadvertent touch events. Touch may be either re-enabled when the transition is complete, mechanically (tablet is re-attached) or by the user touching a specific area on the device. FIG. 6 is a flowchart showing virtual bezel transition logic according to an example embodiment. Other embodiments, operations and other orders of operations may also be provided. More specifically, in operation 202, a virtual bezel mode change may be triggered. 
In operation 205, a determination may be made regarding whether the tablet display (or touch- sensitive display) should go to a tablet mode. If the determination is NO in operation 204, then in operation 205, the active display area may be enlarged and the virtual bezel area may be decreased. Operations may then proceed to operation 202. If the determination is YES in operation 204, then in operation 206, the active display area may be decreased and the virtual bezel area may be enlarged. In operation 208, the touch device may be disabled. Operation 210, the tablet display (or touch-sensitive display) may wait for a touch trigger to be enabled. In operation 212, the touch may be reactivated. Operations may then proceed to operation 202. When the virtual bezel (or inactive area) increases during the tablet mode (and the active display area decreases in size), embodiments may take advantage of that dead display area and the display logic may display custom buttons that are sensitive but still displayable surrounded by the inactive area. Placement, look and behavior of the dynamic button(s) may be provided anywhere in the virtual bezel (or the inactive area). The buttons may replace keyboard keys or may be used for macros commands such as an airplane mode to disable all wireless functions. The buttons may permit common system actions (such as volume control, display brightness, etc.) to always be accessible to the user independent of the normal display area controlled by the operating system. Buttons may be activated (such as by display logic) to serve as shortcuts without consuming space in the application's client window area. When the device is rotated (which normally changes the display orientation), the display logic may automatically move the buttons to new areas in the virtual bezel. The buttons may be hidden or moved based on user preferences. Touches on the outside of the touch sensitive area must be processed based on the virtual bezel state and placement of dynamic buttons. FIG. 7 shows a logic flow for a bezel touch according to an example embodiment. Other embodiments, operations and orders of operations may also be provided. A touch in the virtual bezel area may be a regular touch on the display (clamshell or small virtual bezel mode), a touch in the bezel that is ignored (tablet or large bezel mode) or a dynamic button touch (tablet bezel mode). In operation 302, a touch may be received. In operation 304, a determination may be made by display logic regarding whether the touch is in the virtual bezel area. If the determination is NO in operation 304, then in operation 312, then the operating system may map the event as a touch event. In operation 314, the operating system may process the touch event. Operations may then proceed to operation 302. If the determination is YES in operation 304, then in operation 306 a determination may be made by display logic whether the device is in a tablet mode. If the determination is NO in operation 306, then operations may proceed to operation 312. If the determination is YES in operation 306, then in operation 308, a determination may be made by display logic regarding whether a touch is to a dynamic button area. IF the determination is NO in operation 308, then operations may proceed to operation 302. If the determination is YES in operation 310, then the operating system may be notified of an application button input. 
Any reference in this specification to "one embodiment," "an embodiment," "example embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to affect such feature, structure, or characteristic in connection with other ones of the embodiments. Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art. |
Methods, apparatuses, and non-transitory machine-readable media for presentation of digital images from user drawings are described. Apparatuses can include a display, a memory device, and a controller. In an example, a method can include the controller receiving data representing a user drawing, identifying a feature of the user drawing based on the data, and comparing the feature of the user drawing to features of a plurality of digitized images. In another example, a particular digitized image can be displayed based on the comparison of the feature with the features of the plurality of digitized images. |
What is claimed is:1. A method, comprising: receiving, by a controller, data representing a user drawing; identifying a feature of the user drawing based on the data; comparing the feature of the user drawing to features of a plurality of digitized images; and displaying a particular digitized image of the plurality of digitized images based on the comparison of the feature with the features of the plurality of digitized images and a context of the user drawing.2. The method of claim 1, wherein receiving the data representing the user drawing includes receiving an input made using a touchscreen display.3. The method of claim 1, wherein receiving the data representing the user drawing includes receiving an image of the user drawing.4. The method of claim 1, wherein receiving the data representing the user drawing includes receiving the data from a computing device.5. The method of claim 1, wherein identifying the feature of the user drawing includes identifying a line segment.6. The method of claim 1, wherein identifying the feature of the user drawing includes identifying a shape.7. The method of any one of claims 1-6, wherein the plurality of digitized images is stored in an image library, and wherein the method includes selecting the image library from among a plurality of different image libraries.8. The method of any one of claims 1-6, wherein the method includes displaying the particular digitized image responsive to a determination that the feature of the user drawing and a digitized feature of the particular digitized image exceed a similarity threshold.9. The method of any one of claims 1 -6, wherein receiving the data representing the user drawing includes receiving a plurality of inputs made using a touchscreen display; and wherein the method includes: identifying the feature of the user drawing based on the data while the plurality of inputs is being received; comparing the feature of the drawing to the features of the plurality of digitized images while the plurality of inputs is being received; and displaying the particular digitized image of the plurality of digitized images based on the comparison of the feature with the features of the plurality of digitized images while the plurality of inputs is being received.10. The method of any one of claims 1-6, wherein receiving the data representing the user drawing includes receiving a plurality of inputs made using a touchscreen display; and wherein the method includes: identifying the feature of the user drawing based on the data after the plurality of inputs is received; comparing the feature of the drawing to the features of the plurality of digitized images after the plurality of inputs is received; and displaying the particular digitized image of the plurality of digitized images based on the comparison of the feature with the features of the plurality of digitized images after the plurality of inputs is received.11. A non-transitory machine-readable medium comprising a processing resource in communication with a memory resource having instructions, which when executed by the processing resource, cause the processing resource to: receive data representing a user drawing; identify a feature of the user drawing based on the data; compare the feature of the user drawing to features of a plurality of digitized images of a particular image library; and
provide a particular digitized image of the plurality of digitized images via a display based on the comparison of the feature with the features of the plurality of digitized images.12. The medium of claim 11, including instructions to provide a plurality of image libraries, each including a different plurality of digitized images.13. The medium of claim 12, including instructions to receive a user selection of the particular image library from among the plurality of image libraries.14. The medium of claim 12, including instructions to select the particular image library from among the plurality of image libraries based on a location context.15. The medium of claim 12, including instructions to select the particular image library from among the plurality of image libraries based on a time context.16. The medium of claim 12, including instructions to select the particular image library from among the plurality of image libraries based on a user occupation context.17. The medium of claim 12, including instructions to select the particular image library from among the plurality of image libraries based on a historical user drawing.18. An apparatus, comprising: a display; a memory device; a controller coupled to the memory device configured to: receive data representing a user drawing; identify a feature of the user drawing based on the data; and17
compare the feature of the user drawing to features of a plurality of digitized images; and wherein the display is configured to display a particular digitized image of the plurality of digitized images based on the comparison by the controller of the feature with the features of the plurality of digitized images.19. The apparatus of claim 18, wherein the apparatus is a mobile device, wherein the display is a touchscreen display, and wherein the controller is configured to receive the data representing the user drawing via the touchscreen display.20. The apparatus of claim 18, wherein the apparatus includes a camera, and wherein the controller is configured to receive the data representing the user drawing via an image of the drawing captured by the camera.18 |
Presentation of Digitized Images From User DrawingsTechnical Field[0001] The present disclosure relates to apparatuses, non-transitory machine-readable media, and methods for presentation of digitized images from user drawings.Background[0002] A computing device is a mechanical or electrical device that transmits or modifies energy to perform or assist in the performance of human tasks. Examples include thin clients, personal computers, printing devices, laptops, mobile devices (e.g., e-readers, tablets, smartphones, etc.), intemet-of- things (loT) enabled devices, and gaming consoles, among others. An loT enabled device can refer to a device embedded with electronics, software, sensors, actuators, and/or network connectivity which enable such devices to connect to a network and/or exchange data. Examples of loT enabled devices include mobile phones, smartphones, tablets, phablets, computing devices, implantable devices, vehicles, home appliances, smart home devices, monitoring devices, wearable devices, devices enabling intelligent shopping systems, among other cyber-physical systems.[0003] A computing device can include an imaging device (e.g., a camera) used to capture images. A computing device can include a display used to view images. The display can be a touchscreen display that serves as an input device. When a touchscreen display is touched by a finger, digital pen (e.g., stylus), or other input mechanism, associated data can be received by the computing device. The touchscreen display may include pictures and/or words, among others that a user can touch to interact with the device.Brief Description of the Drawings[0004] Figure 1 is a functional block diagram in the form of a computing system including an apparatus having a display, an imaging device, a memory device, and a controller in accordance with a number of embodiments of the present disclosure.
[0005] Figure 2 is a diagram representing an example of a user drawing on a display of a computing device in accordance with a number of embodiments of the present disclosure.[0006] Figure 3 is a diagram representing an example of a digitized image displayed on the display of the computing device based on the comparison of the features of the user drawing of Figure 2 with the features of the plurality of digitized images in accordance with a number of embodiments of the present disclosure.[0007] Figure 4 is a functional diagram representing a processing resource in communication with a memory resource having instructions stored thereon for presentation of digitized images from user drawings in accordance with a number of embodiments of the present disclosure.[0008] Figure 5 is a flow diagram representing an example method for presentation of digitized images from user drawings in accordance with a number of embodiments of the present disclosure.Detailed Description[0009] Apparatuses, machine-readable media, and methods related to presentation of digitized images from user drawings are described. Where referred to herein, a “user drawing” or simply a “drawing” is a non-digitized drawing made by a human. Stated differently, a drawing may refer to a “handdrawn,” rather than a “computer-drawn,” drawing. Traditionally, drawings can be made with one or more drawing tools (e.g., pencils, pens, markers, chalk, etc.) on one or more surfaces (e.g., paper, chalkboards, whiteboards, etc.). More recently, user drawings can be made with a digital pen (e.g., a stylus) or a finger on a touchscreen display.[0010] In many scenarios, users draw an informal drawing and desire that drawing to be digitized. As referred to herein, a digitized image is an image that is computer-rendered and/or computer-readable. Embodiments of the present disclosure can receive a drawing (e.g., data representing a drawing) and present a digitized image of that drawing.[0011] In one example, a team is in a meeting room working on a process associated with a new product launch. During that meeting, a block diagram is drawn on a lightboard, whiteboard, or poster board that depicts the
steps of the process. If the team wants to present its management with the agreed-upon process, it may desire to formalize the diagram to increase its clarity and/or readability. In another example, a teacher putting together a trigonometry quiz may draw a triangle having sides that are not quite straight or a circle that is not quite circular.[0012] In either case, and in many other cases, converting these drawings to formal, digitized images using previous approaches may require labor, time, or knowledge. Historically, a set of traditional drafting tools (e.g., straightedge, compass, ruler, and/or templates) would be employed to formalize drawings. However, digitization of drawings may involve the use of sophisticated software having cost and time barriers for users. In some approaches, users can send drawings out to a service or draftsperson to be digitized (at a cost). Some would-be drawers may attempt to consult one or more software image libraries (e.g., Clip art), only to be frustrated by endless browsing, difficulty searching, or a lack of the specific image(s) they seek.[0013] Embodiments of the present disclosure can take a hand-drawn drawing, identify its features, and present a digitized image based on a comparison of those features with features of digitized images in an image library. In some embodiments, a picture of the drawing can be captured by an imaging device (e.g., a camera). In some embodiments, the drawing can be made using a touchscreen display. In some embodiments, the drawing can be received from a separate device (e.g., via message, email, etc.). The features identified can include, for example, line segments, arcs, single-point inputs (e.g., dots), and/or shapes. When directly input into a computing device, as in cases of a touchscreen display, a digitized image can be provided even before the user is finished drawing. Accordingly, frustrations stemming from the effort, time, and expertise associated with previous approaches can be reduced.[0014] In some embodiments, the image library from which the digitized images are chosen for comparison can be one of a plurality of available image libraries. That is, embodiments herein can reduce time and provide better results by focusing the comparison on a subset of what may be a large amount of digitized images. The particular library used for comparison with the drawing can be selected based on one or more criteria. In some embodiments, the user can select the particular library (e.g., from a list). In some embodiments, the
particular library can be selected without specific user input (e.g., automatically). Such selection can be made based on the context of the drawing.[0015] Context, in accordance with the present disclosure, is a set of circumstances relevant to the determination of what features a drawing may depict. Contexts include, for example, location context, time context, occupation context, and user context. For example, a person who works as an optical engineer may be expected to draw certain kinds of drawings on their smartphone at their office during weekdays. These could include, for instance, block diagrams and/or optical diagrams. That same smartphone, however, may be expected to have different kinds of drawings drawn on it during the evening at home by their 5-year-old child. These could include, for instance, triceratopses, people, and/or horses. Embodiments herein can take these different contextual factors into account when selecting an image library for comparison. As a result, the digitized image(s) compared can be better suited to what may actually be drawn.[0016] Some embodiments of the present disclosure include a method comprising receiving, by a controller, data representing a user drawing, identifying a feature of the user drawing based on the data, comparing the feature of the user drawing to features of a plurality of digitized images, and displaying a particular digitized image of the plurality of digitized images based on the comparison of the feature with the features of the plurality of digitized images and a context of the user drawing.[0017] Some embodiments of the present disclosure include an apparatus comprising a display, a memory device, and a controller coupled to the memory device configured to receive data representing a user drawing, identify a feature of the user drawing based on the data, and compare the feature of the user drawing to features of a plurality of digitized images. The display can be configured to display a particular digitized image of the plurality of digitized images based on the comparison by the controller of the feature with the features of the plurality of digitized images. As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example.
[0018] Yet other embodiments of the present disclosure can include a non-transitory machine-readable medium comprising a processing resource in communication with a memory resource having instructions, which when executed by the processing resource, cause the processing resource to receive data representing a user drawing, identify a feature of the user drawing based on the data, compare the feature of the user drawing to features of a plurality of digitized images of a particular image library, and provide a particular digitized image of the plurality of digitized images via a display based on the comparison of the feature with the features of the plurality of digitized images.[0019] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure can be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments can be utilized and that process, electrical, and structural changes can be made without departing from the scope of the present disclosure.[0020] As used herein, designators such as “N,” etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designation can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more” (e.g., a number of memory devices) can refer to one or more memory devices, whereas a “plurality of’ is intended to refer to more than one of such things. Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e. , having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled,” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms “data” and “data values” are used
interchangeably herein and can have the same meaning, as appropriate to the context.[0021] The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures can be identified by the use of similar digits. For example, 222 can reference element “22” in Figure 2, and a similar element can be referenced as 322 in Figure 3. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.[0022] Figure 1 is a functional block diagram in the form of a computing system including an apparatus 100 having a display 102, an imaging device 104, a memory device 106, and a controller 108 (e.g., a processor, control circuitry, hardware, firmware, and/or software) in accordance with a number of embodiments of the present disclosure. The memory device 106, in some embodiments, can include a non-transitory machine-readable medium (MRM), and/or can be analogous to the memory resource 452 described with respect to Figure 4. The apparatus 100 can be a computing device; for instance, the display 102 may be a touchscreen display of a mobile device such as a smartphone. The controller 108 can be communicatively coupled to the memory device 106 and/or the display 102. As used herein, “communicatively coupled” can include coupled via various wired and/or wireless connections between devices such that data can be transferred in various directions between the devices. The coupling need not be a direct connection, and in some examples, can be an indirect connection. The imaging device 104 can be a camera, for instance, such as one known to those skilled in the art.[0023] The memory device 106 can include non-volatile or volatile memory. For example, non-volatile memory can provide persistent data by retaining stored data when not powered, and non-volatile memory types can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable
ROM (EPROM), and Storage Class Memory (SCM) that can include resistance variable memory, such as phase change random access memory (PCRAM), three-dimensional cross-point memory (e.g., 3D XPoint™), resistive random access memory (RRAM), ferroelectric random access memory (FeRAM), magnetoresistive random access memory (MRAM), and programmable conductive memory, among other types of memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random-access memory (DRAM), and static random access memory (SRAM), among others.[0024] The controller 108 can receive data representing a user drawing. In some embodiments, the data can be received as data from an imaging device. An image (e.g., a picture) can be captured using the imaging device 104. For example, a user can capture a picture of a drawing drawn on a tangible medium (e.g., a whiteboard) using the imaging device 104. The picture can be received by the controller 108. In some embodiments the data can be received as one or more inputs drawn on the display 102 with a finger or stylus, for instance. For example, one or more line strokes made by a trace of a user’s finger across the display 102 can be received as data representing a user drawing. Each line stroke can represent a trace or a portion of a trace of a moving input point used to create the drawing. In some embodiments, the drawing can be made without the use of a tangible medium. For instance, a user can wear an augmented reality (AR) device or virtual reality (VR) device, such as a headset, and embodiments herein can track movements of a hand or stylus using the AR or VR device.[0025] The controller 108 can identify a feature of the drawing based on the data. A feature can include a line segment, an arc, a single-point input (e.g., dot), and/or shapes (e.g., triangles, rectangles, etc.). The identification of a feature can be accomplished by various techniques. For example, in some embodiments the controller 108 can identify line segments from a line stroke by decomposing the line stroke into various building blocks (e.g., straight lines and curved lines). In some embodiments, the controller 108 can utilizes a Hough transform to identify features. For example, the controller 108 can analyze the input(s) to identify and extract drawing features by use of a Hough transform algorithm. In some embodiments, the controller 108 can use the Hough
transform, or another image feature identification and extraction technique, to generate a vector graphics representing the drawing. A vector graphics representation is a representing a drawing that characterizes the drawing in terms of its constituent elements (e.g., primitive geometrical elements, such as points, lines and curves). Thus, for example, the vector graphics representation represents the drawing as a mathematical expression based on the drawing's line segments.[0026] The controller 108 can compare the feature of the user drawing to features of a plurality of digitized images. A feature of one of the plurality of digitized images represents a portion or all of a corresponding digitized image. In some embodiments, the plurality of digitized images can be images of a particular image library selected from among a plurality of image libraries having different digitized images therein.[0027] The controller 108 can compare the features to determine a similarity or dissimilarly measure between the sets of features. The controller 108 can then use this measure of similarity or dissimilarly of features to determine if the corresponding drawings are similar or not. In some embodiments, the features of the user drawing can be compared with features of the plurality of digitized images to determine whether the similarity measure exceeds a similarity threshold. The similarity measure can be determined in a variety of ways. In some embodiments, the similarity measure can be determined by cosine similarity measures of feature vectors that describe the line segments, by spatial distance measurements, or any other suitable process that can determine a similarity between two sets of features.[0028] The controller 108 can cause the display 102 to display a particular digitized image of the plurality of digitized images based on the comparison of the feature with the features of the plurality of digitized images. In some embodiments, for instance, the displayed digitized image can be a digitized image having a threshold-exceeding similarity to the drawing. In some embodiments, the controller 108 can score the similarity between the drawing and one or more digitized images and identify one or more digitized images as candidate images based on the similarity score(s) for the digitized image(s) meeting a threshold value. The threshold value can be a predefined value or can be a sliding value based on the similarity scores of the digitized images. For
example, the threshold value may be set to identify the top three most similar digitized images as candidate images.[0029] Figure 2 is a diagram representing an example of a user drawing 224 on a display 222 of a computing device 220 in accordance with a number of embodiments of the present disclosure. Computing device 220, for instance, may be a smartphone with a touchscreen display 222. A user may draw the drawing 224 with his or her finger or a digital pen, for example. The particular shape of the drawing 224 is not limited to that illustrated in Figure 2.[0030] As described herein, embodiments of the present disclosure can identify a feature of a user drawing. A plurality of features is identified in the drawing 224. Features identified in the drawing 224 include a plurality of shapes and a plurality of line segments. For instance, identified in the drawing 224 is a drawn “A” rectangle 226, a drawn “B” diamond 228, a drawn “C” rectangle 230, a drawn “D” rectangle 232, a first drawn line segment 234, a second drawn line segment 236, and a third drawn line segment 238. As described herein, the features of the drawing 224 can be compared to features of a plurality of digitized images. In some embodiments, a single digitized image includes features similar to all the features of the user drawing. In some embodiments, a single digitized image includes fewer than all the features of the user drawing. In such cases, multiple digitized images can be combined such that the resulting combination is sufficiently similar to the user drawing.[0031] Figure 3 is a diagram representing an example of a digitized image 340 displayed on the display 322 of the computing device 320 based on the comparison of the features of the user drawing 224 of Figure 2 with the features of the plurality of digitized images in accordance with a number of embodiments of the present disclosure. A plurality of features is seen in the digitized image 340, including a plurality of shapes and a plurality of line segments. For instance, the digitized image 340 includes an “A” rectangle 326, a “B” diamond 328, a “C” rectangle 330, a “D” rectangle 332, a first line segment 334, a second line segment 336, and a third line segment 338. As seen with reference to Figures 2 and 3, each of the features in the digitized image 340 corresponds to a respective feature in the drawing 224. The drawn “A” rectangle 226 corresponds to the “A” rectangle 326. The drawn “B” diamond 228 corresponds to the “B” rectangle 328. The drawn “C” rectangle 230 corresponds
to the “C” rectangle 330. The drawn “D” rectangle 232 corresponds to the “D” rectangle 332. The first drawn line segment 234 corresponds to the first line segment 334. The second drawn line segment 236 corresponds to the second line segment 336. The third drawn line segment 238 corresponds to the third line segment 338. The digitized image 340, when compared with the drawing 224, has straight lines, sharp comers, and an increased professionalism. [0032] Figure 4 is a functional diagram representing a processing resource 458 in communication with a memory resource 452 having instructions 454, 456, 458, 460 stored thereon for presentation of digitized images from user drawings in accordance with a number of embodiments of the present disclosure. The memory resource 452, in some embodiments, can be analogous to the memory device 106 described with respect to Figure 1. The processing resource 458, in some examples, can be analogous to the controller 108 described with respect to Figure 1.[0033] A system 450 can be a server or a computing device (among others) and can include the processing resource 458. The system 450 can further include the memory resource 452 (e.g., a non-transitory MRM), on which may be stored instructions, such as instructions 454 and 456. Although the following descriptions refer to a processing resource and a memory resource, the descriptions may also apply to a system with multiple processing resources and multiple memory resources. In such examples, the instructions may be distributed (e.g., stored) across multiple memory resources and the instructions may be distributed (e.g., executed by) across multiple processing resources.[0034] The memory resource 452 may be electronic, magnetic, optical, or other physical storage device that stores executable instructions. Thus, the memory resource 452 may be, for example, a non-transitory MRM comprising Random Access Memory (RAM), an Electrically-Erasable Programmable ROM (EEPROM), a storage drive, an optical disc, and the like. The memory resource 452 may be disposed within a controller and/or computing device. In this example, the executable instructions 454, 456, 458, 460 can be “installed” on the device. Additionally or alternatively, the memory resource 452 can be a portable, external or remote storage medium, for example, that allows the system 450 to download the instructions 454, 456, 458, 460 from the portable/extemal/remote storage medium. In this situation, the executable instructions may be part of an
“installation package”. As described herein, the memory resource 452 can be encoded with executable instructions for presentation of digitized images from user drawings.[0035] The instructions 454, when executed by a processing resource such as the processing resource 458, can include instructions to receive data representing a user drawing. The user drawing represented by the data is a nondigitized drawing made by a human. The data can be received from an input made into a touchscreen display, for instance, or from an imaging device, though it is noted that embodiments herein are not so limited.[0036] The instructions 456, when executed by a processing resource such as the processing resource 458, can include instructions to identify a feature of the user drawing based on the data. Features, as described herein, include line segments, arcs, points, shapes, etc. Identification of such features in the user drawing can be accomplished in various manners, such as those described previously herein.[0037] The instructions 458, when executed by a processing resource such as the processing resource 458, can include instructions to compare the feature of the user drawing to features of a plurality of digitized images of a particular image library. In some embodiments, a particular image library can be consulted responsive to a user selection of that image library (e.g., from a list of image libraries). The names of the image libraries can be displayed in a user- configurable manner. In some embodiments, one or more sample images from the image library are displayed to convey the type, style, or content of the image libraries.[0038] In some embodiments, the particular image library can be selected without user input (e.g., automatically). The selection of the particular image library can be made based on historical image libraries used by the user, for instance. In some embodiments a list of previously-used image libraries can be made available on the display for selection. As previously discussed, the particular image library can be selected based on context. Different image libraries may have varying degrees of relevance depending on where the drawing is being drawn (e.g., location context), when the drawing is being drawn (e.g., time context), what the user does (e.g., occupation context), and who the user is (e.g., user context).
[0039] Factors bearing on location context are factors concerning where the computing device (and, by extension, the user) is. Such factors include a country, state, or town where the computing device is located. Additionally, such factors can include whether the user is indoors or outdoors. A user outdoors may be expected to be drawing building exteriors or nature subjects more so than one indoors. Additional factors can include whether the user is home or away from home, or at work or away from work. Factors bearing on time context are factors concerning the temporal aspects of a drawing. Such factors include time of day, day of week, and season, for instance. For example, a drawing made during a workday may be more likely to include diagrammatical features than one being drawn at 8:00 pm on a Saturday. Further, a particular library selected for comparison with a drawing made in the winter would be more likely to include a snowman than one selected for a drawing made in the summer. Factors bearing on occupation context are factors concerning what the user does. Such factors can include a type of industry in which the user works (e.g., business, construction, manufacturing, transportation, food service, etc.), a type of occupation engaged in by the user (e.g., salesperson, laborer, driver, engineer, etc.), whether the user is a manager or a lower-level employee, what tools with which the user typically works, and others. Factors bearing on user context are factors concerning who the user is. The identity of the user drawing the drawing can be determined through a user login, through biometric recognition, or by other manners. In some embodiments, a user may enter or select these factors to provide embodiments herein with increased context regarding the types of drawings they may draw. User context factors can include the age of the user, the gender of the user, activities performed by the user (e.g., hobbies, sports, clubs, etc.), family members of the user, and interests of the user, for instance.[0040] Multiple contexts can be considered simultaneously. For example, a drawing made on the Fourth of July by a first user in Indonesia may not carry the same contextual weight as one being made on the Fourth of July by second user in America. The particular library selected for the second user would be more likely to include patriotic American digitized images than would the particular library selected for the first user. Further, images commonly
associated with a particular religious holiday may not be of particular relevance to a user drawing something on that day who does not celebrate the holiday. [0041] The instructions 460, when executed by a processing resource such as the processing resource 458, can include instructions to provide a particular digitized image of the plurality of digitized images via a display based on the comparison of the feature with the features of the plurality of digitized images. In some embodiments, the particular digitized image can be provided (e.g., suggested) before the user is finished drawing. In some embodiments, the user may prefer to finish a drawing before being provided with the digitized image and can deactivate this feature.[0042] Figure 5 is a flow diagram representing an example method 562 for presentation of digitized images from user drawings in accordance with a number of embodiments of the present disclosure. At 564, the method 562 includes receiving, by a controller, data representing a user drawing. The data can be received in different forms. For example, the data can be in the form of image data (e.g., from an image sensor or imaging device), the data can be in the form of input data from a touchscreen display. In some embodiments, the data can be input into a separate device, such as a different touchscreen display of a separate device.[0043] At 566, the method 562 includes identifying a feature of the user drawing based on the data. Feature identification can be accomplished by various techniques, such as those described herein. For example, features can be identified using a Hough transform. Features identified can include constituent elements, such as points, lines, and/or curves, among other elements.[0044] At 568, the method 562 includes comparing the feature of the user drawing to features of a plurality of digitized images, and at 570, the method 562 includes displaying a particular digitized image of the plurality of digitized images based on the comparison of the feature with the features of the plurality of digitized images and a context of the user drawing. In some embodiments, digitized images having features with a threshold-exceeding similarity to those of the drawing can be presented (e.g., displayed). In some embodiments, suggestions regarding the particular digitized image can be made while the drawing is still being drawn. The user can select or otherwise indicate approval of the suggestion of the particular digitized image.
[0045] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0046] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Complementary metal oxide semiconductor metal gate transistors may be formed by depositing a metal layer in trenches formerly inhabited by patterned gate structures. The patterned gate structures may have been formed of polysilicon in one embodiment. The metal layer may have a workfunction most suitable for forming one type of transistor, but is used to form both the n and p-type transistors. The workfunction of the metal layer may be converted, for example, by ion implantation to make it more suitable for use in forming transistors of the opposite type. |
1. A method comprising: depositing a metal layer having first and second portions/ forming a transistor of a first type using the first portion of said metal layer; modifying the workfunction of the second portion of said metal layer; and forming a transistor of a second type using said second portion of said metal layer. 2. The method of claim 1 including forming a sacrificial layer on a semiconductor substrate and patterning said sacrificial layer to form gate structures for n-type and p-type transistors. 3. The method of claim 2 including covering said gate structures with an insulator material. 4. The method of claim 3 including removing said gate structures. 5. The method of claim 4 including forming trenches in said insulator material by removing said gate structures and depositing said metal layer over said insulator material and in said trenches. 6. The method of claim 5 including filling one of said trenches with a metal to form said transistor of a first type. 7. The method of claim [beta] including ion implanting the second portion of said exposed metal layer to alter the workfunction of said metal layer. 8. The method of claim 7 including implanting said metal layer to increase the workfunction of said metal layer. 9. The method of claim 8 including depositing metal over the implanted metal layer into the other of said trenches to form a transistor of a second type. 10. The method of claim 1 including forming said transistor of a first type as an n-type transistor. 11. A semiconductor structure comprising: a transistor of a first type including a gate electrode; ' a transistor of a second type including a gate electrode, said transistor of a second type including a gate dielectric, a first metal over said gate dielectric, and a second metal over said first metal, said first metal having an altered workfunction. 12. The structure of claim 11 wherein the workfunction of said first metal has been increased. 13. The structure of claim 12 wherein said transistor of a first type has a metal layer of the same material as the first metal layer of said transistor of a second type but said first metal layer has a different workfunction than said metal layer of said transistor of a first type. 14. The structure of claim 13 wherein said transistor of said first type is an n-type transistor and said transistor of a second type is a p-type transistor. 15. The structure of claim 14 wherein said transistors include a gate dielectric layer having a dielectric constant greater than 10. 16. A method comprising: forming a pair of trenches over a semiconductor structure; depositing a layer of metal in each of said trenches; modifying the workfunction of the metal in one of said trenches while preventing modification of the workfunction of the metal in the other of said trenches; and forming metal gate electrodes over said metal layer in each of said trenches. 17. The method of claim 16 including altering the workfunction by ion implanting the metal layer. 18. The method of claim 16 including increasing the workfunction of the metal layer in one of said trenches. 19. The method of claim 16 including forming a high dielectric constant gate dielectric material under said metal layer. 20. The method of claim 19 including forming n-type and p-type transistors having metal gate electrodes in said trenches. |
FORMING DUAL METAL COMPLEMENTARY METAL OXIDE SEMICONDUCTOR INTEGRATED CIRCUITSBackgroundThe present invention relates to methods for making semiconductor devices, in particular, semiconductor devices with metal gate electrodes. MOS field-effect transistors with very thin gate dielectrics made from silicon dioxide may experience unacceptable gate leakage currents. Forming the gate dielectric from certain high dielectric constant (K) dielectric materials, instead of silicon dioxide, can reduce gate leakage. As used herein, high-k dielectric means having a dielectric constant higher than 10. When, however, a high-k dielectric film is initially formed, it may have a slightly imperfect molecular structure. To repair such a film, it may be necessary to anneal it at a relatively high temperature.Because such a high-k dielectric layer may not be compatible with polysilicon, it may be desirable to use metal gate electrodes in devices that include high-k gate dielectrics. When making a CMOS device that includes metal gate electrodes, it may be necessary to make the NMOS and PMOS gate electrodes from different materials. A replacement gate process may be used to form gate electrodes from different metals. In that process, a first polysilicon layer, bracketed by a pair of spacers, is removed selectively to a second polysilicon layer to create a trench between the spacers. The trench is filled with a first metal. The second polysilicon layer is then removed, and replaced with a second metal that differs from the first metal. Thus, there is a need for alternate ways to form replacement metal gate electrodes.Brief Description of the DrawingsFigures IA-IN represent cross-sections of structures that may be formed when carrying out an embodiment of the method of the present invention.Features shown in these figures are not intended to be drawn to scale.Detailed Description Figures 1A-1N illustrate structures that may be formed, when carrying out an embodiment of the method of the present invention. Initially, high-k gate dielectric layer 170 and a sacrificial metal layer 169 are formed on substrate 100, generating the Figure IA structure. Substrate 100 may comprise a bulk silicon or silicon-on-insulator substructure. Alternatively, substrate 100 may comprise other materials - which may or may not be combined with silicon - such as: germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. Although a few examples of materials from which substrate 100 may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the present invention. Some of the materials that may be used to make high-k gate dielectric layer 170 include: hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. Particularly preferred are hafnium oxide, zirconium oxide, titanium oxide and aluminum oxide. Although a few examples of materials that may be used to form high-k gate dielectric layer 170 are described here, that layer may be made from other materials that serve to reduce gate leakage. 
The layer 170 has a dielectric constant higher than 10 and from 15 to 25 in one embodiment of the present invention.High-k gate dielectric layer 170 may be formed on substrate 100 using a conventional deposition method, e.g., a conventional chemical vapor deposition ("CVD"), low pressure CVD, or physical vapor deposition ("PVD") process. Preferably, a conventional atomic layer CVD process is used. In such a process, a metal oxide precursor (e.g., a metal chloride) and steam may be fed at selected flow rates into a CVD reactor, which is then operated at a selected temperature and pressure to generate an atomically smooth interface between substrate 100 and high-k gate dielectric layer 170. The CVD reactor should be operated long enough to form a layer with the desired thickness. In most applications, high-k gate dielectric layer 170 may be less than about 60 Angstroms thick, for example, and, in one embodiment, between about 5 Angstroms and about 40 Angstroms thick.A sacrificial metal layer 169 may be formed over the dielectric layer 170. The sacrificial metal layer 169 may be any metal that is capable of withstanding high temperatures (greater than 450<0>C) without reaction with overlying materials. As one example, the sacrificial metal layer 14 may be formed of titanium nitride. In one embodiment, the layer 169 may be formed by sputtering. In another embodiment, the layer 169 may be formed by atomic layer deposition. After high-k gate dielectric layer 170 and sacrificial metal layer 169 are formed on substrate 100, sacrificial layer 171 is formed on high-k gate dielectric layer 170 as shown in Figure IB. In this embodiment, hard mask layer 172 is then formed on sacrificial layer 171, generating the figure IB structure. Sacrificial layer 171 may comprise polysilicon and may be deposited on sacrificial metal layer 169 using a conventional deposition process. Sacrificial layer 171 may be, for example, between about 100 and about 2,000 Angstroms thick, and, in one embodiment, between about 500 and about 1,600 Angstroms thick.Hard mask layer 172 may comprise silicon nitride between about 100 and about 1000 Angstroms thick, for example, and between about 200 and about 350 Angstroms thick in one embodiment. Hard mask layer 172 may be formed on sacrificial layer 171.Sacrificial layer 171 and hard mask layer 172 are then patterned to form patterned hard mask layers 130, 131, and patterned sacrificial layers 104, 106, and 169 - as Figure 1C illustrates. Conventional wet or dry etch processes may be used to remove unprotected parts of hard mask layer 172, sacrificial metal layer 169 and sacrificial layer 171. In this embodiment, after those layers have been etched, exposed part 174 of high-k gate dielectric layer 170 is removed.Although exposed part 174 of high-k gate dielectric layer 170 may be removed using dry or wet etch techniques, it may be difficult to etch that layer using such processes without adversely affecting adjacent structures. It may be difficult to etch high-k gate dielectric layer 170 selectively to the underlying substrate using a dry etch process, and wet etch techniques may etch high-k gate dielectric layer 170 isotropically - undercutting overlying sacrificial layers 104, 106 in an undesirable fashion.To reduce the lateral removal of high-k gate dielectric layer 170, as exposed part 174 of that layer is etched, exposed part 174 of high-k gate dielectric layer 170 may be modified to facilitate its removal selectively to covered part 175 of that layer. 
Exposed part 174 may be modified by adding impurities to that part of high-k gate dielectric layer 170 after sacrificial layer 171 has been etched. A plasma enhanced chemical vapor deposition ("PECVD") process may be used to add impurities to exposed part 174 of high-k gate dielectric layer 170. In such a PECVD process, a halogen or halide gas (or a combination of such gases) may be fed into a reactor prior to striking a plasma. The reactor should be operated under the appropriate conditions (e.g., temperature, pressure, radio frequency, and power) for a sufficient time to modify exposed part 174 to ensure that it may be removed selectively to other materials. In one embodiment, a low power PECVD process, e.g., one taking place at less than about 200 Watts, is used.In one embodiment, hydrogen bromide ("HBr") and chlorine ("Cl2") gases are fed into the reactor at appropriate flow rates to ensure that a plasma generated from those gases will modify exposed part 174 in the desired manner. Between about 50 and about 100 Watts wafer bias (for example, about 100 Watts) may be applied for a sufficient time to complete the desired transformation of exposed part 174. Plasma exposure lasting less than about one minute, and perhaps as short as 5 seconds, may be adequate to cause that conversion.After exposed part 174 has been modified, it may be removed. The presence of the added impurities enables that exposed part to be etched selectively to covered part 175 to generate the Figure ID structure. In one embodiment, exposed part 174 is removed by exposing it to a relatively- strong acid, e.g., a halide based acid (such as hydrobromic or hydrochloric acid) or phosphoric acid. When a halide based acid is used, the acid preferably contains between about 0.5% and about 10% HBr or HCl by volume - and more preferably about 5% by volume. An etch process that uses such an acid may take place at or near room temperature, and last for between about 5 and about 30 minutes - although a longer exposure may be used if desired. When phosphoric acid is used, the acid may contain between about 75% and about 95% H3PO4 by volume. An etch process that uses such an acid may, for example, take place at between about 140<0>C and about 18O<0>C, and, in one embodiment, at about 16O<0>C. When such an acid is used, the exposure step may last between about 30 seconds and about 5 minutes - and for about one minute for a 20 Angstrom thick film.Figure ID represents an intermediate structure that may be formed when making a complementary metal oxide semiconductor ("CMOS") . That structure includes first part 101 and second part 102 of substrate 100 shown in Figure IE. Isolation region 103 separates first part 101 from second part 102. Isolation region 103 may comprise silicon dioxide, or other materials that may separate the transistor's active regions. First sacrificial layer 104 is formed on first high-k gate dielectric layer 105, and second sacrificial layer 106 is formed on second high-k gate dielectric layer 107. Hard masks 130, 131 are formed on sacrificial layers 104, 106. After forming the Figure ID structure, spacers may be formed on opposite sides of sacrificial layers 104, 106. When those spacers comprise silicon nitride, they may be formed in the following way. First, a silicon nitride layer of substantially uniform thickness, for example, less than about 1000 Angstroms thick - is deposited over the entire structure, producing the structure shown in Figure IE. 
Conventional deposition processes may be used to generate that structure.In one embodiment, silicon nitride layer 134 is deposited directly on substrate 100 and opposite sides of sacrificial layers 104, 106 - without first forming a buffer oxide layer on substrate 100 and layers 104, 106. In alternative embodiments, however, such a buffer oxide layer may be formed prior to forming layer 134. Similarly, although not shown in Figure IE, a second oxide may be formed on layer 134 prior to etching that layer. If used, such an oxide may enable the subsequent silicon nitride etch step to generate an L-shaped spacer.Silicon nitride layer 134 may be etched using a conventional process for anisotropically etching silicon nitride to create the Figure IF structure. As a result of that etch step, sacrificial layer 104 is bracketed by a pair of sidewall spacers 108, 109, and sacrificial layer 106 is bracketed by a pair of sidewall spacers 110, 111.As is typically done, it may be desirable to perform multiple masking and ion implantation steps (Figure IG) to create lightly implanted regions 135a-138a near layers 104, 106 (that will ultimately serve as tip regions for the device's source and drain regions), prior to forming spacers 108, 109, 110, 111 on sacrificial layers 104, 106. Also as is typically done, the source and drain regions 135-138 may be formed, after forming spacers 108, 109, 110, 111, by implanting ions into parts 101 and 102 of substrate 100, followed by applying an appropriate anneal step.An ion implantation and anneal sequence used to form n- type source and drain regions within part 101 of substrate 100 may dope sacrificial layer 104 n-type at the same time. Similarly, an ion implantation and anneal sequence used to form p-type source and drain regions within part 102 of substrate 100 may dope sacrificial layer 106 p-type. When doping sacrificial layer 106 with boron, that layer should include that element at a sufficient concentration to ensure that a subsequent wet etch process, for removing n-type germanium containing layer 104, will not remove a significant amount of p-type sacrificial layer 106. The anneal will activate the dopants that were previously introduced into the source and drain regions and tip regions and into sacrificial layers 104, 106. In a preferred embodiment, a rapid thermal anneal is applied that takes place at a temperature that exceeds about 1,000<0>C - and, optimally, that takes place at 1,080<0>C. In addition to activating the dopants, such an anneal may modify the molecular structure of high-k gate dielectric layers 105, 107 to create gate dielectric layers that may demonstrate improved performance. Because of the imposition of the sacrificial metal layer 169, better performing dielectric layers 170 may result from these high temperature steps without significant reaction between the high dielectric constant dielectric layer 170 and the sacrificial layer 171. After forming spacers 108, 109, 110, 111, dielectric layer 112 may be deposited over the device, generating the Figure IG structure. Dielectric layer 112 may comprise silicon dioxide, or a low-k material. Dielectric layer 112 may be doped with phosphorus, boron, or other elements, and may be formed using a high density plasma deposition process. By this stage of the process, source and drain regions 135, 136, 137, 138, which are capped by suicided regions 139, 140, 141, 142, have already been formed. 
Those source and drain regions may be formed by implanting ions into the substrate, then activating them. Alternatively, an epitaxial growth process may be used to form the source and drain regions, as will be apparent to those skilled in the art.Commonly used nitride spacer, source/drain, and suicide formation techniques to make the Figure IG structure. That structure may include other features - not shown, so as not to obscure the method of the present invention - that may be formed using conventional process steps.Dielectric layer 112 is removed from hard masks 130, 131, which are, in turn, removed from patterned sacrificial layers 104, 106, producing the Figure IH structure. A conventional chemical mechanical polishing ("CMP") operation may be applied to remove that part of dielectric layer 112 and hard masks 130, 131. Hard masks 130, 131 may be removed to expose patterned sacrificial layers 104, 106. Hard masks 130, 131 may be polished from the surface of layers 104, 106, when dielectric layer 112 is polished - as they will have served their purpose by that stage in the process. After forming the Figure IH structure, sacrificial layers 104 or 106 are removed to generate trenches 113, producing the structure shown in figure II. A 1% solution of HF may be used for 15 to 30 seconds to remove the chemical oxide formed over the remaining polysilicon.In a second embodiment, a wet etch process that is selective for layers 104 over sacrificial layer 106 is applied to remove layers 104 and 169 without removing significant portions of layer 106. When sacrificial layer 104 is doped n-type, and sacrificial layer 106 is doped p- type (e.g., with boron), such a wet etch process may comprise exposing sacrificial layer 104 to an aqueous solution that comprises a source of hydroxide for a sufficient time at a sufficient temperature to remove substantially all of layer 104. That source of hydroxide may comprise between about 2 and about 30 percent ammonium hydroxide or a tetraalkyl ammonium hydroxide, e.g., tetramethyl ammonium hydroxide ("TMAH") , by volume in deionized water. Any remaining sacrificial layer 104 may be selectively removed by exposing it to a solution, which is maintained at a temperature between about 15<0>C and about 90<0>C (for example, below about 40<0>C), that comprises between about 2 and about 30 percent ammonium hydroxide by volume in deionized water. Puring that exposure step, which preferably lasts at least one minute, it may be desirable to apply sonic energy at a frequency of between about 10 kHz and about 2,000 kHz, while dissipating at between about 1 and about 10 Watts/cm<2>.In the second embodiment, sacrificial layer 104, with a thickness of about 1,350 Angstroms, may be selectively removed by exposing it at about 25<0>C for about 30 minutes to a solution that comprises about 15 percent ammonium hydroxide by volume in deionized water, while applying sonic energy at about 1,000 kHz - dissipating at about 5 Watts/cm<2>. Such an etch process should remove substantially all of an n-type sacrificial layer without removing a meaningful amount of a p-type sacrificial layer.As a third embodiment, sacrificial layer 104 may be selectively removed by exposing it for at least one minute to a solution, which is maintained at a temperature between about 6O<0>C and about 9O<0>C, that comprises between about 20 and about 30 percent TMAH by volume in deionized water, while applying sonic energy. 
Removing sacrificial layer 104, with a thickness of about 1,350 Angstroms, by exposing it at about 80<0>C for about 2 minutes to a solution that comprises about 25 percent TMAH by volume in deionized water, while applying sonic energy at about 1,000 kHz - dissipating at about 5 Watts/cm<2> - may remove substantially all of layer 104 without removing a significant amount of layer 106. First high-k gate dielectric layer 105 should be sufficiently thick to prevent the etchant that is applied to remove sacrificial layer 104 from reaching the channel region that is located beneath first high-k gate dielectric layer 105. The sacrificial metal layer 169 may also be removed by selective etching. In some embodiments, the layer 169 may not be removed. In some embodiments, the dielectric layer 105 may be removed before forming the replacement metal gate. In such case, a metal oxide gate dielectric may be formed before forming the replacement gate.In the illustrated embodiment, n-type metal layer 180 is formed directly on layers 105 and in the trenches 113 to generate the figure U structure. N-type metal layer 180 may comprise any n-type conductive material. N-type metal layer 180 preferably has thermal stability characteristics that render it suitable for making a metal NMOS gate electrode for a semiconductor device. In one embodiment, the layer 180 may be between 30 and 1000 Angstroms thick and may be deposited by physical vapor deposition or chemical vapor deposition.Materials that may be used to form n-type metal layer 180 include: hafnium, zirconium, titanium, tantalum, aluminum, and their alloys, e.g., metal carbides that include these elements, i.e., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. N-type metal layer 180 may be formed on first high-k gate dielectric layer 105 using well known PVD or CVD processes, e.g., conventional sputter or atomic layer CVD processes .The p-type side 200 may be masked and an n-type layer 115 may be deposited on the n-type side 202 to form the Figure IK structure. The layer 115 may be the same as the layer 180, in one embodiment.N-type metal layers 115 and 180 may serve as a metal NMOS gate electrode that has a workfunction that is between about 3.9 eV and about 4.2 eV, and that is between about 100 Angstroms and about 2,000 Angstroms thick and, in one embodiment, may particularly be between about 500 Angstroms and about 1,600 Angstroms thick. Although Figure IK represents structures in which n-type metal layers 115, 180 fill all of trench 113, in alternative embodiments, n-type metal layer 115 may fill only part of trench 113, with the remainder of the trench being filled with a material that may be easily polished, e.g., tungsten, aluminum, titanium, or titanium nitride. Using a higher conductivity fill metal in place of the workfunction metal may improve the overall conductivity of the gate stack. In such an alternative embodiment, n-type metal layer 115, which serves as the workfunction metal, may be between about 50 and about 1,000 Angstroms thick and, for example, at least about 100 Angstroms thick. In embodiments in which trench 113 includes both a workfunction metal and a trench fill metal, the resulting metal NMOS gate electrode may be considered to comprise the combination of both the workfunction metal and the trench fill metal. If a trench fill metal is deposited on a workfunction metal, the trench fill metal may cover the entire device when deposited, forming a structure like the Figure IJ structure. 
That trench fill metal must then be polished back so that it fills only the trench, generating a structure like the Figure IK structure.In the illustrated embodiment, after forming n-type metal layer 115 within trench 113, the masking of p-type side 200 may be removed and the horizontal portions of the layer 180, as well as the horizontal portions of the 115, may be polishing off, and n-type side 202 may be masked. Then a workfunction adjusting implant I is performed on the p-type side 200 as shown in Figure IL. The implant species may be nitrogen, oxygen, chlorine, fluorine, or bromine, for example, to increase the workfunction of the n-type layer 180 to make it more suitable for use in p-type transistors. Alternatively, the workfunction increasing species may be aided by plasma enhanced ion implantation, furnace diffusion, or plasma deposition, to mention a few examples. The species may be added until the species makes up from about 3 to about 50 atomic percent of the exposed layer 180. In many cases, between about 5 and about 10 atomic percent may be sufficient doping. If the trenches 113 have a reentrant profile, an angled implant may be used.In this embodiment, p-type metal layer 116 is formed directly on layer 180 to fill trench 115 on the p-type side 200 and to generate the figure IM structure. P-type metal layer 116 may comprise any p-type conductive material from which a metal PMOS gate electrode may be derived. P-type metal layer 116 preferably has thermal stability characteristics that render it suitable for making a metal PMOS gate electrode for a semiconductor device.Materials that may be used to form p-type metal layer 116 include: ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. P-type metal layer 116 may be formed on second high-k gate dielectric layer 107 using well known PVD or CVD processes, e.g., conventional sputter or atomic layer CVD processes. As shown in figure IN, p-type metal layer 116 is removed except where it fills trench 113. Layer 116 may be removed from other portions of the device via a wet or dry etch process, or an appropriate CMP operation, with dielectric 112 serving as an etch or polish stop.P-type metal layer 116 may serve as a metal PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV, and that is between about 100 Angstroms and about 2,000 Angstroms thick, and more preferably is between about 500 Angstroms and about 1,600 Angstroms thick. Although Figures IM and IN represent structures in which p- type metal layer 116 fills all of trench 150, in alternative embodiments, p-type metal layer 116 may fill only part of trench 150. As with the metal NMOS gate electrode, the remainder of the trench may be filled with a material that may be easily polished, e.g., tungsten, aluminum, titanium, or titanium nitride. In such an alternative embodiment, p- type metal layer 116, which serves as the workfunction metal, may be between about 50 and about 1,000 Angstroms thick. Like the metal NMOS gate electrode, in embodiments in which trench 150 includes a workfunction metal and a trench fill metal, the resulting metal PMOS gate electrode may be considered to comprise the combination of both the workfunction metal and the trench fill metal.After removing metal layer 116, except where it fills trench 113, a capping dielectric layer may be deposited onto dielectric layer 112, metal NMOS gate electrode 115, and metal PMOS gate electrode 116, using any conventional deposition process. 
Process steps for completing the device that follow the deposition of such a capping dielectric layer, e.g., forming the device's contacts, metal interconnect, and passivation layer, are well known to those skilled in the art and will not be described here.While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.What is claimed is: |
A fail-safe thermal sensor is implemented in an integrated circuit such as a microprocessor. The fail-safe thermal sensor monitors the temperature of the integrated circuit and halt logic halts operation of the integrated circuit in response to the fail-safe thermal sensor indicating that a threshold temperature has been exceeded. The threshold temperature may be a predetermined fixed critical temperature. The halt logic may inhibit operation of the integrated circuit by stopping a clock for the integrated circuit. |
What is claimed is:1. An integrated circuit comprising:a fail safe sensor;a programmable thermal sensor;halt logic to halt operation of the integrated circuit in response to the fail safe sensor indicating that a pre-programmed fixed threshold temperature has been exceeded; andclock adjustment logic to control temperature of the integrated circuit in response to the programmable thermal sensor indicating that a programmable threshold temperature has been exceeded by decreasing a clock frequency of the integrated circuit.2. The integrated circuit of claim 1 wherein the halt logic is to inhibit operation of the integrated circuit by stopping a clock for the integrated circuit.3. The integrated circuit of claim 1 wherein the halt logic protects the integrated circuit without software control.4. The integrated circuit of claim 1 comprising:a plurality of programmable thermal sensors placed across the integrated circuit;an averaging mechanism in communication with the plurality of programmable thermal sensors to calculate an average temperature from the plurality of programmable thermal sensors.5. The integrated circuit of claim 1 wherein the clock adjustment logic is further to control the temperature of the integrated circuit by increasing the clock frequency of the integrated circuit.6. The integrated circuit of claim 1 wherein the clock adjustment logic is further to execute instructions to provide closed loop control of the integrated circuit clock frequency, thereby automatically reducing the temperature when overheating occurs.7. The integrated circuit of claim 1 further comprising threshold adjustment logic to increase the programmable threshold temperature value to a new threshold temperature value in response to the programmable thermal sensor indicating that the threshold temperature value has been exceeded.8. The integrated circuit of claim 7 wherein the threshold adjustment logic is further to lower the new threshold temperature to detect decreases in temperature.9. The integrated circuit of claim 1 further comprising an interrupt handler to display information regarding a sensed temperature to a user of the integrated circuit upon generation of an interrupt in the fail safe sensor or the programmable thermal sensor.10. A method comprising:sensing a temperature of an integrated circuit using a first sensor provided on the integrated circuit;sensing the temperature of the integrated circuit using a second sensor provided on the integrated circuit;halting operation of the integrated circuit in response to sensing with the first sensor that a pre-programmed fixed threshold temperature has being exceeded; andcontrolling a clock frequency of the integrated circuit by decreasing the clock frequency in response to sensing with the second sensor that a programmable threshold temperature has been exceeded.11. The method of claim 10 wherein halting operation comprises inhibiting operation of the integrated circuit by stopping a clock for the integrated circuit.12. The method of claim 10 wherein halting operation comprises halting operation of the integrated circuit without software control.13. The method of claim 10 wherein controlling further comprises increasing the clock frequency of the integrated circuit in response to the sensed temperature.14. The method of claim 10 wherein controlling further comprises executing instructions to provide closed loop control of the integrated circuit clock frequency in response to the sensed temperature.15. 
The method of claim 10 further comprising displaying information regarding a sensed temperature to a user of the integrated circuit in response to generation of an interrupt in the first sensor or the second sensor.16. The integrated circuit of claim 1, wherein the integrated circuit is a microprocessor. |
CROSS-REFERENCE TO RELATED APPLICATIONSThis application is a division of prior application Ser. No. 09/093,988 filed Jun. 8, 1998 now abandoned which is a continuation of prior application Ser. No. 08/660,016, filed Jun. 6, 1996, issued as U.S. Pat. No. 5,838,578 on Nov. 17, 1998, which is a continuation of prior application Ser. No. 08/124,980, filed Sep. 21, 1993 abandoned, all entitled "Method and Apparatus for Programmable Thermal Sensor for an Integrated Circuit" and all assigned to the assignee of the present application.FIELD OF THE INVENTIONThe present invention relates to thermal sensing, and more specifically to methods and apparatus for a programmable thermal sensor in an integrated circuit.ART BACKGROUNDAdvances in silicon process technology has lead to the development of increasingly larger die sizes for integrated circuits. The large dies sizes permit integration of millions of transistors on a single die. As die sizes for integrated circuits become larger, the integrated circuits consume more power. In addition, advances in microprocessor computing require execution of a large number of instructions per second. To execute more instructions per second, the microprocessor circuits operate at an increased clock frequency. Therefore, a microprocessor containing over one million transistors may consume over 30 watts of power. With large amounts of power being dissipated, cooling becomes a problem.Typically, integrated circuits and printed circuit boards are cooled by either active or passive cooling devices. A passive cooling device, such as a heat sink mounted onto an integrated circuit, has a limited capacity to dissipate heat. An active cooling device, such as a fan, is used to dissipate larger amounts of heat. Although a fan cooling system dissipates heat, there are several disadvantages associated with such a system. Traditionally, fans cool integrated circuits by air convection circulated by a fan. However, when a fan is used in conjunction with a high density multi-chip computer system, a large volume of air is required for cooling thereby necessitating powerful blowers and large ducts. The powerful blowers and large ducts implemented in the computer occupy precious space and are too noisy. The removal of a cover or other casing may result in a disturbance of air flow causing the fan cooling system to fail. In addition, the fan cooling system is made up of mechanical parts that have a mean time between failure (MTBF) specification less than a typical integrated circuit. Furthermore, fan cooling systems introduce noise and vibration into the system.In addition to cooling systems, thermal sensors are implemented to track the temperature of an integrated circuit or electronic system. Typically, thermal sensors consist of a thermocouple which is directly attached to a heat sink. In more sophisticated thermal sensing systems, a diode and external analog circuitry are used. In operation, the voltage/current characteristics of the diode change depending upon the temperature of the integrated circuit, and the external analog circuitry measures the voltage or current characteristics of the diode. The additional analog circuitry is complex and difficult to implement. In addition, employing the analog circuitry results in a thermal time delay degrading the accuracy of such a configuration. Moreover, external analog circuitry for sensing the voltage of the diode consumes a larger area than the integrated circuit being sensed. 
Therefore, it is desirable to provide a thermal sensor which is incorporated into the integrated circuit. In addition, it is desirable to provide a thermal sensor that can provide feedback for an active cooling system. Furthermore, it is desirable to control the temperature of an integrated circuit without the use of a fan. The present invention provides an integrated thermal sensor that detects a threshold temperature so that active cooling of the integrated circuit is accomplished through system control.SUMMARY OF THE INVENTIONA programmable thermal sensor is implemented in an integrated circuit. The programmable thermal sensor monitors the temperature of the integrated circuit, and generates an output to indicate that the temperature of the integrated circuit has attained a predetermined threshold temperature. The programmable thermal sensor contains a voltage reference, a programmable Vbe, a current source, and a sense amplifier or comparator. The current source generates a constant current to power the voltage reference and the programmable Vbe. With a constant current source, the voltage reference generates a constant voltage over varying temperatures and power supply voltages. In a preferred embodiment, the voltage reference is generated with a silicon bandgap reference circuit. The constant voltage from the voltage reference is one input to the sense amplifier. The programmable Vbe contains a sensing portion and a multiplier portion. In general, the programmable Vbe generates a voltage dependent upon the temperature of the integrated circuit and the value of programmable inputs. The programmable inputs are supplied to the multiplier portion to generate a multiplier value for use in the multiplier portion. The voltage reference is compared with the voltage generated by the programmable Vbe in the sense amplifier. The sense amplifier generates a greater than, less than, signal.The programmable thermal sensor of the present invention is implemented in a microprocessor. In addition to the programmable thermal sensor, the microprocessor contains a processor unit, an internal register, microprogram and clock circuitry. The processor unit incorporates the functionality of any microprocessor circuit. The clock circuitry generates a system clock for operation of the microprocessor. In general, the microprogram writes programmable input values to the internal register. The programmable input values correspond to threshold temperatures. The programmable thermal sensor reads the programmable input values, and generates an interrupt when the temperature of the microprocessor reaches the threshold temperature. In a first embodiment, the interrupt is input to the microprogram and the processor unit. In response to an interrupt, the processor unit may take steps to cool the temperature of the microprocessor, and the microprogram programs a new threshold temperature. For example, the processor may turn on a fan or reduce the clock frequency. The new threshold temperature is slightly higher than the current threshold temperature so that the processor unit may further monitor the temperature of the microprocessor.In a second embodiment of the present invention, the interrupt generated by the programmable thermal sensor is input to external sensor logic. The external sensor logic automatically controls the frequency of the microprocessor. If the temperature of the microprocessor raises, then the clock frequency is decreased. 
Conversely, if the temperature of the microprocessor drops, then the system clock frequency is increased. In addition to a programmable thermal sensor, the microprocessor contains a fail safe thermal sensor. The fail safe thermal sensor generates an interrupt when detecting that the microprocessor reaches predetermined threshold temperatures and subsequently halts operation of the system clock. The predetermined threshold temperature is selected below a temperature that causes physical damage to the device. The microprocessor of the present invention is implemented in a computer system. Upon generation of an interrupt in the programmable thermal sensor, a message containing thermal sensing information is generated and displayed to a user of the computer system.BRIEF DESCRIPTION OF THE DRAWINGSThe objects, features, and advantages of the present invention will be apparent from the following detailed description of the preferred embodiment of the invention with references to the following drawings.FIG. 1 illustrates a block diagram of a programmable thermal sensor configured in accordance with the present invention.FIG. 2 illustrates a graph depicting the relationship between the base-emitter voltage (Vbe) of a bipolar transistor versus the temperature of the supply voltage.FIG. 3 illustrates a bandgap reference circuit configured in accordance with the present invention.FIG. 4 illustrates a programmable base to emitter voltage (Vbe) circuit configured in accordance with the present invention.FIG. 5 illustrates a current source, including the bandgap reference circuit, configured in accordance with the present invention.FIG. 6 illustrates a sense amplifier for the thermal sensor configured in accordance with the present invention.FIG. 7 illustrates block diagram of a first embodiment of a microprocessor incorporating a programmable thermal sensor configured in accordance with the present invention.FIG. 8 illustrates a flow diagram for a method of controlling the programmable thermal sensor configured in accordance with the present invention.FIG. 9 illustrates a block diagram of a second embodiment of a microprocessor incorporating a programmable thermal sensor configured in accordance with the present invention.FIG. 10 illustrates a block diagram of a microprocessor incorporating a fail safe thermal sensor configured in accordance with the present invention.FIG. 11 illustrates a computer system incorporating a microprocessor comprising thermal sensing configured in accordance with the present invention.NOTION AND NOMENCLATUREThe detailed descriptions which follow are presented, in part, in terms of algorithms and symbolic representations of operations within a computer system. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. These steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. 
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein which form part of the present invention; the operations are machine operations. Useful machines for performing the operations of the present invention include general purpose digital computers or other similar devices. In all cases there should be borne in mind the distinction between the method operations in operating a computer and the method of computation itself. The present invention relates to method steps for operating a computer in processing electrical or other (e.g., mechanical, chemical) physical signals to generate other desired physical signals.The present invention also relates to apparatus for performing these operations. This apparatus may be specially constructed for the required purposes or it may comprise a general purpose computer as selectively activated or reconfigured by a computer program stored in the computer. The algorithms presented herein are not inherently related to a particular computer or other apparatus. In particular, various general purpose machines may be used with programs written in accordance with the teachings herein, or it may prove more convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description given below. Machines which may perform the functions of the present invention include those manufactured by Intel Corporation, as well as other manufacturers of computer systems.DETAILED DESCRIPTIONMethods and apparatus for thermal sensing in an integrated circuit are disclosed. In the following description, for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that these specific details are not required to practice the present invention. In other instances, well known circuits and devices are shown in block diagram form to avoid obscuring the present invention unnecessarily.Referring to FIG. 1, a block diagram of a programmable thermal sensor configured in accordance with the present invention is illustrated. In general, a programmable thermal sensor 100 monitors the temperature of an integrated circuit, and generates an output to indicate that the temperature of the integrated circuit has attained a predetermined threshold temperature. The programmable thermal sensor 100 contains a voltage reference 120, a programmable Vbe 110, a current source 140, and a sense amplifier 160. The current source 140 generates a constant current to power the voltage reference 120 and the programmable Vbe 110. With a constant current source, the voltage reference 120 generates a constant voltage over varying temperatures and power supply voltages (Vcc). In a preferred embodiment, the voltage reference is generated with a silicon bandgap reference circuit. The constant voltage from the voltage reference 120 is input to the sense amplifier 160. The programmable Vbe 110 contains a sensing portion and a multiplier portion. 
In general, the programmable Vbe 110 generates a voltage dependent upon the temperature of the integrated circuit and the value of programmable inputs. The programmable inputs are supplied to the multiplier portion to generate a multiplier value for use in the multiplier portion.Referring to FIG. 2, a graph depicting the relationship between the base-emitter voltage (Vbe) of a bipolar transistor versus temperature is illustrated. A characteristic curve 200 on the graph of FIG. 2 shows the linear characteristics of the Vbe voltage over a temperature range of 70 degrees Fahrenheit (70[deg.] F.) to 140[deg.] F. In addition, the graph of FIG. 2 shows a relative constant bandgap voltage curve 205 over the specified temperature range. Although the bandgap voltage varies slightly over the temperature range, the variation of the bandgap voltage is negligible compared to the variation of the Vbe voltage over the temperature range. As shown by the curve 205 in FIG. 2, the bandgap voltage is equal to approximately 1.3 volts (V). When the Vbe voltage equals 1.3 volts, the temperature of the integrated circuit is 100[deg.] F. Based on the linear temperature characteristics of the Vbe voltage, and the relatively constant bandgap voltage over the temperature range, a thermal sensor is constructed.For the voltage/temperature characteristics of line 200 shown in FIG. 2, the bandgap voltage equals the Vbe voltage when the integrated circuit is at 100[deg.] F. However, the Vbe voltage may be changed to sense additional temperature values in the integrated circuit. By shifting the linear Vbe voltage/temperature characteristic curve 200, any number of predetermined threshold temperature values are detected. To shift the voltage/temperature characteristic curve 200, the Vbe voltage is multiplied by pre-determined values to generate a new voltage for comparison to the bandgap voltage. Specifically, to shift the characteristic curve 200 to sense a voltage less then 100[deg.] F., the Vbe voltage is multiplied by a fraction to generate a new characteristic curve, such as the characteristic curve 210 shown in FIG. 2. The characteristic curve 210 exhibits the same slope as the original characteristic curve 200. However, for the characteristic curve 210, the Vbe voltage is equal to the bandgap voltage when the integrated circuit temperature equals 90[deg.] F. Similarly, the Vbe voltage may be multiplied by a value greater than 1 to generate a characteristic curve such as the characteristic curve 220 shown in FIG. 2. The characteristic curve 220 also exhibits the same slope as the original characteristic curve 200. However, the characteristic curve 220 intersects the bandgap voltage curve 205 at 120[deg.] F. Consequently, any number of threshold temperatures are detectable by multiplying the Vbe voltage by a predetermined constant.Referring to FIG. 3, a bandgap reference circuit configured in accordance with the present invention is illustrated. The bandgap reference circuit 120 is powered from a voltage source, Vcc. The voltage source Vcc is regulated by a current source such that the current source 140 supplies a constant current over a wide range of Vcc voltages. A preferred embodiment of the present invention for the current source 140 is described fully below. The bandgap reference circuit 120 contains three N-P-N bipolar transistors Q1, Q2 and Q3, and three resistive elements R1, R2 and R3. In general, the constant bandgap reference voltage, Vbandgap, is generated at the collector of N-P-N transistor Q3. 
The bipolar transistors Q1, Q2 and resistive elements R1, R2 and R3 are provided to compensate for temperature variations in the base to emitter junction voltage (Vbe) of bipolar transistor Q3. Specifically, the resistive element R1 is coupled from the current source 140 to the collector of bipolar transistor Q1. The collector and base of bipolar transistor Q1 are shorted so that Q1 is effectively a P-N junction diode. The base of transistor Q1 and the base of transistor Q2 are coupled together. The resistive element R3 couples the collector of transistor Q2 to the current source 140, and the resistive element R2 couples the emitter of transistor Q2 to ground. In a preferred embodiment of the present invention, the resistive element R1 equals 4800 ohms, the resistive element R2 equals 560 ohms, and the resistive element R3 equals 4800 ohms.In operation, the voltage at the base of transistors Q1 and Q2 are pulled to the Vbandgap voltage through the R1 resistance. Therefore, the transistors Q1 and Q2 are biased in the active region, thereby allowing current to flow from the collector to the emitter of transistors Q1 and Q2. The mirrored configuration of transistors Q1 and Q2 tends to drive the base to emitter voltage (Vbe) of transistors Q1 and Q2 equivalent. However, the resistive element R2 increases the resistance at the emitter of transistor Q2, resulting in a greater current density flowing through transistor Q1 than flowing through transistor Q2. As the temperature in the integrated circuit rises, the Vbe of transistor Q2 decreases. In turn, the decrease of Vbe on transistor Q2 causes a decrease in the current density flow through Q2. The decrease in current density through the resistive element R2 also causes a reduction in the current density flowing through the resistive element R3. Because the collector of transistor Q2 is coupled to the base of transistor Q3, a decrease in the current through resistive element R3 results in an increase in the voltage at the base of transistor Q3. Consequently, as the temperature of the integrated circuit rises, the Vbe across transistors Q1, Q2, and Q3 decreases. However, the decrease of Vbe on transistor Q3 is compensated by the increase of voltage at the base of transistor Q3. Therefore, regardless of temperature fluctuations, the Vbandgap remains at a constant silicon bandgap voltage. For a further explanation of generation of a bandgap reference, including a theoretical derivation, see A. T. Brokaw, A Simple Three-Terminal IC Bandgap Reference, IEEE J. of Solid State Circuits, December, 1974, and Karel E. Kuijk, A Precision Reference Voltage Source, IEEE J. of Solid State Circuits, June 1973.Referring to FIG. 4, a programmable base to emitter voltage (Vbe) circuit configured in accordance with the present invention is illustrated. In a preferred embodiment of the present invention, a temperature varying voltage is generated from the characteristics of a base to emitter junction on a bipolar transistor. In general, the programmable Vbe circuit generates an output voltage, Vout, based on the Vbe voltage and the value of programmable input voltages Vp1, Vp2 and Vp3. A N-P-N bipolar transistor Q11 shown in FIG. 4 is utilized to generate the Vbe reference voltage. As described above, the Vbe/temperature characteristic curve may be shifted along the temperature axis to detect a desired threshold temperature. 
By shifting the Vbe/temperature characteristic curve along the temperature axis, a plurality of output voltages representing different threshold temperatures are generated.To generate the Vout for a particular threshold temperature, a programmable Vbe multiplier circuit is utilized. The programmable Vbe multiplier circuit contains resistive elements R5, R6, R7, R8, and R9, and metal oxide semiconductor field effect transistors (MOSFET) Q12, Q13, and Q14. In a preferred embodiment, Q12, Q13 and Q14 comprise N-MOS transistors. The drain terminal of transistor Q12 is coupled to a first input on resistive element R7, and the source of transistor Q12 is coupled to a second input on resistive element R7. The transistors Q13 and Q14 are similarly coupled to resistive elements R8 and R9, respectively. Programmable input voltages Vp1, Vp2, and Vp3 are input to the gate of transistors Q12, Q13 and Q14, respectively. The input voltages Vp1, Vp2, and Vp3 control the current flow by selecting either a resistive element or the respective MOS transistor.In operation, the programmable Vbe multiplier circuit outputs a voltage, Vout, comprising a multiple of the base to emitter voltage on bipolar transistor Q11. For purposes of explanation, consider resistive elements R6, R7, R8 and R9 as one resistive element: R6-R9. The resistive element R6-R9 is connected across the base to emitter junction of bipolar transistor Q11. Therefore, the voltage drop across the resistive element R6-R9 is equivalent to Vbe of bipolar transistor Q11. The current flowing through resistive element R6-R9 is approximately equal to the current flowing through resistive element R5 minus the current flowing into the base of transistor Q11. Therefore, if the value of resistive element R5 is equal to the value of resistive element R6-R9, the voltage at the collector of transistor Q11 equals 2Vbe. In general, the Vout voltage is defined by the following equation:Vout=VR5+Vbe Vbe=VR6-R9 Vout=VR5+VR6-R9 Therefore, Vout values greater than 1 Vbe are generated by changing the ratio between resistive element R5 and resistive element R6-R9.To move the Vbe curve 200 shown in FIG. 2 along the temperature axis via the programmable Vbe circuit 110, a combination of resistive elements R7, R8 and R9 are selected. To select a combination of resistive elements R7, R8 and R9, the voltages Vp1, Vp2, and Vp3 are applied to the gates of MOS transistors Q13, Q12, and Q14, respectively. The resistive elements R7, R8 and R9 are binary weighed resistors. Each individual resistor R7, R8 and R9 can be shorted through control by Q12, Q13 and Q14 respectively. By selecting resistive elements R7, R8 and R9 as series resistors with resistive element R6, the voltage Vout is changed. In a preferred embodiment of the present invention, the resistive element R5 equals 6380, the resistive element R6 equals 5880, the resistive element R7 equals 392, the resistive element R8 equals 787, and the resistive element R9 equals 1568. By setting the resistive elements R5-R9 to the above values and programming the transistors Q13, Q12, and Q14, the voltage Vout is generated to correspond to specific threshold temperatures. 
Specifically, Table 1 illustrates the threshold temperatures programmed in response to the input voltages Vp1, Vp2, and Vp3.<tb><sep>TABLE 1<tb><sep><sep><sep><sep>Threshold<tb><sep><sep><sep><sep>Temperature<tb><sep>Vp1<sep>Vp2<sep>Vp3<sep>(Degrees C.)<tb><sep>0<sep>0<sep>0<sep> 70[deg.]<tb><sep>0<sep>0<sep>1<sep> 80[deg.]<tb><sep>0<sep>1<sep>0<sep> 90[deg.]<tb><sep>0<sep>1<sep>1<sep>100[deg.]<tb><sep>1<sep>0<sep>0<sep>110[deg.]<tb><sep>1<sep>0<sep>1<sep>120[deg.]<tb><sep>1<sep>1<sep>0<sep>130[deg.]<tb><sep>1<sep>1<sep>1<sep>140[deg.]Referring to FIG. 5, a current source including the bandgap reference circuit configured in accordance with the present invention is illustrated. The bandgap reference circuit comprises resistors R1, R2, and R3 and bipolar transistors Q1, Q2, Q3 and Q8. The operation of the bandgap reference circuit 120 is described above. However, the bandgap reference circuit of FIG. 5 also incorporates a gain stage with bipolar transistor Q8. In order to incorporate a gain stage, the collector of bipolar transistor Q3 is coupled to the base of bipolar transistor Q8. The constant bandgap reference voltage generated at the collector of bipolar transistor Q3 controls the base of bipolar transistor Q8 resulting in a signal at the emitter of bipolar transistor Q8 containing a silicon bandgap voltage with increased current density. In addition to the bandgap reference circuit, FIG. 5 illustrates a constant current source 140 including a start-up circuit portion. The constant current source 140 comprises a bipolar transistor Q4, P-MOS transistors Q5, Q7 and Q15, and resistor R4. The constant current source 140 stabilizes operation of the thermal sensor of the present invention over a range of Vcc ranges.In general, the constant current source 140 is derived from the generation of the constant bandgap reference voltage. In operation, the constant bandgap reference voltage, Vbandgap, is coupled to the base of bipolar transistor Q4. The constant bandgap reference voltage drives the bipolar transistor Q4 to generate a constant current flowing from the collector to the emitter of transistor Q4 and through the resistor R4. The P-MOS transistor Q5 is mirrored with P-MOS transistors Q7 and Q15. The constant current flowing through resistor R4 also flows through P-MOS transistor Q5 and is mirrored through P-MOS transistors Q7 and Q15. In a preferred embodiment, resistive element R4 equals 6020. The P-MOS transistor Q15 provides a constant current source for the programmable Vbe circuit 110. Similarly, P-MOS transistor Q7 provides a constant current source to the bandgap reference circuit 120 through bipolar transistors Q3 and Q8.The current source and bandgap reference voltage circuit illustrated in FIG. 5 also comprises a start-up circuit. The start-up circuit within the current source is required because the bandgap reference voltage controls the current source which, in turn, controls the bandgap reference voltage. Therefore, an equilibrium between the bandgap reference voltage and the current source circuit is required to ensure the proper operation of the thermal sensor. The start-up circuit contains P-MOS transistors Q6, Q9 and Q10. The P-MOS transistor Q9 is configured such that the gate is coupled directly to the drain. In this configuration, the P-MOS transistor Q9 operates as a load resistor. In general, the start-up circuit generates a voltage for the bandgap reference voltage circuit during initial power-up of the thermal sensor. 
Specifically, during an initial power-up of the thermal sensor circuit, transistors Q5, Q7, Q10, and Q15 are biased such that no current flows through the respective devices. Also, during the initial power-up state, the P-MOS transistor Q9 is biased to conduct current thereby supplying a low voltage level to the gate of P-MOS transistor Q6. A low voltage level at the gate of P-MOS transistor Q6 biases the P-MOS transistor Q6 such that current flows from the Vcc to bipolar transistors Q3 and Q8. The P-MOS transistor Q6 biases the base of bipolar transistor Q8 allowing generation of the bandgap reference voltage.An increase in the bandgap reference voltage driving the base of bipolar transistor Q4 causes current to flow from the emitter of Q4 through resistor R4. As the current density increases through transistors Q5 and Q10, the voltage at the gate of transistor Q6 also increases. The build up of charge at the gate of transistor Q6 is facilitated by a large resistance generated by the load transistor Q9. As the voltage at the gate of P-MOS transistor Q6 raises to the pinch-off threshold voltage of the device, the P-MOS transistor Q6 conducts no current such that current is no longer supplied to bipolar transistors Q3 and Q8. Because of the gain provided at the emitter of bipolar transistor Q8, current continues to increase in the bandgap reference voltage circuit until the collector of bipolar transistor Q3 begins to control the base of bipolar transistor Q8. At this point, the circuit has reached an equilibrium such that the constant bandgap reference voltage generated supplies a constant voltage to the current source. Also shown in FIG. 5 is a disable P-MOS transistor Q21. The P-MOS transistor Q21 powers down, or disables, the thermal sensor circuit for testing. The P-MOS transistor Q21is utilized only for disabling, and it is not required to generate the constant current source or the bandgap reference voltage. The P-MOS transistor Q15 isolates the collector of bipolar transistor Q11 on the programmable Vbe circuit from the Vcc on the current source circuit.Referring to FIG. 6, a sense amplifier for the thermal sensor configured in accordance with the present invention is illustrated. In a preferred embodiment of the present invention, a sense amplifier 160 contains three stages. The first stage and the second stage are identical. The third stage comprises a current buffer 600. The current buffer 600 is illustrated in FIG. 6 as a standard logic inverter. In general, the sense amplifier 160 operates as a comparator circuit. In operation, if the Vbandgap is greater than the Vout voltage, then the output of sense amplifier 160 is a low logic level. Alternatively, if the Vout is greater than the Vbandgap voltage, then the output of sense amplifier 160 is a high logic level. The second stage of sense amplifier 160 generates a voltage gain of signals on lines S1 and S1#. The first stage contains PMOS transistors Q16, Q17 and Q18, and NMOS transistors Q19 and Q20. The transistors Q19 and Q20 are constructed as a current mirror.The voltage Vout is input to the gate of PMOS transistor Q16, and the voltage Vgap is input to the gate of PMOS transistor Q17. In operation, if the voltage Vout is greater than the Vbandgap, then PMOS transistor Q17 is biased to conduct more current than PMOS transistor Q16. Because a greater current density flows through PMOS transistor Q17 than PMOS transistor Q16, the voltage at line S1 rises and the voltage at line S1# decreases. 
The source and gate of NMOS transistor Q19 are connected, and the source/gate connection is controlled by the voltage at S1#. Consequently, when the voltage at line S1# decreases, NMOS transistor Q19 is biased to reduce the current density flow. The voltage on line S1# is input to the gate of PMOS transistor Q18. As the voltage on line S1# decreases, the PMOS transistor Q18 is biased to conduct a greater current density. The increase in current density through transistor Q18 further amplifies the voltage difference between lines S1 and S1#. When the Vbe voltage is less than the Vgap voltage, the first stage of the sense amplifier 160 operates in an analogous manner.The second stage of sense amplifier 160 comprises PMOS transistors Q22, Q23 and Q24, and NMOS transistors Q25 and Q26. The operation of the second stage of the sense amplifier 160 is analogous to the operation of the first stage. In addition, hysteresis is provided for the sense amplifier 160 via a feedback path from the output of sense amplifier 160 to the programmable Vbe circuit Vout input of sense amplifier 160. The hysteresis provides a more stable output signal from the sense amplifier 160 such that voltage variations on the inputs of the sense amplifier 160 after generation of a high output voltage level does not cause glitches in the output signal.For the programmable thermal sensor of the present invention to operate well over process variations, the resistors are constructed to have a width larger than the minimum specification for the resistive value. All bipolar transistors in the programmable thermal sensor contain at least double width emitters. For the MOS transistors, long channel lengths are constructed. The long channel lengths of the MOS transistors help stabilize the programmable thermal sensor as well as provide noise immunity. For the bandgap reference circuit 120, the bipolar transistor Q2 is constructed to be ten times greater in size than the bipolar transistor Q1. The large size differential between bipolar transistors Q1 and Q2 provides a stable bandgap voltage reference.Referring to FIG. 7, a first embodiment of a microprocessor incorporating a programmable thermal sensor configured in accordance with the present invention is illustrated. A microprocessor 700 contains, in part, the programmable thermal sensor 100 and a processor unit 705. The processor unit 705 is intended to present a broad category of microprocessor circuits comprising a wide range of microprocessor functions. In general, the programmable thermal sensor 100 is programmed to detect a threshold temperature within the microprocessor 100. If the microprocessor 700 attains the pre-programmed threshold temperature, the programmable thermal sensor 100 generates an interrupt. As described above, the programmable thermal sensor 100 detects the pre-programmed threshold temperature based on the temperature of the integrated circuit at the programmable thermal sensor 100. The temperature across a microprocessor die can vary as much as 8[deg.] F. In a preferred embodiment of the present invention, the programmable thermal sensor 100 is located in the middle of the die of microprocessor 700 so as to provide the best thermal sensing. However, placement of the programmable thermal sensor in the middle of the die increases noise in the microprocessor. In an alternative embodiment, several thermal sensors are placed across the microprocessor die. 
In this configuration, each thermal sensor provides an interrupt when attaining the threshold temperature, and an average temperature is calculated based on the several thermal sensors.In addition to the programmable thermal sensor 100 and processor unit 705, a microprocessor 700 contains an internal register 735, a read only memory (ROM) 730, and a phase lock loop (PLL) circuit 720. External to the microprocessor 700 is an external clock 710. The external clock 710 provides a clock signal to the PLL circuit 720. The PLL circuit 720 permits fine tuning and variable frequency adjustment of the input clock signal. Specifically, the PLL circuit 720 receives a value, and increases or decreases the frequency based on the value received. The PLL circuit 720 is intended to represent a broad category of frequency adjustment circuits, which are well known in the art and will not be described further. The output of the PLL circuit 720 is the microprocessor system clock, and is input to the processor unit 705.The programmable thermal sensor 100 is coupled to the ROM 730 and internal register 735. The ROM 730 contains a microprogram consisting of a plurality of microcode instructions. The operation of the microprogram within the microprocessor 700 is described more fully below. In general, the microprogram 740 writes values representing the threshold temperature in the internal register 735. The internal register 735 stores the threshold temperature values and is coupled to the programmable Vbe circuit 110. For example, in a preferred embodiment of the present invention, the Vp1, Vp2 and Vp3 voltage values stored in the internal register 735 are used to program the programmable Vbe circuit 110 in the manner as described above. However, the present invention is not limited to three input voltage values in that any number of values may be stored in the internal register 735 to program any number of threshold temperatures. When the microprocessor 700 attains the threshold temperature, the programmable threshold sensor generates a comparator signal via sense amplifier 160 as described above. The comparison signal is labeled as "interrupt" on FIG. 7. The interrupt is input to the ROM 730 and the processor unit 705.In response to the interrupt, the microprogram 740 generates new values representing a new threshold temperature. The microprogram writes the new values to the internal register 735. For example, if the programmable thermal sensor generates an interrupt based on a threshold temperature of 100[deg.] F., then the microprogram may write values to the internal register 735 to represent a threshold temperature of 110 F. In the first embodiment, the processor unit 705 receives the interrupt signal as a standard hardware interrupt input. In response to the interrupt, the processor unit 705 generates a clock control value for the PLL circuit 720. The clock signal value reduces the microprocessor system clock frequency.If the interrupt is again generated in response to the microprocessor 700 attaining the new threshold temperature value, the microprogram 740 writes a new temperature threshold value to the internal register 735, and the processor unit 705 further reduces the microprocessor system clock frequency. In addition, the processor unit 705 may set a standard timer circuit such that if a pre-determined amount of time elapses, then the processor unit 705 increases the clock frequency. 
Increasing the clock frequency permits the processor unit 705 to increase performance when the temperature of the microprocessor has decreased. In addition, to detect further decreases in the microprocessor temperature, the microprogram 740 may lower the threshold temperature and the processor unit may further increase the clock frequency. Therefore, the programmable thermal sensor of the present invention is utilized to control the temperature by increasing and decreasing the microprocessor clock frequency.Referring to FIG. 8, a flow diagram for a method of controlling the programmable thermal sensor configured in accordance with the present invention is illustrated. The method illustrated in the flow chart of FIG. 8 may be a microprogram such as microprogram 740 stored in ROM 730. Upon initialization of the microprocessor, a first threshold temperature is programmed into the programmable thermal sensor as shown in step 800. Although the present invention is described in conjunction with a microprocessor integrated circuit, one skilled in the art will appreciate that the thermal sensor of the present invention may be incorporated into any integrated circuit. The temperature of the integrated circuit is sensed as shown in step 810. The sensing of the integrated circuit may be performed by the programmable thermal sensor 110 of the present invention. The integrated circuit sensor determines whether the temperature of the integrated circuit equals the first threshold temperature. If the integrated circuit temperature is equal to or greater than the threshold temperature, then the threshold temperature is compared to a critical temperature as shown in step 830.The critical temperature is defined as the maximum temperature that the integrated circuit may attain before the integrated circuit is physically damaged. If the threshold temperature is equal to the critical temperature, then the integrated circuit is shut down as shown in step 860. Alternatively, if the threshold temperature is less than the critical temperature, then steps are taken to reduce the temperature in the integrated circuit as shown in step 840. For example, in a microprocessor integrated circuit, the microprocessor system clock frequency is reduced. In addition to reducing the system clock frequency, a message to a system user reporting the temperature of the integrated circuit is generated. By informing the user with the temperature information, the user may take steps external to the integrated circuit to facilitate cooling. Next, a new threshold temperature is programmed in the thermal sensor as shown in step 850. The process continues wherein the thermal sensor senses the integrated circuit temperature to detect if the integrated circuit temperature reaches the new threshold temperature, and based on the threshold temperature set, either shuts down the power to the integrated circuit or executes steps to reduce the temperature.Referring to FIG. 9, a block diagram of a programmable thermal sensor system configured in accordance with a second embodiment of the present invention is illustrated. A microprocessor 900 comprises, in part, a programmable thermal sensor 110 and a processor unit 905. The programmable thermal sensor 110 is configured as described above. The programmable thermal sensor 110 is connected to a ROM 910 and an internal register 920. The programmable thermal sensor 110 is also coupled to external sensor logic 940. 
The external sensor logic 940 is coupled to a counter 950 and an active cooling device 955. An external clock 945 is input to a counter 950, and the output of the counter 950 is input to a clock circuit 930. The clock circuit 930 buffers the input clock frequency to generate the microprocessor clock for the processor unit 905. In operation, a microprogram 915, stored in ROM 910, sets the internal register 920 to an initial threshold temperature value. If the temperature of the microprocessor 900 rises to the threshold temperature, an interrupt signal is generated to the external sensor logic 940.Upon receipt of the interrupt to the external sensor logic 940, the external sensor logic 940 programs a value to the counter 950, and activates the active cooling device 955. The active cooling device 955 may comprise a fan or other heat dissipating device. To activate the active cooling device 955, the external sensor logic 940 generates a signal to turn on the active cooling device 955 by any number of well known methods. The counter 950 is configured as a frequency divider such that a clock frequency, from the external clock 945, is input. The counter 950 generates a new clock frequency based on the counter value. The programming of a counter, such as counter 950, for use as a frequency divider is well known in the art and will not be described further. As one skilled in the art will recognize, the amount in which the clock frequency may be reduced is a function of the counter selected. The slower clock frequency is input to the clock circuit 930. The clock circuit 930 may perform a variety of functions such as buffering, clock distribution, and phase tuning. The system clock comprises a reduced frequency to facilitate the cooling of the device. In addition to triggering the external sensor logic 940, the programmable thermal sensor also interrupts the microprogram 915. Upon receiving the interrupt, the microprogram 915 programs the internal register 920 to sense a new threshold temperature. If the microprocessor 900 heats up to the new threshold temperature, the external sensor logic 940 is again triggered, and the system clock frequency is further reduced. The configuration illustrated in FIG. 9 provides closed loop control of the microprocessor system clock frequency, thereby automatically reducing the temperature when overheating occurs.Referring to FIG. 10, a block diagram of a fail safe thermal sensor configured in accordance with the present invention is illustrated. A fail safe thermal sensor 1010 is incorporated into a microprocessor 1000. Although the fail safe thermal sensor 1010 is incorporated into the microprocessor 1000, one skilled in the art will appreciate the fail safe thermal sensor may be incorporated into any integrated circuit. The fail safe thermal sensor 1010 contains a Vbe circuit 1012, a bandgap voltage reference circuit 120, a current source 140, and a sense amplifier 160. The bandgap voltage reference circuit 120, the current source 140 and the sense amplifier 160 operate in accordance with the respective circuits described above. The Vbe reference circuit 1012 is equivalent to the programmable Vbe circuit 110, except that the resistive value ratio is fixed. In the Vbe circuit 1012, the output Vbe voltage is fixed based on resistive values R5, R6, R7, R8 and R9. In a preferred embodiment of the present invention, the resistive values R5, R6, R7, R8 and R9 are fixed to the critical temperature. 
Consequently, the fail safe thermal circuit 1010 generates an interrupt when the temperature of the microprocessor 1000 attains the pre-programmed fixed critical temperature.The output of the fail safe thermal sensor 1010 is connected to stop clock logic 1015. The stop clock logic 1015 is coupled to the microprocessor clock circuit 1020. Upon receipt of the interrupt of the fail safe thermal sensor 1010, the stop clock logic 1015 halts operation of the microprocessor 1000 by inhibiting the microprocessor clock. In addition, the stop clock logic 1015 ensures that the microprocessor 1000 finishes a system cycle completely. The stop clock logic 1015 therefore protects loss of data when an interrupt is generated during a microprocessor clock cycle. A microprocessor clock circuit 1012 may comprise a simple clock oscillator or a more complex and controllable clock generator. The fail safe thermal sensor 1010 prohibits the microprocessor 1000 from attaining a critical temperature, thereby protecting the device without software control.Referring to FIG. 11, a computer system incorporating a microprocessor comprising thermal sensing configured in accordance with the present invention is illustrated. A computer system 1100 contains a central processing unit (CPU) 1105 incorporating the programmable thermal sensor 100 and the fail safe thermal sensor 1010. In a preferred embodiment, the CPU comprises a compatible Intel microprocessor architecture, manufactured by Intel Corporation, the assignee of the present invention. The computer system 1100 also contains memory 1110 and an I/O interface 1120. The I/O interface 1120 is coupled to an output display 1130 and input devices 1140 and 1145. In addition, I/O interface 1120 is coupled to a mass memory device 1160. The CPU 1105, memory 1110, I/O interface 1120, output device 1130, and input devices 1140 and 1145 are those components typically found in a computer system, and, in fact, the computer system 1100 is intended to represent a broad category of data processing devices. The memory 1110 stores software for operation of the computer system 1100. Specifically, memory 1110 stores, in part, an operating system and an interrupt handler routine for operation in conjunction with the thermal sensor.Upon generation of an interrupt in the programmable thermal sensor 100 or the fail safe thermal sensor 1010, the interrupt handler routine 1165 is executed. The calling of an interrupt handler routine upon generation of a hardware interrupt in a microprocessor is well-known in the art and will not be described further. In general, the interrupt handler routine 1165 generates a message to the output display 1130. The message informs the user of the computer system 1100 that the microprocessor 1105 has attained the threshold temperature. In response, a user may alter external environmental conditions to facilitate cooling of the CPU 1105. As described above, the CPU 1105 sets a new threshold temperature for the programmable thermal sensor. If the CPU 1105 temperature rises to the new threshold temperature, another interrupt is generated. Again, the interrupt handler routine 1165 is called to generate a message to the user on output display 1130. 
If the temperature reaches a critical temperature for which the fail safe thermal sensor is programmed, then the fail safe thermal sensor generates an interrupt to shut down the CPU 1105.Although the present invention has been described in terms of a preferred embodiment, it will be appreciated that various modifications and alterations might be made by those skilled in the art without departing from the spirit and scope of the invention. The invention should therefore be measured in terms of the claims which follow. |
The present disclosure includes apparatuses and methods related to accessing status information. One example apparatus comprises a host and a memory device coupled to the host. The memory device includes a controller configured to provide, to a status arbiter, a status signal indicating whether a status register of the controller contains generated status information. Responsive to the status signal indicating that the status register contains the generated status information, the controller can also provide the status information from the controller to the status arbiter via a status intermediary. |
1.An apparatus comprising:Host; anda memory device coupled to the host, wherein the memory device includes a controller configured to:Providing a status signal indicating whether the status register of the controller contains status information has been generated to the state arbitrator; andIn response to the status signal indicating that the status register contains the generated status information, the status information is provided from the controller to the status arbiter via a status medium.2.The device of claim 1, wherein the state medium is configured to request the generated status information from the status register in response to determining that the status signal is authorized.3.A device according to any of claims 1 to 2, wherein the state arbiter is configured to send the status information to the host via an in-band data bus.4.A device according to any of claims 1 to 2, wherein the state arbiter is configured to send the status information to the host via an out-of-band bus.5.The apparatus according to any one of claims 1 to 2, wherein said status signal is transmitted as an interrupt request signal.6.An apparatus comprising:Host; andA memory device comprising:State arbitera plurality of controllers, wherein each of the plurality of controllers is configured to:Providing a status signal indicating whether the corresponding status register of the controller contains generated status information to the status arbiter; andThe generated status information is provided to the status arbiter via a status medium.7.The device of claim 6 wherein said state medium is configured to:Requesting the generated status information from the corresponding status register in response to a status authorization signal received from the status arbiter; andThe status register is updated when the status medium receives the generated status information from the corresponding status register.8.The device of claim 6, wherein the state arbiter is configured to send the generated status information to the host via a data bus.9.The apparatus according to any one of claims 6 to 8, wherein said status signal is transmitted as an interrupt request signal.10.Apparatus according to any of claims 6 to 8, wherein said state medium is configured to provide said generated status information to said state arbiter in a time division multiplexed form.11.Apparatus according to any of claims 6 to 8, wherein said state arbiter is configured to continuously monitor a plurality of states provided from respective ones of said memory devices to said state arbiter signal.12.Apparatus according to any of claims 6 to 8, wherein each of said plurality of controllers includes a sequencer and control logic, said sequencer and said control logic each comprising a configured Respecting a respective status register that has generated status information, and wherein each of the sequencer and the control logic is configured to provide a respective status signal corresponding to the generated status information to the status medium .13.A state channel that includes:a state arbiter configured to continuously monitor status signals received from a plurality of controllers, wherein the status signals indicate whether respective corresponding status registers of the controller contain generated status information to be provided to the host;A status medium configured to provide the generated status information corresponding to the status authorization signal to the status arbiter in response to a status authorization signal received from the 
status arbiter.14.The device of claim 13, wherein the state arbiter is configured to provide the generated state information corresponding to the state authorization signal to the host via an out-of-band bus.15.The apparatus of claim 13 wherein each of said plurality of controllers is coupled to a readout circuit of a respective array of memory cells, said plurality of controllers being configured to control said readout circuitry to perform a store operation And calculation operations.16.The apparatus of claim 15 wherein said readout circuitry of each respective array of memory cells comprises a sense amplifier and a corresponding computational component per column.17.The apparatus of claim 13 wherein each controller is configured to transmit said generated status information corresponding to said status signal in a time division multiplexed form.18.A method of operating a memory, comprising:Providing a status signal indicating whether the controller's status register contains generated status information to the state arbitrator; andIn response to the status signal indicating that the status register contains the generated status information, the status information is provided from the controller to the status arbiter via a status medium.19.The method of claim 18 wherein said state arbiter and said controller are located on a memory device coupled to a host;Wherein the controller is coupled to a readout circuit coupled to the array of memory cells;Wherein the controller is configured to control the readout circuitry to perform a store operation and a compute operation on data stored in the array;Wherein the generated status information includes N-bit status information;Providing the N-bit status information from the controller to the state arbiter via the state medium includes: time division multiplexing the N-bit status information such that the N-bit status information is via Less than N data paths are provided to the state arbitrator.20.The method of claim 18, wherein the state medium is configured to selectively couple one of a plurality of first N-bit data paths associated with respective plurality of status registers to a gateway via a wired OR configuration a second N-bit data path of the state arbiter, and wherein each of the plurality of status registers is configured to store N-bit status information.21.The method of claim 18 wherein said status signal is one of a plurality of status signals corresponding to respective plurality of status registers, and wherein said method includes continuously monitoring a location provided to said state arbitrator A plurality of status signals are described. |
Access status informationTechnical fieldThe present invention relates generally to semiconductor memories and methods, and more particularly to apparatus and methods related to access status information.Background techniqueThe memory device is typically disposed in a computer or other electronic system as an internal semiconductor integrated circuit. There are many different types of memory, including volatile memory and non-volatile memory. Volatile memory may require power to maintain its data (eg, host data, erroneous data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), Synchronous Dynamic Random Access Memory (SDRAM) and Thyristor Random Access Memory (TRAM). The non-volatile memory can provide permanent data by retaining stored data when not being powered, and can include NAND flash memory, NOR flash memory, and resistance variable memory (eg, phase change random access memory (PCRAM) )) Resistive Random Access Memory (RRAM) and Magnetoresistive Random Access Memory (MRAM) (for example, from Rotating Moment Transfer Random Access Memory (STT RAM)) and the like.An electronic system typically includes a number of processing resources (eg, one or more processors) that can retrieve and execute instructions and store the results of the executed instructions to a suitable location. A processor may include a number of functional units, such as an arithmetic logic unit (ALU) circuit, a floating point unit (FPU) circuit, and a combinational logic block, for example, the functional unit may be used to pass data (eg, one or more operands) Executing an operation (for example, a calculation operation) to execute an instruction. As used herein, computing operations may be, for example, Boolean operations, such as AND, OR, NOT, NOT, NAND, NOR, and XOR, and/or other operations that may involve manipulating data (eg, Invert, shift, arithmetic, statistics, and many other possible operations). For example, a functional unit circuit can be used to perform arithmetic operations on operands, such as addition, subtraction, multiplication, and/or division, via a number of logical operations. For example, the computing operations described above may be distinguished from "storage operations," as used herein, may refer to operations that do not involve data manipulation (eg, via functional units typically associated with processing resources). Examples of storage operations include data read operations, data write operations, and data refresh operations.When an instruction is provided to a functional unit circuit for execution, several components in an electronic system may be involved. The instructions may be executed, for example, by processing resources, such as a controller and/or a host processor. Data (eg, the operands on which instructions are to be executed) may be stored in a memory array accessible by the functional unit circuitry. The instructions and/or data may be retrieved from the memory array and sequenced and buffered before the functional unit circuit begins executing instructions on the data. 
Furthermore, since different types of operations can be performed by functional unit circuits in one or more clock cycles, intermediate results of instructions and/or data can also be sequenced and buffered.In many examples, processing resources (eg, processors and/or associated functional unit circuits) can be external to the memory array and access data via a bus bar located between the processing resource and the memory array to execute a set of instructions. The processing performance of the processing performed in the memory device can be enhanced, and the processor can be implemented within the memory and/or implemented close to the memory (eg, directly on the same chip as the memory array). Processing in the memory device can save time and/or reduce power consumption by reducing or eliminating the amount of data transfer via the bus associated with performing computational operations, for example.In various examples, it may be useful for a host (eg, a host processor) to access state information from a memory device. For example, such status information may relate to process control, debug, exceptions, and/or errors, as well as various other status information associated with operations performed by the memory device.DRAWINGS1A is a block diagram of an apparatus in the form of a computing system including a memory device, in accordance with several embodiments of the present invention.FIG. 1B is a detailed block diagram of an example of the controller shown in FIG. 1A in accordance with several embodiments of the present invention.Figure 2A illustrates a portion of a state channel.2B is a block diagram illustrating a portion of a state channel in accordance with several embodiments of the present invention.3 is a block diagram illustrating additional details of a portion of the state channel shown in FIG. 2B.4 is a schematic diagram illustrating a readout circuit in accordance with several embodiments of the present invention.FIG. 5 is a schematic diagram illustrating a readout circuit in accordance with several embodiments of the present invention.6 is a logic table illustrating selectable logical operation results in accordance with several embodiments of the present invention, which may be implemented by the readout circuitry shown in FIG.Detailed waysThe present invention includes apparatus and methods related to access status information. The present invention includes apparatus and methods related to access status information. An example device includes a host and a memory device coupled to the host. The memory device includes a controller configured to provide a status signal indicating whether the controller's status register contains generated status information to the state arbiter. In response to the status signal indicating that the status register contains generated status information, the controller may also provide status information from the controller to the status arbiter via the status medium.Embodiments of the invention may include state channels that have various benefits as compared to prior methods. For example, several embodiments may include reduced logic (eg, fewer logical components and/or simplified logic), more efficient routing (eg, routing via fewer data paths) than previous approaches. And/or the latency associated with providing status information from the controller of the memory device to the host is reduced. 
For example, with respect to reduced logic, embodiments of the present invention can transfer status information generated by the controller from the controller's local status register to the host without re-storing the status information in a separate set of state aggregators. In the register (for example, status FIFO (first in, first out)), for example.Moreover, in terms of more efficient routing, in various examples, a memory device can include multiple memory arrays, each memory array having a corresponding controller to perform operations on the array (eg, memory operations and/or Calculation operation, etc.). The controllers can each have a number of status registers that are configured to store status information (eg, status messages) that can include multiple bits (eg, 64, 128, etc.). For example, consider that each of the eight controllers has two 128-bit wide status registers. In this example, each 128-bit wide status register may require 128 data paths from the register to the state aggregator and/or to the host. Therefore, in this example, 2K data paths (128/register x 16 registers) would be required to provide status information from the corresponding registers to the host. As further described herein, several embodiments of the present invention may provide generated status information via fewer data paths than prior methods, such as in the examples described above. For example, several embodiments include time division multiplexing of state information provided from respective status registers, which can reduce data for the data path and have other benefits.In terms of reducing latency associated with transmitting status information, several embodiments of the present invention continuously poll for status request signals provided from respective status registers. This continuous polling improves latency compared to methods in which components external to the controller (eg, state aggregator, host, etc.) can intermittently request status information from a particular register.BRIEF DESCRIPTION OF THE DRAWINGS In the following detailed description of the invention, reference to the drawing The embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments of the invention, and it is understood that other embodiments can be utilized and can be modified and changed without departing from the scope of the invention. And / or structural changes. As used herein, a designation such as "N", particularly with respect to a reference number in the drawings, may include a number of specific features so specified. As used herein, a number of specific things may refer to one or more of such things (eg, several memory arrays may refer to one or more memory arrays). "Multiple" specific things are intended to refer to more than one such thing.The numbering conventions used in the figures herein are that the first number or the first few numbers correspond to the figure numbers of the drawings, and the remaining numbers identify the elements or components in the drawings. Similar numbers may be used to identify similar elements or components between different figures. For example, 130 may refer to element "30" in Figure 1, and similar elements may be referred to as 430 in Figure 4. It will be appreciated that elements shown in the various embodiments herein can be added, exchanged, and/or eliminated to provide several additional embodiments of the invention. 
In addition, it will be appreciated that the proportions and relative scales of the elements provided in the figures are intended to illustrate particular embodiments of the invention and are not to be considered as limiting.1 is a block diagram of an apparatus in the form of a computing system 100 that includes a memory device 120, in accordance with several embodiments of the present invention. Memory device 120, controller 140, memory array 130, readout circuitry 150, logic circuitry 170, and/or status register 134, as used herein, may also be considered separately as "devices."System 100 includes a host 110 coupled to a memory device 120 that includes a memory array 130. Host 110 can be a host system such as a personal laptop, desktop, digital camera, smart phone or memory reader, as well as various other types of hosts. Host 110 can include a system motherboard and/or backplane and can include several processing resources (eg, one or more processors, microprocessors, etc.).System 100 can include a separate integrated circuit, or both host 110 and memory device 120 can be located on the same integrated circuit. System 100 can be, for example, a server system and/or a high performance computing (HPC) system and/or a portion thereof. Although the example shown in FIG. 1 illustrates a system having a Von Neumann architecture, embodiments of the invention may be implemented in a non-von Neumann architecture, a non-von Neumann architecture. One or more components (eg, CPU, ALU, etc.) that are typically associated with a von Neumann architecture may not be included.For the sake of clarity, system 100 has been simplified to focus on features that are specifically related to the present invention. For example, memory array 130 can be a DRAM array, an SRAM array, an STT RAM array, a PCRAM array, a TRAM array, an RRAM array, a NAND flash array, and/or a NOR flash array. Array 130 can include memory cells configured as rows and columns, the rows being coupled by access lines (which may be referred to herein as word lines and/or select lines), the columns being by readout lines (which are This may be referred to herein as a digital line or digital line) coupling. Although a single array 130 is shown in FIG. 1, embodiments are not limited thereto. For example, memory device 120 can include a number of arrays 130 (eg, a number of DRAM cell banks, NAND flash cells, etc.). Additionally, although not shown, multiple memory devices 120 can be coupled to host 110 via respective plurality of memory channels.The memory device 120 includes an address circuit 111 for latching address signals provided via the bus 156 through the I/O circuit 173. The bus 156 can be used as a data bus (for example, an I/O bus) and an address bus; however, the embodiment is not limited thereto. The address signal can be received by address circuit 111 and decoded by row decoder 184 and column decoder 185 to access memory array 130. Status information that may include exception information may be provided from controller 140 on memory device 120 to host 110 via a status channel containing a high speed interface (HSI), which may include out-of-band bus 157. An out-of-band bus can refer to a bus that is separate from a data (eg, DQ) bus. Data can be read from memory array 130 by sensing voltage changes and/or current changes on the digital lines using readout circuitry 150. Readout circuitry 150 can read and latch a page (eg, a row) of data from memory array 130. 
I/O circuitry 173 can be used to communicate bi-directionally with host 110 via bus 156. Write circuit 135 can be used to write data to memory array 130.Controller 140 decodes the signals provided by host bus 110 from control bus 154. These signals may include chip enable signals, write enable signals, and address latch signals to control operations performed on memory array 130, including data read, data write, and data erase operations. In various embodiments, controller 140 is responsible for executing instructions from host 110 and sequentially accessing array 130 and other functions. For example, executing instructions from host 110 may include performing computational operations using processing resources corresponding to readout circuitry 150 and/or logic 170, as further described herein. Controller 140 may include a state machine (eg, firmware and/or hardware in the form of an application specific integrated circuit (ASIC)), a sequencer, control logic, and/or some other type of control circuit. In the example shown in FIG. 1A, controller 140 includes a register 134 (eg, a status register) that can store status information in accordance with several embodiments described herein. A more detailed description of one example of controller 140 is depicted with respect to FIG. 1B.As further described below, in several embodiments, readout circuitry 150 can include a number of sense amplifiers and a number of computational components that can be used and can be referred to as accumulators and can be used to perform Computational operations (eg, performing logical operations on data associated with complementary readout lines). In several instances, a storage location (eg, a latch) corresponding to a computing component can be used as a stage of a shift register. For example, a clock signal can be applied to a computing component to shift data from one computing component to a neighboring computing component.In several embodiments, readout circuitry 150 can be used to perform logical operations using data stored in array 130 as input and store the results of the logical operations back to array 130 without the need to transfer data via read line address access. (For example, there is no need to fire the column decode signal). As such, various computing operations may be performed using readout circuitry 150 and within readout circuitry 150 rather than by processing resources external to the readout circuitry (eg, by a processor and/or other processing circuitry associated with host 110, For example, an ALU circuit located on device 120 (e.g., located on controller 140 or elsewhere) is executed (or associated with execution of processing resources).In various prior methods, for example, data associated with an operand will be read from memory via a readout circuitry and passed through an I/O line (eg, via a local I/O line and/or global area) The I/O line) is provided to the external ALU circuit. The external ALU circuit can contain several registers and the operands will be used to perform the computational operations and the results will be transferred back to the array via the I/O lines. In contrast, in several embodiments of the invention, readout circuitry 150 is configured to perform logic operations on data stored in memory array 130 and to disable I/O lines coupled to readout circuitry 150. The result is stored back to the memory array in the case of (for example, a local I/O line).In several examples, readout circuitry 150 can be formed on the pitch of the memory cells of the array. 
For example, a unit of a memory array can have a particular unit size, such as 4F2 or 6F2, where F is the feature size corresponding to the unit. As further described below, in several examples, sensing components corresponding to readout circuitry 150 (eg, respective sense amplifiers and pairs of computational components) are formed at the same pitch as the sense lines of the array and can be operated To perform various calculations. For example, if the sense line spacing is 3F, the transistors of the sensing component can be mated within the same 3F pitch. In contrast, devices (eg, logic gates) associated with ALU circuits of various previous in-memory processor (PIM) systems may not be formed at the pitch of memory cells, as compared to several embodiments of the present invention. This can increase the chip size, for example. Additional logic circuit 170 may be coupled to readout circuitry 150 and may be used to store (eg, a cache and/or buffer) the results of operations described herein.As such, in several embodiments, circuitry external to array 130 and readout circuitry 150 is not required to perform computational operations because readout circuitry 150 can be operated to perform various computational operations (eg, related to mathematical operations) Linked logical operations) without the need to use external processing resources. In several examples, readout circuitry 150 can be used as a number of 1-bit processing resources, with sensing components coupled to respective columns of array 130 for use as respective 1-bit processing elements. Thus, readout circuitry 150 can be used to supplement and/or replace external processing resources, such as the host's ALU circuitry, at least to some extent.Enabling an I/O line can include enabling (eg, turning on) a gate having one of a source coupled to a decoded signal (eg, a row of decoded signals) and one of a source/drain coupled to one of the I/O lines. However, where the column decode lines of the array are not enabled, embodiments are not limited to performing logic operations using readout circuitry (e.g., 150). Whether or not the local I/O line is used in association with performing a logical operation via the readout circuit 150, the local I/O line can be enabled to transfer the result to a suitable location in addition to being transmitted back to the array 130 (eg, Go to an external register, such as status register 134).FIG. 1B is a detailed block diagram of an example of the controller 140 shown in FIG. 1A in accordance with several embodiments of the present invention. In the example shown in FIG. 1B, controller 140 is shown to include control logic 131, sequencer 132, and timing circuitry 133. Control logic 131 and sequencer 132 can include status registers 134-1 and 134-2, respectively.Although not shown in FIG. 1B, control logic 131 may include several components (eg, program counters, registers, ALUs, branch logic, state machines, etc.) configured to control the extraction and execution of instructions. For example, microcode instructions can be fetched from a memory array (eg, 130) and/or from a host (eg, 110) and can be stored in a cache (eg, controller) for execution. In several examples, control logic 131 may decode the microcode instructions for execution by sequencer 132. Sequencer 132 may also include several components (e.g., a number of FIFO buffers, program counter logic, branch logic, registers, microcode instruction cache, ALU, state machine, etc.) configured to execute microcode instructions. 
The timing circuitry 133 can provide timing to coordinate the execution of operations (eg, storage operations and/or computational operations) and is responsible for providing collision free access to an array (eg, array 130 in FIG. 1A).In the example shown in FIG. 1B, control logic 131 and sequencer 132 include respective status registers 134-1 and 134-2. Status registers 134-1 and/or 134-2 may store generated status information. As an example, the status information associated with register 134-1 may include status information related to program instructions, such as program counter status information, breakpoints, illegal instructions, etc., as well as various other exceptions. The status information associated with register 134-2 may include status information related to the error status detected in the microcode instruction, the invalid circuit state, and the like. The status information may also include control flow information and debug information as well as other status information. The generated status information may be provided (e.g., reported) to a host (e.g., host 110) via a status channel, such as described herein. For example, status information within status registers 134-1 and 134-2 may be routed to the host via a state arbiter (eg, state arbiter 246 shown in FIG. 2B). In several instances, status signals corresponding to respective status registers 134-1 and 134-2 are provided to the state arbitrator. As described further below, the status signals are continuously monitored (e.g., by a state arbiter) to determine if the respective registers contain the generated status information to be reported.Figure 2A illustrates a portion of a state channel. The example shown in FIG. 2A includes a state aggregator 271. State aggregator 271 can be used as an arbiter component that can be configured to perform various functions, such as coordinating commands executed on multiple memory banks and providing status information from respective multiple controllers corresponding to the library. As used herein, a library can include a controller (eg, 140), a corresponding array of memory cells (eg, 130), and various associated circuits for performing operations on the array. As an example, although the memory device 120 illustrated in FIG. 1A illustrates a single controller 140 and array 130 (eg, a single library), the memory device can include four, eight, or sixteen banks.For example, in the example shown in FIG. 2A, state aggregator 271 is coupled to eight banks that include respective controllers (eg, controller 0 to controller 7). The controller can be a controller such as controller 140 shown in Figure IB. As such, in this example, each of the eight controllers includes two status registers (eg, 134-1 and 134-2 shown in FIG. 1B) that are configured to store with the corresponding library Corresponding status information. For example, each controller includes a control logic status register and a sequencer status register. Status information may be provided from the status register to the state aggregator 271 via respective data paths (e.g., 274-0, 274-1, ..., 274-12, 274-15). Data paths 274-0 through 274-15 may be "N" bit data paths, where N may represent the width of the corresponding status register. For example, if the status register is a 128-bit register, then the data paths can each be a 128-bit data path (eg, 16 separate 128-bit buses). Aggregator 271 can receive status information from the plurality of status registers via a FIFO interface. 
For example, each controller may each have a pair of FIFOs that enter state aggregator 271 for pushing state information. The state aggregator 271 can retrieve status information from each of the FIFOs and can push the status information back to the host (e.g., push back to the host 110 via the out-of-band bus 156). It may be desirable to provide status information to the host via the out-of-band bus to prevent and/or reduce memory bandwidth on the data bus (eg, DQ).In the example shown in FIG. 2A, state aggregator 271 includes a status FIFO that includes a plurality of registers 272-0 (state FIFO 0 bank 0), 272-1 (state FIFO 1 bank 0), ..., 272-14 (state FIFO0 bank 7), 272-15 (state FIFO 1 bank 7) to temporarily store state information retrieved from the controller's status register.The example state channel described with respect to FIG. 2A can provide an efficient way to report status information from the corresponding library to the host. However, this state channel can have defects. For example, setting a set of registers (eg, 272-0 to 272-15) on an arbiter component (eg, state aggregator 271) may increase the amount of logic associated with the system. As an example, for a 128-bit wide status message, a 2K latch (128 bits x 16) on state aggregator 271 would be required to store 16 status messages corresponding to 16 respective status registers. Moreover, providing a separate N-bit data path (e.g., 274-0 to 274-15) for each individual N-bit status register can result in increased signal routing complexity (e.g., with respect to the embodiment shown in Figure 2B) Describe the routing complexity compared to). Moreover, the state channel shown in FIG. 2A can rely on the state aggregator 271 to cyclically (eg, periodically) monitor (eg, poll) the controller's status register to determine if the corresponding status register contains a report to Status information generated by the host (eg, error information, exception information, etc.). Polling the status register in this manner can result in increased latency, for example, compared to a status channel in which the status register is continuously monitored.2B is a block diagram illustrating a portion of a state channel in accordance with several embodiments of the present invention. Similar to the example shown in FIG. 2A, the example embodiment shown in FIG. 2B includes a state arbiter 246. Similar to state aggregator 271, state arbiter 246 can function as an arbiter component that can be configured to perform various functions, such as coordinating commands executed on multiple memory banks and from a plurality of controllers (eg, 240) The status information is provided to correspond to the library.The state channel shown in FIG. 2B includes a number of registers 234 that are local to the controller 240 (eg, resident on controller 240). Controller 240 can be a controller such as controller 140 as described in Figures 1A and 1B. In several instances, controller 240 includes a state medium 248 that is configured to provide generated status information from register 234 to state arbiter 246. In this example, controller 240 represents a plurality of controllers coupled to state arbiter 246, and each controller can have a respective state medium 248 associated therewith. However, only one controller is shown in Figure 2B.Unlike the state channel in FIG. 
2A, state arbiter 246 does not include a plurality of registers (eg, 272-0 through 272-15) that include state information to temporarily store state registers 234 retrieved from controller 240. Status FIFO. In contrast, state arbiter 246 includes state control component 292 that is configured to provide state information received from state register 234 (eg, via state medium 248) to the host (eg, in FIG. 1A) Host 110 shown). For example, state control component 292 can be a state machine (eg, such as state machine 392 shown in FIG. 3) and/or some other type of control circuit. Providing a state arbiter 246 that does not include a separate status FIFO register can reduce the size and/or circuit complexity of the system, as well as the inclusion and other benefits. More details regarding the state arbiter 246 and its operation associated with reporting the generated status information from the status register to the host are described in detail below with respect to FIG.In the example shown in FIG. 2B, status request signal 209-1 corresponding to each of respective status registers 234 is provided from controller 240 to state arbiter 246. As an example, if the memory device contains 16 status registers, then signal 209-1 may represent 16 status request signals provided to status arbiter 246. Status request signal 209-1 may be a flag that may be "set" to indicate that the corresponding register contains the generated status information that will be provided to the host. In several instances, once the status flag is set, it remains set until served (eg, until the corresponding status information has been successfully provided to the host). In the embodiment shown in FIG. 2B, status request signal 209-1 is continuously monitored (eg, via continuous polling via state control component 292), and as described, for example, in FIG. 2A, may involve passively monitoring status This can reduce the latency associated with servicing a status request compared to an instance of a register (eg, via periodic polling).State control component 292 can be configured to monitor status request signal 209-1 and determine the order in which the status request is serviced (e.g., in an event in which multiple status registers contain status information that will be reported). For example, in the embodiment illustrated in FIG. 2B, in response to status request signal 209-1 indicating that corresponding status register 234 contains generated status information, status control component 292 can provide authorization signal 209-2 to the status medium. 248, thereby indicating which status register 234 is authorized to provide its status information. In response to the authorization signal 209-2, the status medium can provide the signal 209-3 to a control component corresponding to the selected status register 234 indicating that the generated status information can be provided to the status medium 248 (eg, as indicated by arrow 290) .In several instances, state medium 248 is configured to time-multiplex multiplex N-bit status information (eg, messages) received from status register 234 such that in less than N data paths, multiple data transfers will occur The status information is provided to a state arbiter 246. For example, in the example shown in FIG. 2B, N-bit status information is provided via an "N/D" data path (eg, 290-1, 290-2, ..., 290-(N/D)) to State arbiter 246, where "N/D" is a positive integer less than "N". 
As an example, for a 128-bit status message (eg, N=128), N/D may be 8 (eg, D=16) or N/D may be 16 (eg, D=8). In several instances, each respective status register 234 can be associated with a different N/D data path (eg, 290-1 to 290-(N/D)). For example, if there are 16 status registers 234, each status register 234 is configured to store a 128-bit status message (eg, N=128) and D is 16 such that N/D is 8, then each status register 234 can Associated with eight data paths between state medium 248 and state arbiter 246. In this particular example, providing all 128 bits of a particular status message via time division multiplexing via an 8-bit data path would involve 16 data transfers via an 8-bit data path (eg, 128 bits/8-bit data path = Transfer 8 different bits 16 times). Embodiments are not limited to a particular number of data paths between state medium 248 and state arbiter 246.Performing time division multiplexing in accordance with several embodiments of the present invention can provide various benefits. For example, time division multiplexing a status message as described above can significantly reduce the number of data paths between a controller (e.g., 240) and a state arbiter (e.g., 246). For example, for 16 128-bit wide status registers, providing an 8-bit data path/register (eg, N/D=8) instead of a 128-bit data path would result in a number of routes from 2K (eg, 128x16) Reduce to 128 (for example, 8x16).In several instances and as shown in FIG. 2B, state medium 248 can be local to controller 240. For example, if the memory device includes eight controllers each including two status registers (eg, 16 registers in the memory device) (eg, controller 140 shown in FIG. 1B), the memory device can also include Eight state media corresponding to a pair of corresponding status registers and local to the respective controller 240. However, the embodiment is not limited to this. For example, state medium 248 can be external to controller 240 and still configured to individually correspond to a particular controller's status register.3 is a block diagram illustrating additional details of a portion of the state channel shown in FIG. 2B. State arbiter 346, state media 348, and status register 334-1/334-2 may be similar to corresponding state arbiter 246, state media 248, and status register 234 shown in FIG. 2B. The example shown in FIG. 3 includes 16 status registers 334-1/334-2 (eg, status registers 134-1 and 134-2) corresponding to control components 331/332, which may be associated with The control components 131 (e.g., control logic) and 132 (e.g., SEQUENCER) shown in Figure IB are similar. Embodiments are not limited to a particular number of status registers 334-1/334-2 and/or control components 331/332.As an example, the 16 status registers 334-1/334-2 may correspond to 8 controllers (eg, two registers/controllers 140, as shown in FIG. 1B). Although a single state medium 348 is shown, there may be multiple state media 348 corresponding to respective controllers (eg, controllers including control components 331/332).As shown in FIG. 3, state arbiter 346 includes state control component 392 (eg, state MACHINE), which may be similar to the state control component depicted in FIG. 2B. As described above, in operation, status request signal 309-1 corresponding to respective status register 334-1/334-2 is actively monitored (e.g., continuously polled) by state machine 392. In this example, there are 16 status request signals 309-1 corresponding to the 16 status registers 334-1/334-2. 
Control component 331/332 can generate status information stored in corresponding status register 334-1/334-2. Status request signal 309-1 is configured to provide an indication to state state machine 392 when corresponding status register 334-1/334-2 contains generated status information. The status request signal 309-1 may be a flag or interrupt signal indicating that the corresponding status register contains status information to be reported, as well as other types of signals.The state arbiter 346 can include a state selector 339 that can be configured to control the timing when the state signal 309-1 is serviced. For example, status selector 339 can select among a number of received status request signals 309-1 (eg, in an event where multiple status request signals are simultaneously "set"). Status selector 339 can provide status authorization signal 309-2 to status medium 348 to indicate from which status register 334-1/334-2 the status information is provided. In response to receiving the status authorization signal 309-2, the status medium 348 can provide the signal 309-3 to the corresponding control component 331/332 to transmit an indication that the corresponding generated status information (eg, sent to the status medium 348). In response to the controller receiving signal 309-3, the status information of the selected status register is sent to the corresponding status medium (eg, as indicated by arrow 312-1). For purposes of discussion, it will be assumed that the status register 334-1/334-2 is a 128-bit wide register (eg, to store a 128-bit wide status message); however, embodiments are not limited thereto.State medium 348 is configured to provide (e.g., transmit) selected state information to state arbiter 346 as indicated by arrow 312-2. As described above, status medium 348 can be configured to perform time division multiplexing on status information received from selected status registers 334-1/334-2. As an example, for a 128-bit status message, the status medium can be configured to perform 16 separate 8-bit data transfers to provide the selected status message to the state arbiter 346. In this example, arrow 312-2 may represent a plurality of 8-bit wide buses (eg, having an 8-bit wide bus for each of the 16 128-bit wide status registers). Multiplexer 397 can be used to select a particular one of the 16 8-bit wide buses (e.g., select a bus corresponding to the selected status register) for input to state machine 392 (e.g., as shown by arrow 312-3). . The data output from state machine 392 can be encoded (e.g., via encoder 394) as shown by arrow 312-4. The encoding may be, for example, 8b/10b encoding suitable for DC balancing and/or clock recovery, as well as various other encodings. As shown by arrow 312-5, the encoded data may be output from encoder 394 (e.g., in parallel) and serially transmitted via data serializer 396. Signal 312-6 may represent serial transmission data output from state arbiter 346 and provided to the host (eg, via an out-of-band bus).Status medium 348 can also send signal 321 to corresponding control component 331/332 to update selected status register 334-1/334-2. As an example, signal 321 can be a "pop" signal or a signal for clearing a selected status register, which can also cause a status request signal 309-1 corresponding to selected status register 334-1/334-2 to be reset. .Alternatively, arrow 312-2 may represent a "wired ORed" bus. 
For example, arrow 312-1 can represent 16 128-bit buses (corresponding to 16 corresponding 128-bit registers 334-1/334-2). In this example, bus 312-2 may be a 128-bit bus that is selectively driven by a particular status register 334-1/334-2. For example, a "wired OR configuration" can be used (eg, via state medium 348) to select the one of the 16 128-bit buses corresponding to arrow 312-1 to drive the bus corresponding to arrow 312-2. As such, the 128 bits of the selected status message will be provided to the state arbiter 346 in parallel and the multiplexer 397 will not be needed. In this example, state select 339 can be used to select which of status registers 334-1/334-2 is allowed to drive bus 312-2.In the example illustrated in FIG. 3, clock signal 322 (CLK) is provided to state arbiter 346. Clock signal 322 can be, for example, a DDR interface clock as well as other types of clock signals. The example of FIG. 3 includes a clock modification component 393 (STATUS CLKDIV5) that receives the clock signal 322 and outputs the modified clock signal 323. The modified clock signal 323 can have a frequency that is a particular portion of the clock signal 322 (eg, 1/2, 1/4, 1/5, etc.), for example. The amount of modification associated with the modified clock signal 323 can be based on various factors. For example, in several instances, state medium 348 can be configured such that every five clock cycles of clock signal 322 can only output 8 bits. Therefore, it may be beneficial to provide the state medium 348 with a clock signal 323 having a clock cycle time that is five times longer than the clock signal 322. Clock signal 322 and modified clock signal 323 can be provided to various components associated with the state channels shown in FIG.In several instances, generated status information may be provided from a status register (eg, 334-1/334-2) to a host via an in-band bus (eg, data bus 156 shown in FIG. 1A). For example, an alert signal corresponding to 16 status registers can be provided to the host (eg, via an alert pin). For example, the alert signal can be an OR in the 16 status request signals 309-1. In this embodiment, the alert signal will act in response to any of the status request signals 309-1 being set. In response to the alarm signal acting, the host can poll the status register 334-1/334-2 to determine which status register contains status information to be reported. After reading the particular status register 334-1/334-2, the read status register can be updated (eg, cleared) and the active (eg, reset) alarm signal can be deactivated, assuming that the status register does not contain status information to be reported. . As an example, in-band access to status register 334-1/334-2 can be accomplished via a DMA (Direct Memory Access) read command. In-band access (eg, access via an in-band data bus) may provide access to state information by, for example, a host that does not support out-of-band access (eg, via additional pins). Additionally, in several instances, a dedicated alert pin available for an alert signal may not be required in connection with in-band access. For example, the host can be configured to periodically poll the status register 334-1/334-2 via an alert signal instead of continuously polling.4 is a schematic diagram illustrating a readout circuit in accordance with several embodiments of the present invention. Readout circuitry 450 may correspond to readout circuitry 150 shown in FIG. In the example shown in FIG. 
4, a memory cell includes a storage element (eg, a capacitor) and an access device (eg, a transistor). For example, the first memory cell includes transistor 402-1 and capacitor 403-1, and the second memory cell can include transistor 402-2, capacitor 403-2, and the like. In this embodiment, memory array 430 is a DRAM array of 1T1C (single transistor single capacitor) memory cells, although other cell configurations can be used (eg, 2T2C with two transistors and two capacitors per memory cell). In several embodiments, the memory unit can be a destructive read memory unit (eg, reading data stored in the unit can corrupt the data such that data originally stored in the unit is refreshed after being read).The cells of memory array 430 can be arranged to be coupled by access (word) lines 404-X (rows X), 404-Y (rows Y), etc., and by pairs of complementary readout lines (eg, as shown in FIG. A column in which the illustrated digital line DIGIT(D) is coupled to DIGIT(D)_ and DIGIT_(n) and DIGIT(n)_) shown in FIG. Individual read lines corresponding to each pair of complementary read lines may also be referred to as a digital line 405-1 for DIGIT (D) and a digital line 405-2 for DIGIT (D)_, respectively. Although only a pair of complementary digit lines are shown in FIG. 4, embodiments of the invention are not limited thereto, and the memory cell array can include additional memory cell columns and/or digit lines (eg, 4,096, 8,192, 16,384, etc.).Although the rows and columns are illustrated as being orthogonal to each other, the embodiments are not limited thereto. For example, rows and columns can be oriented relative to each other in a variety of other two- or three-dimensional configurations.The memory cells can be coupled to different digital lines and/or word lines. For example, a first source/drain region of transistor 402-1 can be coupled to digital line 405-1 (D), and a second source/drain region of transistor 402-1 can be coupled to capacitor 403-1, And the gate of transistor 402-1 can be coupled to word line 404-Y. A first source/drain region of transistor 402-2 can be coupled to digital line 405-2(D)_, a second source/drain region of transistor 402-2 can be coupled to capacitor 403-2, and a transistor The gate of 402-2 can be coupled to word line 404-X. The cell board shown in Figure 4 can be coupled to each of capacitors 403-1 and 403-2. The cell board can be a common node to which a reference voltage (eg, ground) can be applied in various memory array configurations.Memory array 430 is configured to be coupled to readout circuitry 450, in accordance with several embodiments of the present invention. In this embodiment, readout circuitry 450 includes sense amplifiers 406 and computation components 431 that correspond to respective memory cell columns (e.g., coupled to respective complementary digital line pairs). Sense amplifier 406 can be coupled to the pair of complementary digit lines 405-1 and 405-2. Computing component 431 can be coupled to sense amplifier 406 via pass gates 406-1 and 407-2. The gates through gates 407-1 and 407-2 can be coupled to logic operation selection logic 413.Operation selection logic 413 can be configured to include pass gate logic for controlling pass gates coupling the non-transposed pair of complementary digit lines between sense amplifier 406 and computing component 431; and switching gate logic It is used to control a switching gate that couples the transposed pair of complementary digit lines between sense amplifier 406 and computing component 431. 
Operation selection logic 413 may also be coupled to the pair of complementary digit lines 405-1 and 405-2. Operation selection logic 413 can be configured to control pass gates 407-1 and 407-2 based on the selected operation.Sense amplifier 406 can be operative to determine data values (e.g., logic states) stored in selected memory cells. Sense amplifier 406 can include a cross-coupled latch, which can be referred to herein as a primary latch. In the example illustrated in FIG. 4, the circuit corresponding to sense amplifier 406 includes a latch 415 that includes four transistors coupled to the pair of complementary digit lines 405-1 and 405-2. . However, embodiments are not limited to this example. Latch 415 can be a cross-coupled latch (eg, a gate of a transistor pair of n-channel transistors (eg, NMOS transistors) 427-1 and 427-2 and, for example, a p-channel transistor (eg, PMOS transistor) 427-1 And the other transistor pair of 429-2 is cross-coupled to the gate).In operation, when sensing (eg, reading) a memory cell, the voltage on one of the digital lines 405-1 (D) or 405-2 (D)_ will be slightly greater than the digital line 405-1 (D) Or the voltage on the other of 405-2(D)_. The ACT signal can be driven high and the RNL* signal can be driven low to enable (eg, fire) sense amplifier 406. A digital line 405-1(D) or 405-2(D)_ having a lower voltage will turn on one of the PMOS transistors 427-1 or 429-2 to be larger than the PMOS transistor 427-1 or 429-2 The extent of the other, thereby driving the digital line 405-1 (D) or 405-2 (D)_ with a higher voltage to be higher than another digit line 405-1 (D) or 405-2 ( D) _ is driven to a high degree.Similarly, a digital line 405-1(D) or 405-2(D)_ having a higher voltage will turn on one of the NMOS transistors 427-1 or 427-2 to be larger than the NMOS transistor 427-1 or 427- The degree of the other of 2, whereby the digital line 405-1 (D) or 405-2 (D)_ having a lower voltage is driven to be lower than another digit line 405-1 (D) or 405 -2(D)_ is driven to a low level. Therefore, after a short delay, the digit line 405-1 (D) or 405-2 (D)_ having a slightly larger voltage is driven to the voltage of the supply voltage VDD through the source transistor, and the other digit line 405-1 (D) or 405-2 (D)_ The voltage (eg, ground) that is driven to the reference voltage by the trough transistor. Thus, cross-coupled NMOS transistors 427-1 and 427-2 and PMOS transistors 427-1 and 429-2 are used as sense amplifier pairs that amplify digital lines 405-1(D) and 405-2 ( D) The differential voltage on _ and operates to latch the data values sensed from the selected memory cell.Embodiments are not limited to the sense amplifier 406 configuration illustrated in FIG. As an example, sense amplifier 406 can be a current mode sense amplifier and/or a single-ended sense amplifier (eg, a sense amplifier coupled to a digital line). At the same time, embodiments of the invention are not limited to, for example, the folded digital line architecture shown in FIG.Sense amplifier 406 is operative along with computing component 431 to perform various operations using data from the array as input. In several embodiments, data may be transferred without a digital line address access (eg, without exciting the column decode signal such that data is transferred via the local I/O line to the outside of the array and readout circuitry In the case of a circuit) the results of the operation are stored back into the array. 
As such, several embodiments of the invention may be capable of performing operations using less power than various prior methods. In addition, several embodiments may be implemented since several embodiments do not require data to be transferred across local and global I/O lines and/or external data buses to perform computational operations (eg, between memory and discrete processors) More powerful (eg, faster) processing power than previous methods.The sense amplifier 406 can further include a balancing circuit 414 that can be configured to balance the digital lines 405-1(D) and 405-2(D)_. In this example, balancing circuit 414 includes a transistor 424 coupled between digital lines 405-1 (D) and 405-2 (D)_. The balancing circuit 414 also includes transistors 425-1 and 425-2 each having a first source/drain region coupled to a balanced voltage (eg, VDD/2), where VDD is the supply voltage associated with the array. The second source/drain region of transistor 425-1 may be coupled to digital line 405-1(D), and the second source/drain region of transistor 425-2 may be coupled to digital line 405-2(D) _. The gates of transistors 424, 425-1, and 425-2 can be coupled together and coupled to a balance (EQ) control signal line 426. As such, activating the EQ enables the transistors 424, 425-1, and 425-2, which effectively shorts the digital lines 405-1(D) and 405-2(D)_ together and shorts to the balanced voltage (eg, VDD/2).As further described below, in several embodiments, readout circuitry 450 (e.g., sense amplifier 406 and computation component 431) can be operative to perform selected operations, and first the result is stored in sense amplifier 406 or computational component. In one of 431, data from the readout circuitry is transferred without local or global I/O lines (e.g., read line address access is not performed via, for example, an active column decode signal, for example).As shown in FIG. 4, the computing component 431 can also include a latch, which can be referred to herein as a secondary latch 464. Secondary latch 464 can be configured and operated in a manner similar to that described above with respect to primary latch 415, except in the following cases: the one included in the secondary latch A cross-coupled p-channel transistor (eg, a PMOS transistor) can have its respective source coupled to a supply voltage (eg, VDD), and the pair of cross-coupled n-channel transistors of the secondary latch (eg, The NMOS transistor) can have its respective source selectively coupled to a reference voltage (eg, ground) such that the secondary latch is continuously enabled. The configuration of the computing component 431 is not limited to the configuration shown in FIG. 4, and various other embodiments are possible.FIG. 5 is a schematic diagram illustrating a readout circuit in accordance with several embodiments of the present invention. 5 illustrates arrays each including a plurality of columns of complementary sense lines 505-1 and 505-2 coupled to corresponding sense amplifiers 506 and computing components. 535. Computing component 535 can be coupled to the sense amplifier 506 via pass gates 507-1 and 507-2. The sense amplifier 506 shown in FIG. 5 may correspond to the sense amplifier 406 shown in FIG. The readout circuitry shown in FIG. 5 may correspond to the readout circuitry 150 shown in FIG. 1A, for example. The logical operation selection logic 513 shown in FIG. 
5 may correspond to the logical operation selection logic 413 shown in FIG.The gates of the pass gates 507-1 and 507-2 can be controlled by a logic operation selection logic signal Pass. For example, the output of the logic operation selection logic can be coupled to the gates of the pass gates 507-1 and 507-2. Computation component 535 can latch the respective data value and can be used as a shift register by shifting the data value (eg, right and/or left).As an example, the computing component 535 can include a respective stage (eg, a shifting unit) of a shift register configured to shift data values to the left and/or right. For example, as illustrated in FIG. 5, each computation component 535 (eg, stage) of the shift register includes a pair of right shift transistors 581 and 586, a pair of left shift transistors 589 and 590, and a pair. Inverters 587 and 588. Signals PHASE 1R, PHASE 2R, PHASE 1L, and PHASE 2L may be applied to respective control lines 582, 583, 541, and 543 to enable/disable and perform logic operations and/or shift data in accordance with embodiments described herein. The associated feedback on the latch of the corresponding computing component 535.The readout circuitry shown in FIG. 5 also shows logic operation selection logic 513 coupled to a number of logic select control input control lines (including ISO, TF, TT, FT, and FF). Controlling the state of the logic select control signal on the input control line in accordance with logic selection and presenting to the pair of complementary readout lines 505-1 and 505 when the isolation transistors 550-1 and 550-2 are enabled via the asserted ISO control signal The logical value selected from a plurality of logical operations is determined by the data value at -2.According to various embodiments, logical operation selection logic 513 may include four logic select transistors: logic select transistor 562 coupled between the gate of switch transistor 542 and the TF signal control line; logic select transistor 552 coupled through Between the gates of gates 507-1 and 507-2 and the TT signal control line; logic select transistor 554 coupled between the gates of pass gates 507-1 and 507-2 and the FT signal control line; and logic selection Transistor 564 is coupled between the gate of switching transistor 542 and the FF signal control line. The gates of logic select transistors 562 and 552 are coupled to a true sense line through isolation transistor 550-1 (having a gate coupled to the ISO signal control line). The gates of logic select transistors 564 and 554 are coupled to complementary sense lines through isolation transistor 550-2 (also having a gate coupled to the ISO signal control line).Data values present on the pair of complementary readout lines 505-1 and 505-2 can be loaded into the computing component 535 via the pass gates 507-1 and 507-2. When the pass gates 507-1 and 507-2 are on (e.g., turned on), data values on the pair of complementary sense lines 505-1 and 505-2 are passed to the computing component 535 ( For example, loaded into the shift register). The data values on the pair of complementary sense lines 505-1 and 505-2 may be data values stored in sense amplifier 506 when the sense amplifier is activated. 
The logic operation select logic signal Pass is high to turn on the pass gates 507-1 and 507-2.The ISO, TF, TT, FT, and FF control signals are operative to be implemented based on the selection of logic functions based on the data values ("B") in sense amplifier 506 and the data values ("A") in computation component 535. In particular, the ISO, TF, TT, FT, and FF control signals are configured to be implemented to select the logic function independently of the data values present on the pair of complementary readout lines 505-1 and 505-2 ( However, the result of performing the logical operation may depend on the data values present on the pair of complementary readout lines 505-1 and 505-2. That is, due to the presence of the pair of complementary readout lines 505-1 and 505. The data values on -2 are passed through logic to operate the gates of the pass gates 507-1 and 507-2, so the ISO, TF, TT, FT, and FF control signals select the logic operation to perform directly.In addition, FIG. 5 shows an exchange transistor 542 configured to exchange the orientation of a pair of complementary sense lines 505-1 and 505-2 between the sense amplifier 506 and the calculation component 535. When the switching transistor 542 is turned on, data values on the pair of complementary sense lines 505-1 and 505-2 on the sense amplifier 506 side of the switching transistor 542 are reverse coupled to the switching transistor 542. The pair of complementary readout lines 505-1 and 505-2 on the side of the computing component 535 are thereby loaded into the loadable shift register of the computing component 535.When the ISO control signal line is activated and any of the TT control signals are activated (eg, high) and the data value on the true value readout line is "1" or the FT control signal is activated (eg, high) and complementary When the data value on the read line is "1", the logical operation selection logic signal Pass can be activated (for example, high) to turn on the OPEN pass gates 507-1 and 507-2.A value of "1" on the true sense line turns on logic select transistors 552 and 562. A data value of "1" on the complementary readout line turns on logic select transistors 554 and 564. Passing any of the data values on the ISO control signal or the corresponding TT/FT control signal or corresponding sense line (eg, a sense line coupled to the gate of a particular logic select transistor) is not high, then pass Gates 507-1 and 507-2 will not be turned on by a particular logic select transistor.When the ISO control signal line is activated, and the TF control signal is activated (eg, high) and the data value on the true value readout line is "1" or the FF control signal is activated (eg, high) and the complementary readout line When the data value is "1", the logical operation selection logic signal Pass* can be activated (eg, high) to turn on the switching transistor 542 (eg, turned on). If the data value on the corresponding control signal or corresponding sense line (e.g., the sense line coupled to the gate of a particular logic select transistor) is not high, then switch transistor 542 will not be turned on by the particular logic select transistor.The Pass* control signal does not necessarily complement the Pass control signal. Both the Pass control signal and the Pass* control signal can be activated simultaneously or simultaneously. 
However, both the Pass control signal and the Pass* control signal simultaneously initiate shorting of the pair of complementary readout lines together.The readout circuitry illustrated in Figure 5 is configured to be implemented directly by selecting one of a plurality of logic operations in accordance with four logic select control signals (e.g., logic operation selection does not depend on the presence of the pair of complementary readouts The data value on the line). Some combinations of logic select control signals may cause pass gates 507-1 and 507-2 and switch transistor 542 to be turned on simultaneously, which shorts the pair of complementary readout lines 505-1 and 505-2. In accordance with several embodiments of the present invention, the logic operations that may be implemented by the readout circuitry illustrated in Figure 5 may be the logical operations summarized in the logic tables shown in Figure 6.6 is a logic table illustrating selectable logical operation results, which may be implemented by a readout circuit such as that shown in FIG. 5, in accordance with several embodiments of the present invention. Four logic select control signals (eg, TF, TT, FT, and FF), along with specific data values present on the complementary readout lines, can be used to select one of a plurality of logic operations to implement for storage in sense amplifier 506. And the start data value in the calculation component 535. The four control signals, along with the particular data values present on the complementary readout lines, control the state of pass gates 507-1 and 507-2 and switching transistor 542, which in turn affects computing component 535 and/or reads before/after excitation. The data value in amplifier 506. The ability to selectively control the state of swap transistor 542 facilitates the implementation of logic operations involving inverse data values (eg, inverse operands and/or inverse results), among others.The logic table 6-1 illustrated in Figure 6 shows the start data values ("A") stored in the calculation component 535 shown in column 644 and the sense amplifiers shown in column 645, stored in the sense amplifier. Start data value ("B") in 506. The other three column headers in logic table 6-1 refer to the states of pass gates 507-1 and 507-2 and switch transistor 542, which can select control signals (eg, TF, TT, FT, and FF) based on four logics. The states of the particular data values on the pair of complementary readout lines 505-1 and 505-2 are respectively controlled to be turned on (e.g., turned on) or turned off (e.g., not turned on). The "not open" column corresponds to the pass gates 507-1 and 507-2 and the switch transistor 542 are both in a non-conducting state, and the "turn on true value" corresponds to the pass gates 507-1 and 507-2 being in the on state. And "turning on the inversion" corresponds to the switching transistor 542 being in an on condition. The configuration corresponding to the pass gates 507-1 and 507-2 and the switching transistor 542 being in the on condition is not reflected in the logic table 6-1 because this result causes the readout lines to be shorted together.By selectively controlling the pass gates 507-1 and 507-2 and the switch transistor 542, each of the three columns of the upper portion of the logic table 6-1 can be in the three columns with the lower portion of the logic table 6-1. 
Each is combined to provide 3x 3=9 different combinations of results corresponding to nine different logical operations, as indicated by the various connection paths shown at 675. The nine different selectable logic operations that can be implemented by the readout circuitry (e.g., 150 in Figure 1A) are summarized in logic table 6-2 illustrated in Figure 6, the nine different selectable logic operations comprising XOR logic operation.The column of logic table 6-2 illustrated in Figure 6 shows a header 680 containing the states of the logic select control signals (FF, FT, TF, and TT). For example, the state of the first logic select control signal is provided in row 676, the state of the second logic select control signal is provided in row 677, the state of the third logic select control signal is provided in row 678, and the fourth logic The state of the selection control signal is provided in row 679. The specific logical operations corresponding to the results are summarized in line 647.Although specific embodiments have been illustrated and described herein, it will be understood by those skilled in the art The invention is intended to cover modifications or variations of one or more embodiments of the invention. It should be understood that the above description has been made by way of illustration and not limitation. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those skilled in the <RTIgt; The scope of one or more embodiments of the invention encompasses other applications in which the above structures and methods are used. The scope of the one or more embodiments of the invention should beIn the foregoing detailed description, for purposes of illustration This method of the invention should not be construed as reflecting the invention, that is, the embodiments disclosed herein are intended to have more features than those explicitly recited in the appended claims. Instead, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Therefore, the following claims are hereby incorporated into the Detailed Description of |
The present invention provides for a method and an apparatus for performing automatic control adjustments during photolithography processes. A plurality of semiconductor devices are processed. Optical data analysis is performed upon at least one of the processed semiconductor device. Control adjustments to the processing is performed in response to the optical data analysis. |
What is claimed: 1. A method for performing automatic control adjustments during photolithography processes, comprising:processing a plurality of semiconductor devices; performing optical data analysis upon at least one of said processed semiconductor devices by correlation with metrology data; and performing control adjustments to said processing in response to said optical data analysis. 2. The method described in claim 1, wherein performing a process run of semiconductor devices further comprises processing semiconductor wafers.3. The method described in claim 2, wherein processing semiconductor wafers further comprises performing a photolithography process on said semiconductor wafers.4. The method described in claim 3, wherein performing photolithography process further comprises performing an overlay process upon said semiconductor wafers.5. The method described in claim 1, wherein performing optical data analysis further comprises:performing a photolithography process upon a semiconductor wafer; acquiring optical manufacturing data from said processing of semiconductor wafer; storing said optical manufacturing data; performing a metrology data acquisition upon said semiconductor wafer; correlating said optical manufacturing data and said metrology data; determining whether a significant deviation exists between said correlated optical manufacturing data and said metrology data; and performing control adjustments in response to a determination that a significant deviation exists between said correlated optical manufacturing data and said metrology data. 6. The method described in claim 5, wherein acquiring optical manufacturing data further comprises acquiring data relating to an intensity of a reflected process light.7. The method described in claim 5, wherein acquiring optical manufacturing data further comprises acquiring data relating to an angle of reflection of a reflected process light.8. The method described in claim 5, wherein storing said optical manufacturing data further comprises storing said optical manufacturing data in a memory of a computer system.9. The method described in claim 5, wherein determining whether a significant deviation exists between said correlated optical manufacturing data and said metrology data further comprises comparing said optical manufacturing data to a predetermined reference optical data that corresponds to a reference metrology data.10. The method described in claim 5, wherein performing control adjustments further comprises modifying a reference plane of focus during a photolithography process.11. The method described in claim 10, wherein modifying a reference plane of focus further comprises modifying the relative positioning of a semiconductor wafer being processed and a source of a photolithography light element.12. The method described in claim 5, wherein performing control adjustments further comprises altering a period of time of the flash associated with a photolithography light element.13. The method described in claim 5, wherein performing control adjustments further comprises altering an exposure dose of a photolithography light element.14. The method described in claim 13, wherein altering an exposure dose of a photolithography light element further comprises altering an intensity of a photolithography light element.15. 
An apparatus for performing automatic control adjustments during photolithography processes, comprising:a processing tool capable of processing semiconductor devices; an optical sensor coupled with said processing tool, said optical sensor being capable of acquiring optical manufacturing data during the operation of said processing tool; a machine interface electronically coupled with said processing tool and said optical sensor such that said machine interface is capable of delivering a control signal to said processing tool and receiving the optical manufacturing data from said optical sensor; a metrology tool coupled with said processing tool, said metrology tool being capable of receiving a processed semiconductor wafer from said processing tool and performing metrology data acquisition upon said processed semiconductor wafer; and a system electronically interfaced with said machine interface and said metrology tool, said system being capable of receiving data from said machine interface and sending a control signal to said machine interface. 16. The apparatus described in claim 15, wherein said processing tool is a photolithography process tool.17. The apparatus described in claim 15, wherein said optical sensor is capable of detecting the intensity of a reflected process light.18. The apparatus described in claim 15, wherein said optical sensor is capable of detecting the angle of reflection of a reflected process light.19. The apparatus described in claim 18, wherein said optical sensor further comprises an array of optical sensors.20. The apparatus described in claim 15, wherein said computer system further comprises process control system embedded into said computer system.21. An apparatus for performing automatic control adjustments during photolithography processes, comprising:means for processing a plurality of semiconductor devices; means for performing optical data analysis upon at least one of said processed semiconductor devices by correlation with metrology data; and means for performing control adjustments to said processing in response to said optical data analysis. 22. A method for performing automatic control adjustments during photolithography processes, comprising:performing a photolithography process upon a semiconductor wafer; acquiring optical manufacturing data from said processing of said semiconductor wafer; storing said optical manufacturing data in a memory of a computer system; performing a metrology data acquisition upon said semiconductor wafer; correlating said optical manufacturing data and said metrology data; determining whether a significant deviation exists between said correlated optical manufacturing data and said metrology data; and performing control adjustments in response to a determination that a significant deviation exists between said correlated optical manufacturing data and said metrology data. 23. The method described in claim 22, wherein acquiring optical manufacturing data further comprises acquiring data relating to an intensity of a reflected process light.24. The method described in claim 22, wherein acquiring optical manufacturing data further comprises acquiring data relating to an angle of reflection of a reflected process light.25. 
The method described in claim 22, wherein determining whether a significant deviation exists between said correlated optical manufacturing data and said metrology data further comprises comparing said optical manufacturing data to a predetermined reference optical data that corresponds to a reference metrology data.26. The method described in claim 22, wherein performing control adjustments further comprises modifying a reference plane of focus during a photolithography process.27. The method described in claim 26, wherein modifying a reference plane of focus further comprises modifying the relative positioning of a semiconductor wafer being processed and a source of a photolithography light element.28. The method described in claim 22, wherein performing control adjustments further comprises altering a period of time of the flash associated with a photolithography light element.29. The method described in claim 22, wherein performing control adjustments further comprises altering an exposure dose of a photolithography light element.30. The method described in claim 29, wherein altering an exposure dose of a photolithography light element further comprises altering an intensity of a photolithography light element. |
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates generally to semiconductor products manufacturing, and, more particularly, to a method and apparatus for monitoring and performing improved focus methods for photolithography processing of semiconductor devices.2. Description of the Related ArtThe technology explosion in the manufacturing industry has resulted in many new and innovative manufacturing processes. Today's manufacturing processes, particularly semiconductor manufacturing processes, call for a large number of important steps. These process steps are usually vital, and therefore, require a number of inputs that are generally fine-tuned to maintain proper manufacturing control.The manufacture of semiconductor devices requires a number of discrete process steps to create a packaged semiconductor device from raw semiconductor material. The various processes, from the initial growth of the semiconductor material, the slicing of the semiconductor crystal into individual wafers, the fabrication stages (etching, doping, ion implanting, or the like), to the packaging and final testing of the completed device, are so different from one another and specialized that the processes may be performed in different manufacturing locations that contain different control schemes.Among the important aspects in semiconductor device manufacturing are RTA control, chemical-mechanical (CMT) control, etching, and overlay control. Overlay is one of several important steps in the photolithography area of semiconductor manufacturing. The overlay process involves measuring the misalignment between two successive patterned layers on the surface of a semiconductor device. Generally, minimization of misalignment errors is important to ensure that the multiple layers of the semiconductor devices are connected and functional. As technology facilitates smaller critical dimensions for semiconductor devices, the need for reduction of misalignment errors increases dramatically. Errors in photolithography processes are also caused by inadequate focusing during exposure steps.Generally, photolithography engineers currently analyze the overlay and misalignment errors a few times a month. The results from the analysis of the overlay and misalignment errors are used to make updates to exposure tool settings manually. Generally, a manufacturing model is employed to control the manufacturing processes. Some of the problems associated with the current methods include the fact that the exposure tool settings are only updated a few times a month. Furthermore, currently the exposure tool updates are performed manually. Many times, errors in semiconductor manufacturing are not organized and reported to quality control personnel. Often, the manufacturing models themselves incur bias errors that could compromise manufacturing quality.Generally, a set of processing steps is performed on a lot of wafers on a semiconductor manufacturing tool called an exposure tool or a stepper. The manufacturing tool communicates with a manufacturing framework or a network of processing modules. The manufacturing tool is generally connected to an equipment interface. The equipment inter-face is connected to a machine interface to which the stepper is connected, thereby facilitating communications between the stepper and the manufacturing framework. The machine interface can generally be part of an advanced process control (APC) system. 
The APC system initiates a control script based upon a manufacturing model, which can be a software program that automatically retrieves the data needed to execute a manufacturing process. Often, semiconductor devices are staged through multiple manufacturing tools for multiple processes, generating data relating to the quality of the processed semiconductor devices. Many times, errors can occur during the processing of semiconductor devices. These errors can cause appreciable inconsistencies in the critical dimensions of multiple parameters in the processed semiconductor devices. Many times photolithography processes are performed outside an acceptable focus window, causing degradation in the quality of manufactured semiconductor devices. Manual monitoring of manufacturing parameters during photolithography processes can improve focus during exposure process but can result in some focus errors.The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.SUMMARY OF THE INVENTIONIn one aspect of the present invention, a method is provided for performing automatic control adjustments during photolithography processes. A plurality of semiconductor devices are processed. Optical data analysis is performed upon at least one of the processed semiconductor devices. Control adjustments to the processing are performed in response to the optical data analysis.In another aspect of the present invention, an apparatus is provided for performing automatic control adjustments. The apparatus of the present invention comprises: a processing tool capable of processing semiconductor devices; an optical sensor coupled with said processing tool, said optical sensor being capable of acquiring optical manufacturing data during the operation of said processing tool; a machine interface electronically coupled with said processing tool and said optical sensor such that said machine interface is capable of delivering a control signal to said processing tool and receiving the optical manufacturing data from said optical sensor; a metrology tool coupled with said processing tool, said metrology tool being capable of receiving a processed semiconductor wafer from said processing tool and performing metrology data acquisition upon said processed semiconductor wafer; and a system electronically interfaced with said machine interface and said metrology tool, said system being capable of receiving data from said machine interface and sending a control signal to said machine interface.BRIEF DESCRIPTION OF THE DRAWINGSThe invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:FIG. 1 illustrates one embodiment of the present invention;FIG. 2 illustrates a flowchart representation of one method of performing automated focus adjustment in a processing tool, as taught by the present invention;FIG. 3 illustrates a flowchart representation of a more detailed depiction of the method of performing the optical data analysis and the control modifications steps described in FIG. 2;FIG. 4 illustrates one embodiment of a diagram for acquiring optical manufacturing data as described by the present invention;FIG. 5 illustrates an alternative embodiment of a diagram for acquiring optical manufacturing data as described by the present invention; andFIG. 
6 illustrates one embodiment of a Focus-Exposure graph generated by the optical manufacturing data described by the present invention.While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTSIllustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.There are many discreet processes that are involved in semiconductor manufacturing. Many times, semiconductor devices are stepped through multiple manufacturing process tools. One process in semiconductor manufacturing is the overlay process. In particular, the overlay process involves measuring misalignment errors between semiconductor layers during manufacturing processes. Improvements in semiconductor manufacturing processes, such as the overlay process, could result in substantial enhancements, in terms of quality and efficiency, in manufacturing of semiconductor devices. As semiconductor devices are processed through manufacturing tools, production data, or manufacturing data, is generated. The production data that is acquired can be used to automatically perform correction from one manufacturing run of semiconductor devices to another (run-to-run basis). The present invention provides a method of implementing automated error correction for control of semiconductor processes, such as the overlay process. Furthermore, the present invention provides a method of performing run-to-run control of semiconductor manufacturing processes.In photolithography processes, operating within an acceptable focus window is important in reducing manufacturing defects in semiconductor devices. The ability to make wafer by wafer focus assessments and feeding that information to subsequent process steps to make focus corrections can improve the quality of processed semiconductor devices. The present invention teaches a method and an apparatus for automatically acquiring and analyzing focus data and making corrections for subsequent manufacturing processes.Turning now to FIG. 1, one embodiment of the present invention is illustrated. In one embodiment, semiconductor products 105, such as semiconductor wafers are processed on processing tools 110, 112 using a plurality of control input signals on a line 120. In one embodiment, the control input signals on the line 120 are sent to the processing tools 110, 112 from a computer system 130 via machine interfaces 115, 117. 
In one embodiment, the first and second machine interfaces 115, 117 are located outside the processing tools 110, 112. In an alternative embodiment, the first and second machine interfaces 115, 117 are located within the processing tools 110, 112.In one embodiment, the computer system 130 sends control input signals on a line 120 to the first and second machine interfaces 115, 117. The computer system 130 employs a manufacturing model 140 to generate the control input signals on the line 120. In one embodiment, the manufacturing model 140 defines a process script and input control that implement a particular manufacturing process. The control input signals on a line 120 that are intended for processing tool A 110 are received and processed by the first machine interface 115. The control input signals on a line 120 that are intended for processing tool B 112 are received and processed by the second machine interface 117. Examples of the processing tools 110, 112 used in semiconductor manufacturing processes are steppers, scanners, and step-and-scan tools.For processing tools such as steppers, the control inputs, on the line 120, that are used to operate the processing tools 110, 112 include an x-translation signal, a y-translation signal, an x-expansion wafer scale signal, a y-expansion wafer scale signal, a reticle magnification signal, and a reticle rotation signal. Generally, errors associated with the reticle magnification signal and the reticle rotation signal relate to one particular exposure process on the surface of the wafer being processed in the exposure tool.For photolithography processes, when a process step in a processing tool 110, 112 is concluded, the semiconductor product 105 or wafer that is being processed is examined in a review station. One such review station is a KLA review station. One set of data derived from the operation of the review station is a quantitative measure of the amount of misregistration that was caused by the previous exposure process. In one embodiment, the amount of misregistration relates to the misalignment in the process that occurred between two layers of a semiconductor wafer. In one embodiment, the amount of misregistration that occurred can be attributed to the control inputs for a particular exposure process. The control inputs generally affect the accuracy of the process steps performed by the processing tools 110, 112 on the semiconductor wafer. Modifications of the control inputs can be utilized to improve the performance of the process steps employed in the manufacturing tool. Many times, the errors that are found in the processed semiconductor products 105 can be correlated to a particular fault analysis and corrective actions can be taken to reduce the errors.A metrology tool 150, such as a measurement tool for measuring misalignment and misregistration errors, is employed in the semiconductor device manufacturing system illustrated in FIG. 1. In one embodiment, the metrology tool 150 is capable of performing photolithography registration measurements on semiconductor products 105 that are processed by the processing tools 110, 112. In one embodiment, data from the metrology tool 150 is sent, on a line 155, to the computer system 130, which in one embodiment is part of a process control system (not shown), such as an APC system.When the processing tools 110, 112 perform photolithography processing on the semiconductor products 105, an optical sensor 160 monitors the process and acquires optical manufacturing data. 
In one embodiment, the optical manufacturing data acquired by the optical sensor 160 is sent to the machine interfaces 115, which sends the optical manufacturing data to the computer system 130 via the line 120. In one embodiment, the computer system 130, which in one embodiment is part of a process control system, correlates and compares the optical manufacturing data with the corresponding manufacturing data from the metrology tool 150. The present invention teaches a method of utilizing the optical manufacturing data and manufacturing data from the metrology tool 150 to perform corrections for photolithography processes on a run-to-run basis as well as on a wafer-to-wafer basis.Turning now to FIG. 2, a flowchart representation of one method of performing automated optical feedback control correction, as taught by the present invention, is illustrated. A manufacturing process run of semiconductor devices, such as semiconductor wafers, is performed, as described in block 210 of FIG. 2. Optical data analysis is performed with the optical manufacturing data that is acquired during the processing run of semiconductor wafers, as described in block 220 of FIG. 2. Furthermore, control modifications are performed on the control signals on the line 120 in response to the optical manufacturing data analysis described in block 220 of FIG. 2. A flowchart representation of a more detailed depiction of the method of performing the optical data analysis and the control modifications steps described in FIG. 2, is illustrated in FIG. 3.Turning now to FIG. 3, in one embodiment, a photolithography process is performed on the semiconductor product 105, such as a semiconductor wafer, as described in block 310 of FIG. 3. As the photolithography process is performed on the semiconductor wafer, the optical sensor 160 acquires optical manufacturing data corresponding to the photolithography process being performed, as described in block 320 of FIG. 3. One embodiment of the arrangement of the apparatus for acquiring optical manufacturing data is illustrated in FIG. 4.Turning now to FIG. 4, photolithography light element 410 illuminates a semiconductor product 105, such as semiconductor wafer. In one embodiment, reflected process light 420, which is the reflected photolithography light element 410 from the semiconductor product 105, is captured by the optical sensor 160, as illustrated in FIG. 4. In one embodiment, the intensity of the reflected process light 420 is recorded by the optical sensor 160.Analysis of the optical measurement data using the apparatus illustrated in FIG. 4 is based on the intensity of the reflected process light 420 as compared to a predetermined reference range of a minimum reflectance and a maximum reflectance. In other words, the intensity of the reflected process light 420, as determined by the optical sensor 160, is quantified by utilizing a reference range of reflectance. The minimum reflectance and the maximum reflectance can be determined by those skilled in the art. In one embodiment, a quality of a processed semiconductor wafer is associated with a predetermined intensity of the reflected process light 420. When the measured intensity of the reflected process light 420 is below the minimum reflectance or above the maximum reflectance, the quality of the processed semiconductor wafer may not be within acceptable specifications.An alternative embodiment of the arrangement of the apparatus for acquiring optical manufacturing data is illustrated in FIG. 5. 
In one embodiment, an array of optical sensors 160 is used to capture the reflected process light 420 that are the reflections of the photolithography light element 410 from the semiconductor product 105. In one embodiment, the array of optical sensors 160 is used to acquire data relating to the angle of reflection of the reflected process light 420, as illustrated in FIG. 5.In one embodiment, a predetermined reference angle that corresponds to an acceptable manufacturing output is defined. In other words, a reference angle of the reflected process light 420 is defined, which corresponds to a reflected angle that would be realized if a semiconductor wafer that has an acceptable manufacturing quality were to be processed. The reference angle of the reflected process light 420 can be determined by those skilled in the art who have the benefit of the present disclosure. The reflection angle of the reflected process light 420 that is acquired by the array of optical sensors 160 is then compared to the predetermined reference angle. When the series of reflection angles of the reflected process light 420 are not within an acceptable range of the predetermined reference angle, the quality of the processed semiconductor wafer may not be within acceptable specifications.Turning back to FIG. 3, once the optical manufacturing data is acquired, the optical manufacturing data is stored into a data storage medium for later retrieval, as described in block 330 of FIG. 3. Generally, the optical manufacturing data is stored into the data storage medium when the photolithography process is concluded. In one embodiment, the optical manufacturing data is stored into a storage medium (not shown) that is located within the computer system 130. In one embodiment, the optical manufacturing data that is acquired is stored in a corresponding sequence relating to the sequence of the semiconductor wafers that are processed. Therefore, optical manufacturing data corresponding to any particular processed semiconductor wafer can be retrieved by a process control system, which in one embodiment resides in the computer system 130.Once the acquired optical manufacturing data is stored for later retrieval, which is generally performed when a photolithography process is completed, a metrology process is performed on the processed semiconductor wafer, as described in block 340 of FIG. 3. In one embodiment, the semiconductor wafer processed by the processing tool 110, 112 is sent to the metrology tool 150 for acquisition of metrology data or manufacturing data. The metrology data is sent from the metrology tool 150 to the computer system 130 through the line 155. Once the metrology data is acquired for a particular processed semiconductor wafer, the metrology data is correlated to the corresponding optical manufacturing data, as described in block 350 of FIG. 3.Once the metrology data for a particular processed semiconductor wafer is correlated with its corresponding optical manufacturing data, a determination is made whether there are significant deviations between the actual result of the processing of the semiconductor wafers and the expected result, as described in block 360 of FIG. 3. 
The determination of whether there are significant deviations between the actual result of the processing of the semiconductor wafers and the expected result is made by comparing the acquired optical manufacturing data to the reference optical data and the metrology data.When a determination is made that there are no significant deviations between the actual manufacturing results and the expected results, no adjustments to the process control are performed, as described in block 370 of FIG. 3. In one embodiment, when no adjustments to the process control are performed, a subsequent photolithography process on semiconductor wafers is performed, as shown in FIG. 3. When a determination is made that there are significant deviations between the actual manufacturing results and the expected results, control adjustments to the process control are performed, as described in block 380 of FIG. 3.The control adjustments described in block 380 of FIG. 3 include modifying the reference plane of focus during a photolithography process. In other words, the focus of the photolithography light element410 is modified. In one embodiment, the reference plane of focus is modified by changing the relative positioning of the semiconductor wafer being processed and the source (not shown) of the photolithography light element 410.Another control adjustment that may be made is altering the period of time of the flash associated with the photolithography light element 410. Still another control adjustment that may be made is altering the exposure dose of the photolithography light element 410, or altering the intensity of the photolithography light element 410. FIG. 6 illustrates an exposure-focus graph, wherein point A on the focus-axis represents an ideal focus setting for a photolithography process. The focus point A on the graph in FIG. 6 may be determined manually by one skilled in the art. The focus point A on the graph in FIG. 6 may be determined automatically by the computer system 130 using the acquired optical manufacturing data acquired by the optical sensor(s) 160. It is understood that other control adjustments that control photolithography process known to those skilled in the art can be made to perform the methods taught by the present invention. Once the control adjustment are performed, subsequent processing should result in improved feature size and improved resolution in photolithography processes. Once the control adjustments are performed, the new control structure that controls the subsequent photolithography process can be used to perform subsequent manufacturing runs of semiconductor wafers.Turning back to FIG. 2, the completion of the control adjustments, which is performed in response to a determination that there are significant deviations between the actual manufacturing results and the expected results as described in FIG. 3, concludes the optical data analysis and control modification, described in block 220 of FIG. 2. The new control inputs for the photolithography process are used to perform subsequent manufacturing runs of semiconductor wafers, as described in block 230 of FIG. 2. The steps described in FIGS. 2 and 3 can be performed automatically utilizing computer software programs integrated with a process control system, such as an APC framework, which in one embodiment resides in the computer system 130. 
The principles taught by the present invention can be implemented into other types of manufacturing frameworks.The principles taught by the present invention can be implemented in an Advanced Process Control (APC) Framework. The APC is a preferred platform from which to implement the overlay control strategy taught by the present invention. In some embodiments, the APC can be a factory-wide software system, therefore, the control strategies taught by the present invention can be applied to virtually any of the semiconductor manufacturing tools on the factory floor. The APC framework also allows for remote access and monitoring of the process performance. Furthermore, by utilizing the APC framework, data storage can be more convenient, more flexible, and less expensive than local drives. The APC platform allows for more sophisticated types of control because it provides a significant amount of flexibility in writing the necessary software code.Deployment of the control strategy taught by the present invention onto the APC framework could require a number of software components. In addition to components within the APC framework, a computer script is written for each of the semiconductor manufacturing tools involved in the control system. When a semiconductor manufacturing tool in the control system is started in the semiconductor manufacturing fab, it generally calls upon a script to initiate the action that is required by the process controller, such as the overlay controller. The control methods are generally defined and performed in these scripts. The development of these scripts can comprise a significant portion of the development of a control system.The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
A mobile device has a power saving state which it may enter into when, for example, the remaining battery power is low. In the power saving state, energy is conserved by reducing one or more illumination factors and one or more animation factors for the display screen. The illumination factors may include reducing the illumination level, turning off at least one sector of the screen or of the backlight for the screen and changing a background or foreground illumination colour to a colour which uses less energy; whilst animation factors may include replacing animation sequences with at least one still of the animation and changing the colour of an animation to a colour which uses less energy. The changes may be gradually imposed in dependence upon the remaining power. The location of the device may be used to determine whether the it is roaming or far from home and may be used to automatically switch to power saving mode. May be used with OLED or LCD displays. |
CLAIMS1. A method comprising: generating display images for a display screen of a mobile device; transitioning the mobile device to a reduced power consumption state, wherein transitioning to the reduced power consumption state includes: reducing one or more illumination factors for the display screen; and reducing one or more animation factors for the display screen. 2. The method of claim 1, wherein the reduction of the one or more illumination factors and the one or more animation factors is based at least in part on a type of the display screen for the mobile device. 3. The method of claim 1, wherein reducing the one or more illumination factors includes reducing a level of illumination for the display screen. 4. The method of claim 1, wherein reducing the one or more illumination factors includes transitioning a background or foreground illumination to a colour that requires less power than a previous colour. 5. The method of claim 1, wherein reducing the one or more illumination factors includes turning off a background illumination for the display screen. 6. The method of claim 1, wherein reducing the one or more illumination factors includes turning off one or more but less than all of a plurality of sectors of abackground illumination for the display screen. 7. The method of claim 1, wherein further reducing the one or more illumination factors includes turning off one or more but less than all of a plurality of sectors of the display screen. 8. The method of claim 1, wherein reducing the one or more animation factors includes changing an animation to a colour that requires less power than a previous colour. 9. The method of claim 1, wherein reducing the one or more animation factors includes capturing a series of still frames of an animation and replacing the animation with a sequence of the still frames. 10. The method of claim 1, wherein reducing the one or more animation factors includes capturing a single still frame of an animation and replacing the animation with the still frame. 11. A mobile device comprising: a display screen to display elements, the elements including one or more animations; an internal power supply to supply power to the mobile device; and a power management system, the power management system to reduce power consumption related to the operation of the display through changes in one or more illumination factors for the display screen and changes in one or more animation factors for the display screen. 12. The mobile device of claim 11, wherein the power management system includes a processor of the mobile device. 13. The mobile device of claim 11, wherein the power management system includes a dedicated power manager element for the mobile device. 14. The mobile device of claim 11, wherein the power management system is to reduce power by gradually imposing the changes in the one or more illumination factors for the display screen and the one or more animation factors for the display screen based at least in part on a power state for the internal power supply. 15. The mobile device of claim 11, further comprising a memory containing a power setting, and wherein the power management system is to impose the changes in the one or more illumination factors for the display screen and the one or more animation factors for the display screen based on the power setting. 16. The mobile device of claim 11, wherein the internal power supply is a rechargeable battery. 17. 
The mobile device of claim 11, wherein the display screen is operable without a backlight, and wherein the changes to the one or more illumination factors includesturning off background illumination. 18. The mobile device of claim 11, wherein the display screen is an organic light emifting diode (OLED). 19. The mobile device of claim 11, wherein the display screen uses a backlight, and wherein the changes to the one or more illumination factors includes turning off the backlight to one or more but less than all of a plurality of sectors of the display screen. 20. The mobile device of claim 11, further comprising a location determination element, and wherein the changes in the one or more illumination factors and one or more animation factors are based at least in part on a physical location of the mobile device. 21. A system comprising: a display screen to display elements, the elements including one or more animations; a rechargeable baftery to power the mobile device; a power management system, the power management system to reduce power consumption related to the operation of the display through changes in one or more illumination factors for the display screen and changes in one or more animation factors for the display screen; a transmitter to transmit data and a receiver to receive data, including data for display on the display screen; and a dipole antenna for the transmission and reception of data. 22. The system of claim 21, wherein the system includes a plurality of power states, one or more the plurality of power states including a certain level of illumination factors and animation factors for the display screen. 23. The system of claim 22, wherein a transition from a first power state to a second power state includes one or more of a change in illumination to reduce power consumption or a change in animation to reduce power consumption. 24. The system of claim 23, wherein the transition to the second power state includes a change to one or more foreground or background colours to colours that require less power to display. 25. The system of claim 23, wherein the transition to the second power state includeselimination of background illumination. 26. The system of claim 23, wherein the transition to the second power state includes a transformation of an animation into a sequence of captured still images. 27. The system of claim 23, wherein the transition to the second power state includes a transformation of an animation into a single still image. 28. A power management system for a mobile device comprising: a first power management subsystem, the first power management subsystem to reduce power consumed by a display screen of the mobile device by imposing changes in one or more illumination factors for the display screen; and a second power management subsystem, the second power management subsystem to reduce power consumed by the display screen of the mobile device by imposing changes in one or more animation factors for the display screen. 29. 
The power management system of claim 28, wherein the changes in the one or more illumination factors for the display screen to be imposed by the first power management subsystem include one or more of: reducing a level of illumination for the display screen; transitioning a background or foreground illumination to a colour that requires less power than a previous colour; turning off a background illumination for the display screen; turning off one or more but less than all of a plurality of sectors of a background illumination for the display screen; and turning off one or more but less than all of a plurality of sectors of the display screen. 30. The power management system of claim 28, wherein the changes in the one or more animation factors for the display screen to be imposed by the second power management subsystem include one or more of: changing an animation to a colour that requires less power than a previous colour; capturing a series of still frames of an animation, storing the series of still frames in a plurality of registers, and replacing the animation with a sequence of the still frames; and capturing a single still frame of the animation and replacing the animation with the still frame. 31. A power management method for a mobile device substantially as hereinbefore described with reference to, or as illustrated in Figure 1, 2 or 3 of the accompanying drawings. 32. A power management system for a mobile device substantially as hereinbefore described with reference to, or as illustrated in Figure 5 of the accompanying drawings. |
POWER CONSERVATION FOR MOBILE DEVICE DISPLAYS TECINICAL FTELD Embodiments of the invention generally relate to the field of electronic devices and, more particularly, to a method and apparatus for power conservation for mobile device displays. BACKGROUND Mobile devices, includes cellular phones, smart phones, personal digital computes, and other similar devices, are increasing used and relied up on for many different fields and endeavors as the devices become more powerful and flexible in operation. The very mobility and connectiveness of mobile devices allows the devices to operate as substitutes for larger computes, as well as performing communication and entertainment functions. In addition to other attributes, the graphical abilities of the new devices has also become more powerful, allowing more elaborate visual displays for the users of such devices, including extensive animations. However, the processing power and graphical display of a mobile device comes at a price of power consumption. The limited size of mobile devices limits power storage, and the mobility and varied utility of the devices often limits charging opportunities. Thus, devices may be extremely useful but the usefulness may be limited by power consumption. BRIEF DESCRIPTION OF THE DRAWINGS Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements. Figure 1 is a state diagram of an embodiment of power management for a mobile device or system; Figure 2 is a flowchart to illustrate an embodiment of a process illumination power management for a mobile device or system; Figure 3 is a flowchart to illustrate an embodiment of a process animation power management for a mobile device or system; Figure 4 is an illustration of a display screen for an embodiment of a mobile device or system; and Figure 5 illustrates an embodiment of a mobile device or system. DETAILED DESCRIPTION Embodiments of the invention are generally directed to power conservation for mobile device displays. As used herein: "Mobile device or system" means a mobile electronic device or system including a cellular telephone, smart phone, personal digital device, handheld computer. In some embodiments, a mobile device or system reduces power consumption related to a display through an organized reduction in power consumption to minimize loss of user experience as power needs are reduced. In some embodiments, a mobile device or system will gradually reduce power consumption for the display as the situation warrants. User interfaces for electronic devices, including mobile devices or systems, have become increasingly sophisticated and intricate. One aspect of this general trend is a move towards animated interfaces incorporating extensive video and rendered 3-D (three-dimensional) objects and environments. However, the processing power (and therefore electrical power dr) required to generate such animated interface elements may be substantial. While the processing power for complex animations is often available even in handheld devices, continued use of the processing power may lead to unacceptably short battery life. In order to keep pace with the increasingly sophisticated functionality offered by mobile devices such as cell phones and PDAs, mobile device manufacturers may address certain power consumption behavior of their mobile devices. 
In such devices, screen savers are commonly used to reduce power consumption by displays during period of presumed inactivity. However, a screen saver approach provides only coarse management of power consumption, with the screen saver generally providing only two states in which either (1) the display draws full power and the user is provided full interaction with the display, or (2) the display draws much less power but the user is provided with no interaction with the display. In some embodiments, a mobile device or system instead provides a more finely grained response, acting to gradually reduce the power consumption related to the display of a mobile device or system by taking actions that include, but are not limited to, changing colors of items, changing colors of backgrounds, turning off certain sectors, eliminating pictures, reducing or eliminating animation, and reducing intensity of the display. In some embodiments, a mobile device or system includes a power conservation or management feature or element. In some embodiments, the mobile device will slowly and selectively shut down one or more illumination and animated content factors based upon the relationship between the remaining battery level, core processes, and available CPU (central processing unit) cycles. Therefore, the user interface of the device will become less illuminated and less animated (such as still photos displayed instead of background videos, static icons instead of animated icons, and transitions between content screens becoming less graphically intensive) as battery life is reduced or as settings are changed. In some embodiments, a mobile device or system will provide power consumption reduction that is based at least in part on the type of display screen contained in the mobile device. In some embodiments, a mobile device or system will include a display screen that allows for reduction in power consumption when the display screen is partially illuminated, and will utilize the display screen's characteristics to reduce power consumption. In some embodiments, a mobile device may include an OLED (organic light-emitting diode) display. Unlike mobile device technologies such as LCD (liquid crystal display) screens, an OLED display does not require a backlight to function. The operation of such a display without requiring the backlight may provide numerous advantages, including operation with lower power consumption. In some embodiments, a mobile device or system, powered by a battery or other similar internal power source, provides a process or system for fractionally illuminating the display of a portable (battery powered) device on an as-needed basis. In some embodiments, a mobile device includes an OLED display screen, and, with such screen operating without requiring a backlight, it is thus possible to illuminate only certain portions of the screen, thus reducing overall power consumption. In some embodiments, the portion of the display that is illuminated is determined based at least in part on the content to be displayed. For example, if an incoming call is received, and the incoming phone number is to be displayed, a mobile device may illuminate only the portion of the display actually providing the incoming phone number, and thus the power associated with illuminating the complementary portion of the display is saved. 
In some embodiments in which a different type of display screen is utilized, such as an LCD display, the backlight or background illumination for such display may be dimmed in the unused portion of the display. In one embodiment, a backlight of a mobile device display maybe divided into sectors, with the backlight for the active sector being illuminated. For example, a display may be divided into left and right halves, and the left and right halves may be selectively illuminated based on the location of the cursor. In some embodiments, a power management system for a mobile device, including one or more of a processor for the mobile device or a dedicated power manager element, provides for reducing the power draw associated with highly animated user interfaces on handheld devices. In some embodiments, the power management system tailors the animations to be displayed in a manner to draw reduced power for the particular display technology in use. For example, the power drawn by an OLED display may be minimized by incorporating red-, green-, or blue-on-black animations more frequently than white-on-black animations, and white-on-black animation more frequently than black-on-white and color-on-white animations, which are relatively very power intensive. In some embodiments, animations are carried out by a separate graphics processing unit (GPU) to minimize the cycling of the primary processor, or CPU (central processing unit). Further, in some embodiments, the invention allows adjustment of a level of illumination and animation presented, such as specifying the level of display operations within a set of power management preferences. For example, a "roaming" power setting (where the user expects to be far from a charging source for an extended period of time) may specify a low level of animation to provide longer battery life. Conversely, an "at home" or "full power" power setting may specify full animation because of the ease of charging in the home environment. In some embodiments, a mobile device may also automatically adjust the level of animation based on knowledge of the current location of the device. In some embodiments, an adjustment of a level of animation may also be made via power setting definitions for the mobile device (such as by invoking the roaming setting based on the location of the device), directly in response to the measured battery charge (animation may be reduced if battery shortfall is imminent), or in response to the time of day (animation may, for example, be increased near the end of the day when imminent recharging is assumed). In some embodiments, a mobile device may adjust the amount of animation in a predictive manner, based on a desired battery life. In some embodiments, animation may be reduced to a single icon or to a series of icons that may be cycled with minimal power draw. In an example, an element of a display may be a relatively complex animation that requires significant processor or video processor computation. In some embodiments, a mobile device may choose a single frame or element of the animation to generate a single icon to replace the animation. In some embodiments, the mobile device may recognize a pattern in the animation and may store a series of frames or still images of the animation to reflect a simplified version of the original animation. 
In some embodiments, the mobile device will store the series of frames of the animation in, for example, a set of registers, and the mobile device will replace the original animation with a sequence of the frames of the animation to generate a simplified, less power intensive animation. Figure 1 is a state diagram of an embodiment of power management for a mobile device or system. In this illustration, a mobile device 100 having one or more display screens 105 utilizes a plurality of different states to determine which illumination and animation power management processes to utilize. While certain states are described for illustration, embodiments are not limited to these states. Embodiments of mobile devices or systems may include different states or a different number of states. In some embodiments, the states may include a full operation state 110, which may be entered when there are minimal concerns regarding power limitations. In some embodiments, the state may be employed when a mobile device is plugged into a power source, or, more specifically, when the mobile device is plugged into a power source and the battery level has risen above a certain level. In some embodiments, the mobile device may also be placed in the full operation state when, for example, the device is used in a home environment in which power sources are easily available for charging. In some embodiments, during the full operation state the mobile device imposes no limitations on illumination or animation, and all applications and processes may use full illumination and full animation. As illustrated, the mobile device may return to the full operation state from any other state upon being connected to a power source. In some embodiments, the states may include a normal power consumption state 115, which may be entered when the mobile device is operating on battery power (not connected to an external power source) and has a relatively full battery charge, but, for example, is not located in a home environment. During such state, the mobile device may allow most animation and illumination, with only limitations to avoid very high power consumption. For example, the mobile device may not allow full display intensity at the normal consumption state. In some embodiments, the states may include a conservative power consumption state 120, which may be entered when the device is in a "roaming" state and may not be near a power source, or when the device is set to the conservative power consumption state by the user. Tn such state, the mobile device may take action to reduce animation and illumination. For example, the mobile device may modify illumination colors to less power intensive choices, and may reduce the incidence of animation. In an example, the mobile device may detect that an animation is relatively repetitive, may store a certain number of frames of the animation, and may step through the frames rather than allowing the processor or video processor to generate the animation. In some embodiments, the states include a low power consumption state 125, which may be entered when the battery has been drained to a medium level or the mobile device has been set to a long battery life setting. In such state, the mobile device may allow only minimal animation, possible replacing animation with still images. In addition only low level illumination is utilized, including for example only illumination backlighting as necessary, such as only illuminating certain sectors of the display. 
In some embodiments, the states include a minimum power consumption state 130, which may be entered when there is little battery life left. In some state, the mobile device may eliminate all animation, use only low power colors, and only illuminate the display screen as needed to briefly show notifications and warnings. Figure 2 is a flowchart to illustrate an embodiment of a process illumination power management for a mobile device or system. In some embodiments, an illumination power management process 200 includes a series of measures to gradually reduce power consumption while minimizing the impact of the user of the mobile device. In some embodiments, the measures are implements in combinations with the measures illustrated in Figure 3. While a certain series of measures is provided for illustration, embodiments are not limited to these measures or any particular order of implementation. In some embodiments, a mobile device may begin at full illumination 205, in which there are no power related limitations on illumination levels or colors. In some embodiments, the mobile device may then gradually impose power consumption restrictions by: reducing the overall intensity of the display screen 210; reducing high power illumination choices 215, including color combinations such as black on a white background; adjusting colors of illumination to less power intensive choices 220; eliminating backlighting in certain sectors of the display 225 so that the overall power consumption is reduced;further reducing illumination to color that utilize the least amount of power 230; using backlighting only in active areas of the display screen 235; and eliminate all illumination other that necessary notifications and warnings 240. Figure 3 is a flowchart to illustrate an embodiment of a process for animation power management for a mobile device or system. In some embodiments, an animation power management process 300 includes a series of measures to gradually reduce power consumption while minimizing the impact of the user of the mobile device. In some embodiments, the measures are implements in combinations with the measures illustrated in Figure 2. While a certain series of measures is provided for illustration, embodiments are not limited to these measures or any particular order of implementation. In some embodiments, a mobile device may begin at full animation 305, in which there are no power related limitations on animations. In some embodiments, the mobile device may then gradually impose power consumption restrictions by: Reducing the overall intensity of the animations 310; if this not already done, shifting the processing of the animation from the CPU to a dedicated video processor 315; adjusting colors of animation to less power intensive colors 320; determining that certain animations are repetitive an storing a number of frames or images of the animation to be stepped through without requiring any processing 325; further limiting animations to single icons without animation 330; and turning off all animation 335. Figure 4 is an illustration of a display screen for an embodiment of a mobile device or system. In this illustration, the display screen 400 of a mobile device may include a location for one or more status 405, which in some embodiments may be shown only as necessary in the lowest power consumption state, such as, for example, only being illuminated as needed when a status changes. 
In some embodiments, the display screen 400 may include application text 410, which may be shown in a particular color combination. In some embodiments, the mobile device may modify the application text to use a less power consuming color as needed. Tn some embodiments, the display screen may include a warning, such as an incoming call warning 415. Tn some embodiments, the mobile device may reduce power to a minimal level by only showing the warnings with no other screen illumination, and only showing the warnings for a limited amount of time. In some embodiments, the display screen 400 may include one or more photos 420, which may be gradually changed in color or eliminated to reduce power consumption. Further, the display screen may included one or more larger animations 425 and one or more icons 43 0-440 that may be animated, which may be gradually reduced in power consumption by modifying colors, by changing animation to a series of still images that are stepped through to approximate the animation, and by substituting the animations with still images having no animations. The display screen 400 may also include a background illumination 450 (including a backlight for some display technologies), which may be modified to reduce power consumption, including changing colors of the background illumination, and turning off the background illumination in certain sectors of the display screen 405. Figure 5 illustrates an embodiment of a mobile device or system. In this illustration, a mobile device 500 includes elements for reduction of power consumption caused by a display screen of the device or system. In some embodiments, a mobile device 500 includes one or more transmitters 502 and receivers 504 for transmitting and receiving data. In some embodiments, the mobile device includes one or more antennas 506 for the transmission and reception of data, where the antennas may include dipole and monopole antennas. The mobile device 500 may further include a user interface 508, including, but not limited to, a graphical user interface (GUT), which may include the use of extensive animation. The mobile device 500 may further include one or more location determination elements for the determination of physical location, including, but limited to, a GPS receiver 510 and GPS circuitry 512. In some embodiments, the location determination elements may include network detection elements. The location determination elements may be used to determination location for power management of the mobile device 500, such as in determining when the device is in a home environment or when the device is roaming and may not be near an external power source. The mobile device 500 may further include one or more memories or sets of registers 520, which may include non-volatile memory, such as flash memory, and other types of memory. The memory or registers 520 may include one or more applications 522, which may utilizes various applications, one or more power settings for the device 524, such as conservative power consumption and maximum batter life settings, and registers for the storage of images of animations 526 that may be used to replace processor intensive animations with a few images that are repeatedly cycled. The mobile device 500 may include a display 530 and display circuitry 532, which may be addressed to reduce power consumption as needed. 
In some embodiments, the mobile device 500 may further include one or more processors 540 to execute instructions, including instructions regarding power consumption of the mobile device 500. In some embodiments, the mobile device 500 may include a power manager system or element 550, which may include a first power management portion or subsystem to reduce power consumption by limitation of illumination produced by the display 552 and a second power management portion or subsystem to reduce power consumption by limitation of animation produced by the display 554. The mobile device 500 further includes a battery pack 560 or other similar mobile power source, which may be connected to (or may contain) a battery charger 562 that is connected with an external power source 564, such as a standard household or automotive power outlet. In the description above, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form. There may be intermediate structure between illustrated components. The components described or illustrated herein may have additional inputs or outputs which are not illustrated or described. Various embodiments of the present invention may include various processes. These processes may be performed by hardware components or may be embodied in computer program or machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the processes. Alternatively, the processes may be performed by a combination of hardware and software. Portions of various embodiments of the present invention may be provided as a computer program product, which may include a computer-readable medium having stored thereon computer program instructions, which may be used to program a computer (or other electronic devices) for execution by one or more processors to perform a process according to the embodiments of the present invention. The computer-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disk read-only memory (CD-ROM), and magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), magnet or optical cards, flash memory, or other type of computer-readable medium suitable for storing electronic instructions. Moreover, the present invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer. Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods and information can be added or subtracted from any of the described messages without departing from the basic scope of the present invention. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the invention but to illustrate it. The scope of the embodiments of the present invention is not to be determined by the specific examples provided above but only by the claims below. 
If it is said that an element "A" is coupled to or with element "B," element A may be directly coupled to element B or be indirectly coupled through, for example, element C. When the specification or claims state that a component, feature, structure, process, or characteristic A "causes" a component, feature, structure, process, or characteristic B, it means that "A" is at least a partial cause of "B" but that there may also be at least one other component, feature, structure, process, or characteristic that assists in causing "B." If the specification indicates that a component, feature, structure, process, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, this does not mean there is only one of the described elements. An embodiment is an implementation or example of the present invention. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. It should be appreciated that in the foregoing description of exemplary embodiments of the present invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment of this invention. |
Integrated circuit structures having source or drain structures with low resistivity are described. In an example, integrated circuit structure includes a fin having a lower fin portion and an upper fin portion. A gate stack is over the upper fin portion of the fin, the gate stack having a first side opposite a second side. A first source or drain structure includes an epitaxial structure embeddedin the fin at the first side of the gate stack. A second source or drain structure includes an epitaxial structure embedded in the fin at the second side of the gate stack. Each epitaxial structure of the first and second source or drain structures include silicon, germanium and boron. The first and second source or drain structures have a resistivity less than or equal to 0.3 mOhm.cm. |
1.An integrated circuit structure, including:Fins, which have a lower fin part and an upper fin part;A gate stack above the upper fin portion of the fin, the gate stack having opposite first and second sides;A first source structure or a drain structure, which includes an epitaxial structure embedded in the fin on the first side of the gate stack; andThe second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack, the first source structure or the drain structure and the Each epitaxial structure of the second source structure or the drain structure includes silicon, germanium, and boron, wherein the atomic concentration of boron is in the range of 1E20 atoms/cm3-3E21 atoms/cm3, and the germanium concentration is between 10% and 85%. %, and the first source structure or the drain structure and the second source structure or the drain structure have a resistivity less than or equal to 0.3 mOhm·cm.2.The integrated circuit structure of claim 1, wherein the resistivity of the first source structure or the drain structure and the second source structure or the drain structure is between 0.1 mOhm·cm and 0.3 mOhm·cm Within range.3.The integrated circuit structure of claim 1 or 2, wherein the first source structure or drain structure and the second source structure or drain structure cause uniaxial compressive strain on the fin .4.The integrated circuit structure of claim 1 or 2, wherein the first source structure or the drain structure and the second source structure or the drain structure are adjacent to the isolation structure.5.4. The integrated circuit structure of claim 4, wherein the first source structure or the drain structure and the second source structure or the drain structure have a lower surface under the upper surface of the isolation structure.6.The integrated circuit structure of claim 1 or 2, wherein the lower fin portion includes a portion of a lower bulk single crystal silicon substrate.7.The integrated circuit structure according to claim 1 or 2, further comprising:A first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer are along the first side and the second side of the gate stack, respectively.8.The integrated circuit structure according to claim 1 or 2, further comprising:A first conductive contact part on the epitaxial structure of the first source structure or drain structure; andThe second conductive contact portion is located on the epitaxial structure of the second source structure or the drain structure.9.8. 
The integrated circuit structure of claim 8, wherein the first conductive contact portion and the second conductive contact portion are located in the first source structure or the drain structure and the second source structure or The drain structure is partially recessed in the epitaxial structure.10.An integrated circuit structure, including:Fins, which have a lower fin part and an upper fin part;A gate stack above the upper fin portion of the fin, the gate stack having opposite first and second sides;A first source structure or a drain structure, which includes an epitaxial structure embedded in the fin on the first side of the gate stack, the epitaxial structure including a lower semiconductor layer and a cap semiconductor layer ;as well asA second source structure or a drain structure, which includes an epitaxial structure embedded in the fin on the second side of the gate stack, the epitaxial structure including a lower semiconductor layer and a cap semiconductor layer , The lower semiconductor layer of each of the epitaxial structure of the first source structure or the drain structure and the second source structure or the drain structure includes silicon, germanium, and boron, and the first The cap semiconductor layer of the epitaxial structure of each of the source structure or the drain structure and the second source structure or the drain structure has a germanium concentration greater than that of the lower semiconductor layer, and the first A source structure or a drain structure and the second source structure or a drain structure have a resistivity less than or equal to 0.3 mOhm·cm.11.The integrated circuit structure according to claim 10, wherein the lower semiconductor of each of the epitaxial structure of the first source structure or the drain structure and the second source structure or the drain structure The layer has a boron atom concentration in the range of 1E20 atoms/cm3-3E21 atoms/cm3 and a germanium concentration in the range of 10% to 85%.12.The integrated circuit structure of claim 10 or 11, wherein the resistivity of the first source structure or the drain structure and the second source structure or the drain structure is between 0.1 mOhm·cm and 0.3 Within the range of mOhm·cm.13.The integrated circuit structure according to claim 10 or 11, wherein the first source structure or the drain structure and the second source structure or the drain structure cause uniaxial compressive strain on the fin .14.The integrated circuit structure of claim 10 or 11, wherein the cap semiconductor layer is substantially composed of germanium.15.The integrated circuit structure of claim 10 or 11, wherein the lower fin portion includes a portion of a lower bulk single crystal silicon substrate.16.The integrated circuit structure according to claim 10 or 11, further comprising:A first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer are along the first side and the second side of the gate stack, respectively.17.The integrated circuit structure according to claim 10 or 11, further comprising:A first conductive contact part on the cap semiconductor layer of the first source structure or drain structure; andThe second conductive contact is located on the cap semiconductor layer of the second source structure or the drain structure.18.The integrated circuit structure of claim 17, wherein the first conductive contact portion and the second conductive contact portion are located in the first source structure or the drain structure and the second source 
structure or The cap of the drain structure is in a partial recess in the semiconductor layer.19.An integrated circuit structure, including:Fins, which have a lower fin part and an upper fin part;A gate stack above the upper fin portion of the fin, the gate stack having opposite first and second sides;A first source structure or a drain structure, which includes an epitaxial structure embedded in the fin on the first side of the gate stack, the epitaxial structure including a lower semiconductor layer and a cap semiconductor layer ;as well asA second source structure or a drain structure, which includes an epitaxial structure embedded in the fin on the second side of the gate stack, the epitaxial structure including a lower semiconductor layer and a cap semiconductor layer , The lower semiconductor layer of each of the epitaxial structure of the first source structure or the drain structure and the second source structure or the drain structure includes silicon, germanium, and boron, and the first The cap semiconductor layer of the epitaxial structure of each of the source structure or the drain structure and the second source structure or the drain structure has a germanium concentration greater than that of the lower semiconductor layer, and the first A source structure or a drain structure and the second source structure or a drain structure have a resistivity less than or equal to 0.3 mOhm·cm;A first conductive contact portion located on the cap semiconductor layer of the first source structure or the drain structure;A second conductive contact portion located on the cap semiconductor layer of the second source structure or the drain structure;A first dielectric spacer along the sidewall of the first conductive contact, wherein the cap semiconductor layer of the first source structure or drain structure is confined between the first dielectric spacers ;as well asA second dielectric spacer along the sidewall of the second conductive contact, wherein the cap semiconductor layer of the second source structure or the drain structure is confined between the second dielectric spacers .20.The integrated circuit structure of claim 19, further comprising:A first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer are along the first side and the second side of the gate stack, respectively.21.The integrated circuit structure according to claim 19 or 20, wherein the first source structure or the drain structure and the second source structure or the drain structure of each of the epitaxial structure The lower semiconductor layer has a boron atom concentration in the range of 1E20 atoms/cm3-3E21 atoms/cm3 and a germanium concentration in the range of 10% to 85%.22.The integrated circuit structure according to claim 19 or 20, wherein the resistivity of the first source structure or the drain structure and the second source structure or the drain structure is 0.1 mOhm·cm to 0.3 mOhm· cm.23.The integrated circuit structure of claim 19 or 20, wherein the first source structure or the drain structure and the second source structure or the drain structure cause uniaxial compressive strain on the fin .24.The integrated circuit structure of claim 19 or 20, wherein the cap semiconductor layer is substantially composed of germanium.25.The integrated circuit structure of claim 19 or 20, wherein the lower fin portion includes a portion of a lower bulk single crystal silicon substrate. |
Source structure or drain structure with low resistivityTechnical fieldThe embodiments of the present disclosure relate to the field of advanced integrated circuit structure fabrication, and in particular, to an integrated circuit structure having a source structure and a drain structure with low resistivity.Background techniqueOver the past few decades, the scaling of features in integrated circuits has become the driving force behind the growing semiconductor industry. Scaling to smaller and smaller features enables the realization of increased density functional units on the limited chip area of semiconductor chips. For example, shrinking the size of transistors allows an increased number of memories or logic devices to be incorporated on a chip, thereby manufacturing products with increased capacity. However, the drive to higher and higher capacity is not without problems. The need to optimize the performance of each device has become increasingly important.The variability in conventional and currently known manufacturing processes may limit the possibility of further extending these processes to the 10nm node or sub-10nm node range. Therefore, the production of functional components required by future technology nodes may require the introduction of new methods, or the integration of new technologies into current production processes, or the replacement of current production processes with new technologies.Description of the drawings1A-1D show cross-sectional views representing various operations in a method of fabricating an integrated circuit structure having a source structure or a drain structure with low resistivity according to an embodiment of the present disclosure.2A-2G show cross-sectional views showing various operations in a method of fabricating an integrated circuit structure having a source structure or a drain structure with low resistivity according to an embodiment of the present disclosure.2G' shows a cross-sectional view of another integrated circuit structure having a source structure or a drain structure with low resistivity according to another embodiment of the present disclosure.2G" shows a cross-sectional view of another integrated circuit structure having a source structure or a drain structure with low resistivity according to another embodiment of the present disclosure.FIG. 3A shows a plan view of a plurality of gate lines above a pair of semiconductor fins according to another embodiment of the present disclosure.Fig. 3B shows a cross-sectional view taken along the a-a' axis of Fig. 3A according to an embodiment of the present disclosure.4 shows a cross-sectional view of an integrated circuit structure having trench contacts for PMOS devices according to another embodiment of the present disclosure.Figure 5 shows a cross-sectional view of an integrated circuit structure having conductive contacts on raised source or drain regions according to an embodiment of the present disclosure.6A and 6B show cross-sectional views of various integrated circuit structures according to embodiments of the present disclosure, each of the integrated circuit structures having trench contacts including an overlying insulating cap layer and having an overlying insulating cap The gate stack of the cap layer.Fig. 
7 shows a computing device according to an embodiment of the present disclosure.Figure 8 shows an interpolator that includes one or more embodiments of the present disclosure.9 is an isometric view of a mobile computing platform according to an embodiment of the present disclosure, the mobile computing platform adopts an IC made according to one or more processes described herein or includes one or more features described herein.FIG. 10 shows a cross-sectional view of a flip-chip mounted die according to an embodiment of the present disclosure.detailed descriptionAn integrated circuit structure having a source structure or a drain structure with low resistivity and a method of fabricating a source structure or a drain structure with low resistivity are described. In the following description, many specific details such as specific integration and material system are explained in order to provide a thorough understanding of the embodiments of the present disclosure. It will be obvious to those skilled in the art that the embodiments of the present disclosure can be practiced without these specific details. In other instances, well-known features such as integrated circuit design layout are not described in detail to avoid unnecessarily obscuring the embodiments of the present disclosure. In addition, it should be appreciated that the various embodiments shown in the drawings are illustrative representations and are not necessarily drawn to scale.The following specific embodiments are merely illustrative in nature, and are not intended to limit the embodiments of the subject matter or the applications and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or example." Any embodiment described herein as exemplary need not be construed as being preferred or advantageous over other embodiments. In addition, it is not intended to be bound by any expressed or implied theory presented in the foregoing technical field, background art, inventive content or the following specific embodiments.This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrase "in one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. The specific features, structures, or characteristics can be combined in any suitable manner consistent with the present disclosure.the term. The following paragraphs provide definitions or context for the terms that exist in this disclosure (including the appended claims):"include". The term is open ended. As used in the appended claims, this term does not exclude additional structures or operations."Is configured as". Various units or components may be described or claimed as being "configured" to perform one or more tasks. In such a context, "configured to" is used to imply structure by indicating that the unit or component includes a structure that performs one or more of those tasks during operation. As such, even when the designated unit or component is not currently operating (for example, not turned on or activated), the unit or component can be said to be configured to perform the task. The description of a unit or circuit or component as "configured to" perform one or more tasks is expressly intended not to invoke the sixth paragraph of 35 U.S.C. §112 for that unit or component."First", "Second", etc. 
As used herein, these terms are used as labels for the nouns that follow, and do not imply any kind of order (for example, space, time, logic, etc.)."coupling". The following description refers to elements or nodes or features "coupled" together. As used herein, unless expressly stated otherwise, “coupled” means that one element or node or feature is directly or indirectly joined to (or directly or indirectly communicates with) another element or node or feature, and does not necessarily have to be mechanical coupling.In addition, the following description also uses certain terms for reference purposes only, and therefore these terms are not intended to be limiting. For example, terms such as "upper", "lower", "above", or "below" refer to the direction in which reference is provided in the drawings. Terms such as "front", "back", "rear", "side", "outside the board" and "inside the board" describe the orientation or position of parts of the component or both in a consistent but arbitrary reference frame, by reference The text describing the discussed components and the related drawings can clearly understand the orientation or position or both. Such terms may include the words specifically mentioned above, their derivatives, and words of similar meaning."inhibition". As used herein, suppression is used to describe reducing or minimizing the impact. When a part or feature is described as inhibiting an action, movement, or condition, it can completely prevent the result or consequence or future state. In addition, "inhibiting" can also mean reducing or attenuating the consequences, performance, or influence that may occur in other ways. Therefore, when a part, element, or structure is referred to as a suppressed result or state, it does not necessarily prevent or eliminate the result or state completely.The embodiments described herein may involve front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first part of integrated circuit (IC) production. In FEOL, various devices (for example, transistors, capacitors, resistors, etc.) are patterned on a semiconductor substrate or a semiconductor layer. FEOL generally covers all operations up to (but not including) the deposition of the metal interconnection layer. Immediately after the last FEOL operation, the result is usually a wafer with isolated transistors (for example, without any wiring).The embodiments described herein may involve back end of line (BEOL) semiconductor processing and structures. BEOL is the second part of IC production. In BEOL, wiring (for example, one or more metallization layers) on the wafer is used to interconnect various devices (for example, transistors, capacitors, resistors, etc.). BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connection. In the BEOL part of the manufacturing stage, contacts (pads), interconnect lines, vias, and dielectric structures are formed. For modern IC processes, more than 10 metal layers can be added to BEOL.The embodiments described below may be applicable to FEOL processing and structure, BEOL processing and structure, or both FEOL processing and structure and BEOL processing and structure. In particular, although the exemplary processing scheme may be shown using FEOL processing scenarios, such an approach may also be applicable to BEOL processing. 
Likewise, although an exemplary processing scheme may be shown using BEOL processing scenarios, such an approach may also be applicable to FEOL processing.According to one or more embodiments of the present disclosure, a PMOS transistor having an ultra-low resistivity source or drain (source/drain, S/D) structure is described.In order to provide context, typical PMOS source structures or drain structures have high resistance (for example, greater than 0.4 mOhm·cm) or lack of selectivity. Prior art solutions include attempting to incorporate more dopant atoms than needed and using thermal annealing to activate the dopant atoms during downstream processing. However, excessive chemical doping may cause defects in the source structure or the drain structure, thereby reducing the ability of the source structure or the drain structure to strain the adjacent channel region. In addition, such thermal annealing must be controlled or limited to prevent the diffusion of dopant atoms to areas that may not be suitable for other features of integrated circuits. Furthermore, any dopants activated by thermal annealing may be deactivated in subsequent processing operations.According to an embodiment of the present disclosure, appropriate selection of precursors and process conditions is implemented to achieve a selective PMOS source with a resistivity less than or equal to 0.3 mOhm·cm (for example, between 0.2 and 0.3 mOhm·cm) Structure or drain structure. The embodiments described herein can provide a source structure or a drain structure with a relatively much lower external resistance, resulting in improved transistor performance. In an embodiment, the external resistance of the associated transistor is minimized, resulting in increased current in the channel and overall performance improvement. In addition, embodiments may involve high effective as-deposited doping levels to allow removal or reduction of subsequent activation annealing. Embodiments can be implemented to minimize any diffusion of dopants, thereby allowing for more steep junctions. According to the embodiments described in the text, the source structure or the drain structure having a chemical concentration equal to or substantially equal to the maximum effective dopant concentration of the source or drain semiconductor material reduces or eliminates the source structure or the drain structure The risk or the formation of defects, resulting in increased strain in the channel and increased channel mobility. In addition, any excess non-effective dopant atoms in the source/drain will act as scattering sites in the source/drain and reduce the mobility in the source or drain.According to one or more embodiments of the present disclosure, a low-resistivity source structure or a drain structure can be fabricated into a typical PMOS transistor processing scheme. However, during source or drain material deposition, precursor options and process conditions are selected so that selective PMOS source and drain deposition can have low resistivity (less than 0.3 mOhm·cm) during deposition. 
Embodiments of the present disclosure may be based on the presence of an ultra-low resistivity PMOS source structure or drain structure based on a combination of SIMS and/or APT, combined with low external resistance measured from the device, and the presence of ( For example) when the resistivity between 0.2mOhm·cm and 0.3mOhm·cm is detected, the combination of SIMS and/or APT shows the source-drain composition of SiGe:B (where Ge is at 10% to 85% And B is in the range of approximately 1E20cm3 to 3E21cm3). In addition, XSEM and XTEM analysis can reveal that there are no nodules from the PMOS source or drain growth.According to embodiments of the present disclosure, the ultra-low resistivity PMOS source structure or drain structure described herein can be fabricated as strained or unstrained silicon (Si) channels, strained or unstrained silicon germanium (SiGe) ) Channel, or strained or unstrained germanium (Ge) channel. The processing scheme that combines the fabrication of the ultra-low resistivity PMOS source structure or the drain structure described herein can be a gate first method or a gate last method. Embodiments may be suitable for use with nanowires, nanoribbons, fins, and planar transistors. The embodiments can be adapted for use with stacked CMOS or transistors, where the back-end contacts can be made from the backside of the wafer through vias. Embodiments may include, when the via hole for the contact is opened when the trench contact (TCN) is formed (or immediately after the source or drain is deposited), fabricating a film deposited on top of the ultra-low resistivity film A cap of higher Ge% (for example, up to 100% Ge) is attached to provide an ultra-low resistivity PMOS source structure or drain structure including a cap layer.As an exemplary process flow, FIGS. 1A-1D show cross-sections representing various operations in a method of fabricating an integrated circuit structure having a source structure or a drain structure with low resistivity according to an embodiment of the present disclosure. Figure.1A, the starting structure 100 includes a substrate 102, for example, a silicon substrate. As shown in FIG. 1B, then, a patterned mask 106 including a stack of mask layers 106A, 106B, and 106C is formed on the substrate 102. The patterned mask 106 is used to pattern the fins 104 into the substrate 102, thereby forming a patterned substrate 102'. 1C, a shallow trench isolation (STI) structure 108 is formed between the lower portion of the fin 104. A dummy gate structure including a dummy gate dielectric 110, a dummy gate electrode 112, and a hard mask layer 114 is formed over the upper portion of the fin 104 extending through the STI structure 108. Then, spacers 116 are formed along the sidewalls of the gate and above some fin parts, while exposing other fin parts. As shown in Figure 1D. Then, the exposed fin portion is etched to form a twice-patterned substrate 102" with a channel region 104' and an epitaxial ultra-low resistivity source structure or drain structure 118 therein. The subsequent processing can be This includes replacing the dummy gate structure with a high-k gate dielectric layer and a metal gate electrode.The source or drain structures with low resistivity as described herein can be grown on or in-plane, tri-gate, FinFET, nanowire or nanoribbon structures, with minimal modification to the baseline process flow. 
In the embodiment, the entire epitaxial structure of the source structure or the drain structure is composed of a single ultra-low resistivity film, an example of which will be described below in connection with FIG. 2G'. However, it should be recognized that instead, the ultra-low resistivity film can be used only in the tip, or only in the lower structured portion, on which the boron-doped high-content germanium filler and/or cap are formed , The following will describe its example in connection with Figure 2G and Figure 2G.One or more embodiments described herein relate to the fabrication process and structure of a low-resistivity source structure or a drain structure including a cap grown thereon, examples of which will be described in connection with FIGS. 2A-2G. One or more embodiments described herein relate to the fabrication process and structure of a low-resistivity source structure or a drain structure including no capping layer, and examples thereof will be described in connection with FIGS. 2A-2D and 2G'. One or more embodiments described herein relate to the fabrication process and structure of a low-resistivity source structure or a drain structure including a cap grown thereon. Examples thereof will be described in connection with FIGS. 2A-2D and 2G. .As an exemplary process flow, FIGS. 2A-2G show cross-sections representing various operations in a method of fabricating an integrated circuit structure having a source structure or a drain structure with low resistivity according to an embodiment of the present disclosure. Figure. 2G' shows a cross-sectional view of another integrated circuit structure having a source structure or a drain structure with low resistivity according to another embodiment of the present disclosure. 2G" shows a cross-sectional view of another integrated circuit structure having a source structure or a drain structure with low resistivity according to another embodiment of the present disclosure.2A, optionally, a channel material 204 is grown on a substrate 202 (e.g., a silicon substrate). In an embodiment, the channel material 204 includes silicon. In an embodiment, the channel material 204 includes silicon and germanium. In an embodiment, the channel material 204 includes germanium. In an embodiment, the channel material 204 is a III-V group material. In other embodiments, the differentiated channel material 204 is not formed, and the process operations described below are performed on the surface of the substrate 202.Referring to FIG. 2B, the channel material 204 is patterned into fins 206. Patterning can form depressions 208 in the substrate 202, as shown.2C, the trenches between the fins 206 are filled with a shallow trench isolation material, and then the shallow trench isolation material is polished and recessed to form an isolation structure 210. The process may also involve the deposition, patterning and recessing of the dielectric isolation barrier layer. The process continues to the deposition and patterning of gate oxide materials and gate electrode materials (which may be dummy gate oxide materials and dummy gate electrode materials) and the formation of gate spacers, thereby forming gate stacks 212 and Gate spacer 214.Referring to FIG. 2D, the fin 206 adjacent to the side of the gate stack 212 is etched at position 218. This etching leaves the channel region 216 under the gate stack 212.Referring to FIG. 
2E, the formation of the source or drain structure involves growing a lower source or drain material 220 and a capping semiconductor layer 222 (which may be grown in situ). Alternatively, the cap semiconductor layer 222 is not grown, and an exemplary resultant structure thereof will be described in connection with FIG. 2G'. In either case, in an embodiment, the source structure or the drain structure includes silicon, germanium, and boron. In an embodiment, the source structure or the drain structure is composed of silicon germanium doped with boron atoms during deposition (eg, in situ). In one such embodiment, during the in-situ deposition, boron atoms are activated as impurity atoms, for example, incorporated into the silicon germanium lattice by substitution. That is, the boron dopant that achieves a high concentration of activation during deposition is in contrast to the typical interstitial boron inclusions that require subsequent annealing to achieve bonding and activation.In an embodiment, the in-situ deposition of low-resistivity silicon germanium source or drain materials with activated boron dopants incorporated therein during deposition involves the use of silicon precursors, germanium precursors, and boron precursors. In one embodiment, the silicon precursor is such as but not necessarily limited to SiH4, Si2H6, CSiH6, C6H16Si, CH3SiH3, (Si(CH3)2)6, (Si(CH3)3)2, [(CH3)3C]2SiH2 , [(CH3)2N]2Si(CH3)2, [NH(C4H9)]2SiH2, C8H22N2Si, C8H23NSi2, C7H19NSi, Dichlorosilane (DCS), Trichlorosilane (TCS), SiCl4, CH3(CH2)3SiCl3, ( CH3)3SiNHSi(CH3)3, (CH3)3SiSi(CH3)2Cl, [ClSi(CH3)2]2, C2H6Cl2Si, C12H10Cl2Si, C2H5Cl3Si, CH3SiHCl2, CH3Cl3Si or SiBr4 precursor. In one embodiment, the germanium precursor is such as but not necessarily limited to GeH4, Ge2H6, GeCl4, GeBr4, GeI2, C16H36Ge, (CH3)4Ge, (CH3)3GeGe(CH3)3, [CH3(CH2)3]3GeH, A precursor of (C2H5)3GeH, (C6H5)3GeH, (CH3)3GeCl, (CH3)2GeCl2, C2H5GeCl3, (C6H5)3GeCl, (CH3)3GeBr or GeF4. In one embodiment, the boron precursor is such as, but not necessarily limited to, B2H6, B10H14, BBr3, BCl3, BF3, B2F4, C18F15B, B3Cl3H3N3, trimethylborane (TMB), triethylborane, B(CD3) 3. Precursors of C3H9B, C6H15B, C18H15B, C12H24B2O4, [(CH3)2CHO]3B, [(CH3)3CO]3B, C10H19BClNSi or [(CH3)2N]2BB[N(CH3)2]2. In a specific embodiment, BCl3 is used as a boron precursor, and the source or drain material is at a deposition temperature between 400-850 degrees Celsius (and in a specific embodiment at about 700 degrees Celsius) and silicon precursor, The germanium precursor is formed together, and hydrogen chloride (HCl) is used as the codeposition gas.Referring to FIG. 2F, an isolation material is formed on the source structure or the drain structure of FIG. 2E. Then, the isolation material is patterned and recessed to expose the source structure or the drain structure, and auxiliary spacers 226 and trenches 228 are formed. In one embodiment, the recessing of the isolation material is performed using an etching process that stops on the cap semiconductor layer 222 or partially enters the cap semiconductor layer 222 and stops, wherein, in the latter case Next, a patterned source or drain capping semiconductor layer 222' is formed. In another embodiment, when the capping semiconductor layer 222 is not implemented, the etching process stops on the source or drain material 220 or partially enters the source or drain material 220 and stops.2G, source or drain contact material deposition and patterning are performed to form a conductive contact 230. 
In an embodiment, the conductive contact 230 is on the capping semiconductor layer 222 or 222' of the first and second source or drain structures. In one such embodiment, the first and second conductive contacts 230 are in partial recesses in the capping semiconductor layer 222' of the first and second source or drain structures. It should be appreciated that although not depicted, back-end processing can then be performed on the structure of Figure 2G.Referring again to FIG. 2G, according to an embodiment of the present disclosure, the integrated circuit structure has fins (216 and the patterned portion of the substrate 202). The fin has a lower fin portion (the portion of 216 below the top surface of the isolation structure 210 and the patterned portion of 202) and an upper fin portion (the portion of 216 above the top surface of the isolation structure 210). ). The gate stack 212 is above the upper fin portion of the fin, and the gate stack 212 has opposite first and second sides. The first source structure or the drain structure includes an epitaxial structure embedded in the fin on the first side of the gate stack (for example, the left side of the gate stack 212). The second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack (for example, the right side of the gate stack 212). The epitaxial structure of the first and second source structures or drain structures includes a lower semiconductor layer 220 and a cap semiconductor layer 222' (or 222 in FIG. 2E if there is no recess). In an embodiment, the lower semiconductor layer 220 of each of the first and second source structures or the epitaxial structure of the drain structure includes silicon, germanium, and boron. The cap semiconductor layer 222' or 222 of the epitaxial structure of each of the first and second source structure or the drain structure has a germanium concentration greater than that of the lower semiconductor layer 220. The first and second source structures or drain structures have a resistivity less than or equal to 0.3 mOhm·cm.With regard to FIG. 2G, in an embodiment, the lower semiconductor layer 220 of each of the epitaxial structure of the first and second source structures or the drain structure has boron atoms in the range of 1E20 atoms/cm3-3E21 atoms/cm3 Concentration, and germanium concentration in the range of 10% to 85%. In an embodiment, the capping semiconductor layer 222' or 222 has a germanium concentration greater than 50%. In an embodiment, the cap semiconductor layer 222' or 222 is substantially composed of germanium.Regarding FIG. 2G, in the embodiment, the resistivity of the first and second source or drain structures is in the range of 0.1 mOhm·cm to 0.3 mOhm·cm. In one such embodiment, the first and second source or drain structures induce uniaxial compressive strain on the fin. In an embodiment, the lower semiconductor layer 220 of the first and second source structures or drain structures is adjacent to the isolation structure 210. In one such embodiment, the lower semiconductor layer 220 of the first and second source or drain structures has a lower surface under the upper surface of the isolation structure 210.In contrast to FIG. 2G, in FIG. 2G', an embodiment in which a capping semiconductor layer is not used is depicted. In particular, the source or drain structure only includes a single source or drain material 220'. 
The conductive contact 230 is on a single source or drain material 220' of the first and second source or drain structures. In one such embodiment, although not depicted, the first and second conductive contacts are partially recessed in the single source or drain material 220' of the first and second source or drain structures in. It should be appreciated that although not depicted, back-end processing can then be performed on the structure of Figure 2G'.2G', according to an embodiment of the present disclosure, the integrated circuit structure includes a fin (216 and a patterned portion of the substrate 202) having a lower fin portion (on top of the isolation structure 210) The portion of 216 below the surface and the patterned portion of 202) and the upper fin portion (the portion of 216 above the top surface of the isolation structure 210). The gate stack 212 is above the upper fin portion of the fin, and the gate stack 212 has opposite first and second sides. The first source structure or the drain structure includes an epitaxial structure (for example, the left side 220') embedded in the fin on the first side of the gate stack 212. The second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack 212 (for example, the right side 220'). In an embodiment, each epitaxial structure of the first and second source structures or drain structures includes silicon, germanium, and boron, wherein the atomic concentration of boron is in the range of 1E20 atoms/cm3-3E21 atoms/cm3, and The germanium concentration is in the range of 10% to 85%, and the first and second source or drain structures have a resistivity less than or equal to 0.3 mOhm·cm.Regarding FIG. 2G', in the embodiment, the resistivity of the first and second source or drain structures is in the range of 0.1 mOhm·cm to 0.3 mOhm·cm. In one such embodiment, the first and second source or drain structures induce uniaxial compressive strain on the fin. In an embodiment, the epitaxial structure 220' of the first and second source or drain structures is adjacent to the isolation structure 210. In one such embodiment, the epitaxial structure 220' of the first and second source or drain structures has a lower surface under the upper surface of the isolation structure 210.In contrast to FIGS. 2G and 2G', in FIG. 2G", an embodiment in which a cap semiconductor layer is formed after forming the auxiliary spacer 226 is depicted. In particular, the first and second source structures or drains The epitaxial structure of the structure includes a cap semiconductor layer 225 on the lower semiconductor layer 220". The conductive contact 230 is on the cap semiconductor layer 225 of the first and second source or drain structures. It should be appreciated that although not depicted, back-end processing can then be performed on the structure of Figure 2G".2G" again, according to an embodiment of the present disclosure, the integrated circuit structure includes a fin (216 and a patterned portion of the substrate 202) with a lower fin portion (on the top surface of the isolation structure 210). The lower part 216 and the patterned part 202) and the upper fin part (the part 216 above the top surface of the isolation structure 210). The gate stack 212 is above the upper fin part of the fin, The gate stack 212 has opposite first and second sides. 
The first source structure or the drain structure includes an epitaxial structure embedded in a fin on the first side of the gate stack, the epitaxial structure having a lower portion Semiconductor layer (220" on the left) and cap semiconductor layer (225 on the left). The second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack. The epitaxial structure has a lower semiconductor layer (220" on the right) and a cap semiconductor layer (220" on the right). 225). The second source structure or drain structure includes a lower epitaxial source structure or drain structure embedded in the fin on the second side of the gate stack 212 (for example, the right side 220"). The first and second source or drain structures include a capping semiconductor layer 225 confined between the dielectric spacers 226 of the conductive contact 230. In an embodiment, the lower semiconductor layer of each of the epitaxial structure of the first and second source or drain structures includes silicon, germanium, and boron, and each of the first and second source or drain structures The cap semiconductor layer of one epitaxial structure has a germanium concentration greater than that of the lower semiconductor layer, and the first and second source structures or drain structures have a resistivity less than or equal to 0.3 mOhm·cm.In an embodiment, referring again to FIG. 2G", the first conductive contact (left 230) is on the cap semiconductor layer (left 225) of the first source structure or drain structure. The second conductive contact (right Side 230) is on the capping semiconductor layer (right side 225) of the second source structure or drain structure. The first dielectric spacer (left side 226) is along the sidewall of the first conductive contact (left side 230), And the capping semiconductor layer (left 225) of the first source structure or drain structure is limited between the first dielectric spacer (left 226). The second dielectric spacer (right 226) is in contact with the second conductive The sidewall of the part (right side 230), and the second source structure or the capping semiconductor layer of the drain structure (right side 225) is confined between the second dielectric spacers (right side 226). Not depicted In one embodiment, the cap semiconductor layer 225 is in a partial recess in the first and second lower semiconductor layers 220". In another embodiment, as depicted in the figure, the first and second lower semiconductor layers 220" are not recessed.Regarding FIG. 2G", in an embodiment, the lower semiconductor layer 220" of each of the epitaxial structure of the first and second source structures or the drain structure has a thickness in the range of 1E20 atoms/cm3-3E21 atoms/cm3 The concentration of boron atoms and the concentration of germanium in the range of 10% to 85%. In an embodiment, the capping semiconductor layer 225 has a germanium concentration greater than 60%. In an embodiment, the cap semiconductor layer 225 is substantially composed of germanium.Regarding Figure 2G", in an embodiment, the resistivity of the first and second source or drain structures is in the range of 0.1 mOhm·cm to 0.3 mOhm·cm. In one such embodiment, the first and The second source structure or the drain structure induces uniaxial compression strain on the fin. In an embodiment, the lower semiconductor layer 220" of the first and second source or drain structures is adjacent to the isolation structure 210. 
In one such embodiment, the lower semiconductor layer 220" of the first and second source or drain structures has a lower surface under the upper surface of the isolation structure 210.In another aspect, FIG. 3A shows a plan view of a plurality of gate lines above a pair of semiconductor fins according to another embodiment of the present disclosure.Referring to FIG. 3A, a plurality of effective gate lines 304 are formed above the plurality of semiconductor fins 300. The dummy gate line 306 is at the end of the plurality of semiconductor fins 300. The space 308 between the gate lines 304/306 is a trench contact that can be positioned to provide conductive contacts to the source or drain regions (for example, the source or drain regions 351, 352, 353, and 354) s position. In an embodiment, the pattern of the plurality of gate lines 304/306 or the pattern of the plurality of semiconductor fins 300 is depicted as a grid structure. In one embodiment, the grid-like pattern includes a pattern of a plurality of semiconductor fins 300 and/or a plurality of gate lines 304/306 spaced at a constant pitch and having a constant width or both.Fig. 3B shows a cross-sectional view taken along the a-a' axis of Fig. 3A according to an embodiment of the present disclosure.3B, a plurality of effective gate lines 364 are formed above the semiconductor fin 362 formed on the substrate 360. The dummy gate line 366 is at the end of the semiconductor fin 362. The dielectric layer 370 is outside the dummy gate line 366. The trench contact material 397 is between the effective gate lines 364 and between the dummy gate lines 366 and the effective gate lines 364. The embedded lower source or drain structure 368 and the corresponding cap semiconductor layer 369 are between the effective gate lines 364 and between the dummy gate lines 366 and the effective gate lines 364 in the semiconductor fin 362. The embedded lower source or drain structure 368 and the corresponding source or drain capping semiconductor layer 369 may be as described in connection with the source or drain structure of FIG. 2G. Alternatively, a source structure or a drain structure such as the source structure or the drain structure described in connection with FIG. 2G' and FIG. 2G" may be used.The effective gate line 364 includes a gate dielectric structure 398/399, a work function gate electrode portion 374 and a filled gate electrode portion 376, and a dielectric capping layer 378. The dielectric spacer 380 serves as a line of the effective gate line 364 and the dummy gate line 366.In another aspect, a trench contact structure for the source region or the drain region, for example, is described. In an example, FIG. 4 shows a cross-sectional view of an integrated circuit structure having trench contacts for PMOS devices according to another embodiment of the present disclosure.Referring to FIG. 4, the integrated circuit structure 450 includes a fin 452, for example, a silicon germanium fin. The gate dielectric layer 454 is above the fin 452. The gate electrode 456 is above the gate dielectric layer 454. In an embodiment, the gate electrode 456 includes a conformal conductive layer 458 and a conductive filler 460. In an embodiment, the dielectric cap 462 is above the gate electrode 456 and above the gate dielectric layer 454. The gate electrode has a first side 456A and a second side 456B opposite to the first side 456A. The dielectric spacer is along the sidewall of the gate electrode 456. 
In one embodiment, the gate dielectric layer 454 is also located between the first one of the dielectric spacers 463 and the first side 456A of the gate electrode 456, and between the second one of the dielectric spacers 463 and the gate electrode 456. Between the second side 456B, as depicted in the figure. In an embodiment, although not depicted, a thin oxide layer such as a thermal or chemical silicon oxide or silicon dioxide layer is between the fin 452 and the gate dielectric layer 454.The first 464 and second 466 semiconductor source regions or drain regions are adjacent to the first side 456A and the second side 456B of the gate electrode 456, respectively. In one embodiment, the first 464 and second 466 semiconductor source or drain regions include embedded epitaxial lower regions and the corresponding source or drain capping semiconductor layer 495 or 497, and the first 464 and the second A 466 semiconductor source region or drain region is formed in the recesses 465 and 467 of the fin 452, respectively, as depicted. The embedded lower source structure or drain structure and the corresponding cap semiconductor layer 495 or 497 may be as described in connection with the source structure or drain structure of FIG. 2G. Alternatively, a source structure or a drain structure such as the source structure or the drain structure depicted in connection with FIG. 2G' and FIG. 2G" may be used.The first 468 and second 470 trench contact structures are respectively located above the first 464 and second 466 semiconductor source regions or drain regions adjacent to the first side 456A and the second side 456B of the gate electrode 456. Both the first 468 and the second 470 trench contact structure include a U-shaped metal layer 472 and a T-shaped metal layer 474 on and above the entire U-shaped metal layer 472. In one embodiment, the U-shaped metal layer 472 and the T-shaped metal layer 474 are different in composition. In one such embodiment, the U-shaped metal layer 472 includes titanium, and the T-shaped metal layer 474 includes cobalt. In one embodiment, both the first 468 and the second 470 trench contact structure further include a third metal layer 476 on the T-shaped metal layer 474. In one such embodiment, the third metal layer 476 and the U-shaped metal layer 472 have the same composition. In a particular embodiment, the third metal layer 476 and the U-shaped metal layer 472 include titanium, and the T-shaped metal layer 474 includes cobalt.The first trench contact via 478 is electrically connected to the first trench contact 468. In a particular embodiment, the first trench contact via 478 is on and coupled to the third metal layer 476 of the first trench contact 468. The first trench contact via 478 is also over and in contact with a part of one of the dielectric spacers 463 and over and in contact with a part of the dielectric cap 462. The second trench contact via 480 is electrically connected to the second trench contact 470. In a specific embodiment, the second trench contact via 480 is on and coupled to the third metal layer 476 of the second trench contact 470. The second trench contact via 480 is also over and in contact with a part of the other one of the dielectric spacers 463 and over and in contact with the other part of the dielectric cap 462.In an embodiment, the metal silicide layer 482 is directly located between the first 468 and second 470 trench contact structure and the first 464 and second 466 semiconductor source regions or drain regions, respectively. 
In an embodiment, the first 464 and second 466 semiconductor source or drain regions are the first and second P-type semiconductor source or drain regions.One or more embodiments described herein relate to the use of metal chemical vapor deposition for wraparound semiconductor contacts. Embodiments may be applicable to or include one or more of chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), atomic layer deposition (ALD), conductive contact production, or thin film. Certain embodiments may include using low-temperature (for example, less than 500 degrees Celsius or within a range of 400-500 degrees Celsius) chemical vapor deposition of the contact metal to produce a metal layer such as titanium to provide a conformal source or drain contact. Such conformal source or drain contact implementations can improve three-dimensional (3D) transistor complementary metal oxide semiconductor (CMOS) performance.To provide context, sputtering can be used to deposit metal onto the semiconductor contact layer. Sputtering is a line of sight process, so it is not very suitable for 3D transistor production. Known sputtering solutions have poor or incomplete metal-semiconductor junctions on the device contact surface at an angle relative to the incidence of deposition. According to one or more embodiments of the present disclosure, a low-temperature chemical vapor deposition process is implemented to make the contact metal to provide three-dimensional conformality and maximize the contact area of the metal semiconductor junction. The resulting larger contact area can reduce junction resistance. Embodiments may include deposition on a semiconductor surface having a non-flat topology, wherein the topological structure having a certain area refers to the surface shape and the feature itself, and the non-flat topology includes the non-flat surface shape and Features or parts of surface shapes and features, that is, surface shapes and features that are not completely straight. In an embodiment, the deposition is on the semiconductor surface of the source structure or the drain structure with a relatively high germanium content.The embodiments described herein may include the fabrication of wraparound contact structures. In one such embodiment, a pure metal conformal deposition on the source-drain contact of a transistor by chemical vapor deposition, plasma enhanced chemical vapor deposition, atomic layer deposition, or plasma enhanced atomic layer deposition is described. use. Such conformal deposition can be used to increase the usable area of metal-semiconductor contacts and reduce resistance, thereby improving the performance of transistor devices. In an embodiment, a relatively low deposition temperature leads to a minimized junction resistance per unit area.It should be recognized that an integrated solution involving the metal layer deposition process described herein can be used to fabricate a wide variety of integrated circuit structures. According to an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes providing a substrate in a chemical vapor deposition (CVD) chamber with an RF source, the substrate having features thereon. The method includes reacting titanium tetrachloride (TiCl4) and hydrogen (H2) to form a titanium (Ti) layer on the features of the substrate. In an embodiment, the titanium layer has a total atomic composition including 98% or more of titanium and 0.5% to 2% of chlorine. 
In an alternative embodiment, a similar process is used to produce a high-purity metal layer of zirconium (Zr), hafnium (Hf), tantalum (Ta), niobium (Nb) or vanadium (V).According to an embodiment of the present disclosure, the substrate is characterized by exposing the source or drain contact trench of the semiconductor source structure or drain structure. The titanium layer (or other high-purity metal layer) is a conductive contact layer for semiconductor source structure or drain structure. Hereinafter, an exemplary embodiment of such an implementation will be described in connection with FIG. 5.Figure 5 shows a cross-sectional view of an integrated circuit structure having conductive contacts on raised source or drain regions according to an embodiment of the present disclosure.Referring to FIG. 5, the semiconductor structure 550 includes a gate structure 552 on a substrate 554. The gate structure 552 includes a gate dielectric layer 552A, a work function layer 552B, and a gate filling 552C. The source region 558 and the drain region 560 are on opposite sides of the gate structure 552. The source or drain contact 562 is electrically connected to the source region 558 and the drain region 560 and is separated from the gate structure 552 by one or both of the interlayer dielectric layer 564 or the gate dielectric spacer 566. The source region 558 and the drain region 560 include an epitaxial or embedded lower material region formed in the etched area of the substrate 554 and the corresponding source or drain capping semiconductor layer 502. The embedded lower source structure or drain structure and the corresponding cap semiconductor layer 502 may be as described in connection with the source structure or drain structure of FIG. 2G. Alternatively, a source structure or a drain structure such as the source structure or the drain structure described in connection with FIG. 2G' and FIG. 2G" may be used.In an embodiment, the source or drain contact 562 includes a high-purity metal layer 562A (such as described above) and a conductive trench filling material 562B. In one embodiment, the high-purity metal layer 562A has a total atomic composition including 98% or more of titanium. In one such embodiment, the total atomic composition of the high-purity metal layer 562A also includes 0.5%-2% chlorine. In an embodiment, the high-purity metal layer 562A has a thickness variation of 30% or less. In an embodiment, the conductive trench filling material 562B is composed of a conductive material such as but not limited to Cu, Al, W, Co or alloys thereof.In another aspect, the COAG structure and process are described. One or more embodiments of the present disclosure relate to a semiconductor structure or device having one or more gate contact structure (e.g., gate electrode) disposed over an effective portion of the gate electrode of the semiconductor structure or device As a gate contact via). One or more embodiments of the present disclosure relate to a method of fabricating a semiconductor structure or device, the semiconductor structure or device having one or more gate contact structures formed over an effective portion of the gate electrode of the semiconductor structure or device . The method described herein can be used to reduce the standard cell area by realizing the formation of gate contacts above the effective gate region. 
In one or more embodiments, the gate contact structure made to contact the gate electrode is self-aligned via the structure.In an embodiment, the integrated circuit structure, semiconductor structure, or device is a non-planar device, such as but not limited to a fin-FET or a tri-gate device. In such an embodiment, the corresponding semiconducting channel region is composed of or formed in a three-dimensional body. In one such embodiment, the gate electrode stack of the gate line at least surrounds the top surface of the three-dimensional body and a pair of sidewalls. In another embodiment, for example, in a gate-all-around device, at least the channel region is made as a discrete three-dimensional body. In one such embodiment, each gate electrode stack of the plurality of gate lines completely surrounds the channel region.More generally, one or more embodiments relate to methods for allowing gate contact vias to directly land on the effective transistor gate and structures formed therefrom. This approach can eliminate the need to extend the gate line on the isolation portion for contact purposes. This approach can also eliminate the need for a separate gate contact (GCN) layer for conducting signals from the gate line or structure. In an embodiment, the elimination of the above-mentioned features is achieved by recessing the contact metal in the trench contact (TCN) and introducing additional dielectric material in the process flow (for example, TILA). This additional dielectric material is included as a trench contact dielectric cap layer, and its etching characteristics are different from those that have been used in the gate aligned contact process (GAP) processing scheme (eg, GILA) for trench contact pairing. The etch characteristics of the cap layer of the quasi-gate dielectric material.In an embodiment, providing an integrated circuit structure involves forming a contact pattern that is substantially ideally aligned with an existing gate pattern, while eliminating the use of photolithography operations with extremely strict registration budgets. In one such embodiment, this approach enables the use of inherently highly selective wet etching (for example, contrast dry or plasma etching) to generate contact openings. In an embodiment, the contact pattern may be formed by using an existing gate pattern combined with a contact plug photolithography operation. In one such embodiment, the method described can eliminate the need for other rigorous photolithography operations (as used in other methods) for generating contact patterns. In an embodiment, the trench contact grid is not individually patterned, but is formed between multiple (gate) lines. For example, in one such embodiment, the trench contact grid is formed after the gate grid is patterned but before the gate grid is cut.In addition, the gate stack structure may be fabricated through a replacement gate process. In such a scheme, permanent gate electrode materials can be removed and replaced with dummy gate materials such as polysilicon or silicon nitride columnar materials. In one such embodiment, it is also possible to form a permanent gate dielectric layer in this process, as opposed to performing the formation of this layer by a process from earlier. In an embodiment, the dummy gate is removed by a dry etching or wet etching process. In one embodiment, the dummy gate is made of polysilicon or amorphous silicon, and is removed by a dry etching process including SF6. 
In another embodiment, the dummy gate is made of polysilicon or amorphous silicon, and is removed by a wet etching process including water-containing NH4OH or tetramethylammonium hydroxide. In one embodiment, the dummy gate is composed of silicon nitride and is removed by wet etching including phosphoric acid containing water.In the embodiment, one or more of the methods described herein mainly envisage combining the dummy and replacement gate process with the dummy and replacement contact process to obtain an integrated circuit structure. In one such embodiment, the replacement contact process is performed after the replacement gate process to allow high temperature annealing of at least a portion of the permanent gate stack. For example, in a specific such embodiment, for example, after the gate dielectric layer is formed, annealing of at least a portion of the permanent gate structure is performed at a temperature higher than about 600 degrees Celsius. The annealing is performed before the permanent contact is formed.It should be recognized that a differentiated structural relationship between the insulating gate capping layer and the insulating trench contact capping layer can be established. As an example, FIGS. 6A and 6B show cross-sectional views of various integrated circuit structures according to embodiments of the present disclosure, each of which has trench contacts including an overlying insulating cap layer and has A gate stack covered with an insulating cap layer.6A and 6B, the integrated circuit structure 600A and the integrated circuit structure 600B respectively include a fin 602, for example, a silicon germanium fin. Although depicted as a cross-sectional view, it should be appreciated that the fin 602 has a top 602A and side walls (in and out of the page with the viewing angle shown). The first 604 and second 606 gate dielectric layers are above the top 602A of the fin 602 and are adjacent to the sidewalls of the fin 602 in the lateral direction. The first 608 and second 610 gate electrodes are located above the first 604 and second 606 gate dielectric layers, respectively. The first 604 and second 606 gate dielectric layers are above the top 602A of the fin 602 and are laterally connected to the fin 602. The side walls of the object 602 are adjacent. Both the first 608 and second 610 gate electrodes include a conformal conductive layer 609A (for example, a work function setting layer) and a conductive filling material 609B on the conformal conductive layer 609A. Both the first 608 and second 610 gate electrodes have a first side 612 and a second side 614 opposite to the first side 612. Both the first 608 and second 610 gate electrodes also have an insulating cap 616 which has a top surface 618.The first dielectric spacer 620 is adjacent to the first side 612 of the first gate electrode 608. The second dielectric spacer 622 is adjacent to the second side 614 of the second gate electrode 610. The semiconductor source or drain region 624 is adjacent to the first 620 and second 622 dielectric spacers. The trench contact structure 626 is above the semiconductor source or drain region 624, adjacent to the first 620 and second 622 dielectric spacers. In an embodiment, the semiconductor source or drain region 624 has a structure such as that described above in connection with FIG. 2G, FIG. 2G', FIG. 2G" and other embodiments described herein.The trench contact structure 626 includes an insulating cap 628 on the conductive structure 630. 
The insulating cap 628 of the trench contact structure 626 has a top surface 629 that is substantially coplanar with the top surface 618 of the insulating cap 616 of the first 608 and second 610 gate electrodes. In an embodiment, the insulating cap 628 of the trench contact structure 626 extends laterally into the recesses 632 in the first 620 and second 622 dielectric spacers. In such an embodiment, the insulating cap 628 of the trench contact structure 626 overhangs the conductive structure 630 of the trench contact structure 626. However, in other embodiments, the insulating cap 628 of the trench contact structure 626 does not extend laterally into the recesses 632 in the first 620 and second 622 dielectric spacers, and therefore does not overhang to the trench contacts. The part structure 626 is outside the conductive structure 630.It should be appreciated that the conductive structure 630 of the trench contact structure 626 may not be rectangular, as shown in FIGS. 6A and 6B. For example, the conductive structure 630 of the trench contact structure 626 may have a cross-sectional geometric structure similar to or the same as the geometric structure shown in the conductive structure 630A shown in the projection view of FIG. 6A.In an embodiment, the insulating cap 628 of the trench contact structure 626 has a composition different from that of the insulating cap 616 of the first 608 and second 610 gate electrodes. In one such embodiment, the insulating cap 628 of the trench contact structure 626 includes a carbide material, for example, a silicon carbide material. The insulating cap 616 of the first 608 and second 610 gate electrodes includes a nitride material, for example, a silicon nitride material.In an embodiment, as shown in FIG. 6A, the insulating caps 616 of the first 608 and second 610 gate electrodes both have a bottom surface 617A under the bottom surface 628A of the insulating cap 628 of the trench contact structure 626. In another embodiment, as shown in FIG. 6B, the insulating caps 616 of the first 608 and second 610 gate electrodes both have a bottom surface 628B that is substantially coplanar with the bottom surface 628B of the insulating cap 628 of the trench contact structure 626. Bottom surface 617B. In another embodiment, although not depicted, the insulating caps 616 of the first 608 and second 610 gate electrodes both have a bottom surface above the bottom surface of the insulating cap 628 of the trench contact structure 626.In an embodiment, the conductive structure 630 of the trench contact structure 626 includes a U-shaped metal layer 634, a T-shaped metal layer 636 on and above the entire U-shaped metal layer 634, and a third T-shaped metal layer 636. Metal layer 638. The insulating cap 628 of the trench contact structure 626 is on the third metal layer 638. In one such embodiment, the third metal layer 638 and the U-shaped metal layer 634 include titanium, and the T-shaped metal layer 636 includes cobalt. In certain such embodiments, the T-shaped metal layer 636 also includes carbon.In an embodiment, the metal silicide layer 640 is directly between the conductive structure 630 of the trench contact structure 626 and the semiconductor source region or the drain region 624. In one such embodiment, the metal silicide layer 640 includes titanium and silicon. 
In certain such embodiments, the semiconductor source or drain region 624 is a P-type semiconductor source or drain region.As described throughout this application, the substrate may be composed of a semiconductor material that can withstand the manufacturing process and where electric charges can migrate. In an embodiment, the substrate described herein is a bulk substrate composed of a layer of crystalline silicon, silicon/germanium or germanium, which is doped with charge carriers such as but not limited to phosphorus, arsenic or boron or a combination thereof to Form the active area. In one embodiment, the concentration of silicon atoms in such a bulk substrate is greater than 97%. In another embodiment, the bulk substrate is composed of an epitaxial layer grown on top of a distinct crystalline substrate, for example, a silicon epitaxial layer grown on top of a boron-doped bulk silicon single crystal substrate constitute. Alternatively, the bulk substrate may be composed of III-V group materials. In an embodiment, the bulk substrate is made of, but not limited to, gallium nitride, gallium phosphide, gallium arsenide, indium phosphide, indium antimonide, gallium indium arsenide, gallium aluminum arsenide, gallium indium phosphide, or a combination thereof Made of III-V materials. In one embodiment, the bulk substrate is composed of III-V group materials, and the charge carrier dopant impurity atoms are atoms such as, but not limited to, carbon, silicon, germanium, oxygen, sulfur, selenium, or tellurium.As described throughout this application, isolation regions such as shallow trench isolation regions or sub-fin isolation regions may be finally electrically isolated or facilitated by a portion adapted to isolate the permanent gate structure from the underlying bulk substrate, or The material is suitable for isolating the active region (for example, the isolation fin active region) formed in the lower body substrate. For example, in one embodiment, the isolation region is composed of one or more layers of dielectric material such as but not limited to silicon dioxide, silicon oxynitride, silicon nitride, carbon-doped silicon nitride, or a combination thereof .As described throughout this application, the gate line or gate structure may be composed of a gate electrode stack including a gate dielectric layer and a gate electrode layer. In an embodiment, the gate electrode of the gate electrode stack is composed of a metal gate and a gate dielectric layer composed of a high-k material. For example, in one embodiment, the gate dielectric layer is made of, but not limited to, hafnium oxide, hafnium oxynitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, Strontium titanate, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, or a combination of materials. In addition, a part of the gate dielectric layer may include a native oxide layer formed from several layers on top of the semiconductor substrate. In an embodiment, the gate dielectric layer is composed of a top high-k part and a lower part composed of an oxide of a semiconductor material. In one embodiment, the gate dielectric layer is composed of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxynitride. 
In some embodiments, a portion of the gate dielectric is a "U"-shaped structure including a bottom portion that is substantially parallel to the surface of the substrate and two portions that are substantially perpendicular to the top surface of the substrate. The side wall part.In one embodiment, the gate electrode is composed of a metal layer, such as but not limited to metal nitride, metal carbide, metal silicide, metal aluminide, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium , Platinum, cobalt, nickel or conductive metal oxides. In a specific embodiment, the gate electrode is composed of a non-work function setting filling material formed on the metal work function setting layer. Depending on whether the transistor is a PMOS transistor or an NMOS transistor, the gate electrode layer may be composed of P-type work function metal or N-type work function metal. In some embodiments, the gate electrode layer may be composed of a stack of two or more metal layers, where one or more metal layers are work function metal layers, and at least one metal layer is a conductive filling layer. For PMOS transistors, metals that can be used for the gate electrode include but are not limited to ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides (for example, ruthenium oxide). The P-type metal layer will be able to form a PMOS gate electrode with a work function between about 4.9 eV and about 5.2 eV. For NMOS transistors, metals that can be used for gate electrodes include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, for example, hafnium carbide, zirconium carbide, titanium carbide, carbide Tantalum and aluminum carbide. The N-type metal layer will be able to form an NMOS gate electrode with a work function between about 3.9 eV and about 4.2 eV. In some embodiments, the gate electrode may be composed of a "U"-shaped structure including a bottom portion substantially parallel to the surface of the substrate and two sides substantially perpendicular to the top surface of the substrate. Wall part. In another embodiment, at least one of the metal layers forming the gate electrode may simply be a planar layer substantially parallel to the top surface of the substrate, and does not include sidewalls substantially perpendicular to the top surface of the substrate section. In other embodiments of the present disclosure, the gate electrode may be composed of a U-shaped structure and a combination of planar and non-U-shaped structures. For example, the gate electrode may be composed of one or more U-shaped metal layers formed on top of one or more flat, non-U-shaped layers.As described throughout this application, spacers associated with gate lines or electrode stacks may be adapted to ultimately electrically isolate the permanent gate structure from adjacent conductive contacts (e.g., self-aligned contacts) ( Or promote the isolation). For example, in one embodiment, the spacer is composed of a dielectric material such as but not limited to silicon dioxide, silicon oxynitride, silicon nitride, or carbon-doped silicon nitride.In an embodiment, the approach described herein may involve forming a contact pattern that aligns very well with the existing gate pattern, while eliminating the use of photolithography operations with extremely strict registration budgets. 
In one such embodiment, this approach can use inherently highly selective wet etching (for example, contrast dry or plasma etching) to generate contact openings. In an embodiment, the contact pattern may be formed by using an existing gate pattern in combination with a contact plug photolithography operation. In one such embodiment, this approach can eliminate the need for other rigorous photolithography operations (as used in other approaches) for generating contact patterns. In an embodiment, the trench contact grid is not individually patterned, but is formed between multiple (gate) lines. For example, in one such embodiment, the trench contact grid is formed after the gate grid is patterned but before the gate grid is cut.The pitch dividing process and patterning scheme may be implemented to implement the embodiments described herein, or may be included as part of the embodiments described herein. Pitch division patterning usually means that the pitch is halved, the pitch is quartered, and so on. The pitch division scheme can be applied to FEOL processing, BEOL processing, or both FEOL (device) and BEOL (metallization) processing. According to one or more embodiments described herein, photolithography is first performed to print unidirectional lines (for example, strictly unidirectional or mainly unidirectional) at a predetermined pitch. Then, the pitch division process, which is a technique for increasing the line density, is implemented.In an embodiment, the term "grid structure" for fins, gate lines, metal lines, ILD lines, or hard mask lines is used herein to refer to a closely spaced grid structure. In one such embodiment, the tight pitch cannot be achieved directly by the selected photolithography. For example, a pattern based on the selected photolithography may be formed first, but the pitch is halved by patterning using a spacer mask, which is known in the art. Furthermore, the initial pitch can be divided into four by the second round of spacer mask patterning. Accordingly, the grid-shaped pattern described herein may have metal lines, ILD lines, or hard mask lines spaced at substantially uniform intervals and having substantially uniform widths. For example, in some embodiments, the pitch change will be within ten percent and the width change will be within ten percent; and in some embodiments, the pitch change will be within five percent and the width change will be within 100 percent. Within five quarters. The pattern can be made by halving or quartering the pitch or by dividing the pitch by other means. In an embodiment, the grid may not be of a single pitch.In an embodiment, as used throughout this specification, the interlayer dielectric (ILD) material is composed of or includes a layer of dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, silicon oxide (e.g., silicon dioxide (SiO2)), doped silicon oxide, fluorinated silicon oxide, carbon-doped silicon oxide, Various low-k dielectric materials and combinations thereof are known in the art. For example, the interlayer dielectric material may be formed by a technique such as chemical vapor deposition (CVD), physical vapor deposition (PDV), or other deposition methods.In the embodiment, as used throughout this specification, the metal line or interconnect line material (and via material) is composed of one or more metals or other conductive structures. 
Common examples are the use of copper wires and structures that may or may not include a barrier layer between the copper and the surrounding ILD material. As used herein, the term "metal" includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnection line may include a barrier layer (for example, a layer including one or more of Ta, TaN, Ti, or TiN), a stack of different metals or alloys, and the like. Thus, the interconnection line may be a single material layer or may be formed of several layers (including a conductive liner layer and a filling layer). Any suitable deposition process (for example, electroplating, chemical vapor deposition, or physical vapor deposition) may be used to form the interconnection lines. In an embodiment, the interconnection line is composed of conductive materials such as but not limited to Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au or alloys thereof. Interconnect lines are sometimes referred to in the art as traces, leads, wires, metals, or simply interconnects.In the embodiment, again as used throughout this specification, the hard mask material is composed of a dielectric material different from the interlayer dielectric material. In one embodiment, different hard mask materials may be used in different regions in order to provide different growth or etch selectivities relative to each other and relative to the underlying dielectric and metal layers. In some embodiments, the hard mask layer includes a silicon nitride (eg, silicon nitride) layer or a silicon oxide layer, or both, or a combination thereof. Other suitable materials may include carbon-based materials. In another embodiment, the hard mask material includes metal species. For example, the hard mask or other overlying material may include a nitride (e.g., titanium nitride) layer of titanium or other metals. Potentially, smaller amounts of other materials, such as oxygen, can be included in one or more of these layers. Alternatively, other hard mask layers known in the art may be used according to specific implementations. The hard mask layer can be formed by CVD, PVD or other deposition methods.In the embodiments, as used throughout this specification, 193nm immersion lithography (i193), extreme ultraviolet (EUV) lithography, or electron beam direct writing (EBDW) lithography, etc. are used to perform photolithography operations. Either positive or negative resist can be used. In one embodiment, the photolithography mask is a three-layer mask composed of a topography mask part, an anti-reflective coating (ARC) layer and a photoresist layer. In certain such embodiments, the topography mask portion is a carbon hard mask (CHM) layer, and the anti-reflective coating layer is a silicon ARC layer.It should be recognized that all aspects of the process described above do not need to be practiced to fall within the spirit and scope of the embodiments of the present disclosure. For example, in one embodiment, there is no need to form a dummy gate before making the gate contact over the active portion of the gate stack. The gate stack described above may actually be a permanent gate stack when initially formed. Moreover, the process described herein can be used to fabricate one or more semiconductor devices. The semiconductor device may be a transistor or the like. For example, in the embodiment, the semiconductor device is a metal oxide semiconductor (MOS) transistor for logic or memory, or a bipolar transistor. 
Moreover, in an embodiment, the semiconductor device has a three-dimensional architecture, for example, a tri-gate device, an independent access dual-gate device, a FIN-FET, a nanowire device, or a nanoribbon device. One or more embodiments can be particularly used to fabricate semiconductor devices at a 10 nanometer (10 nm) technology node or a sub-10 nanometer (10 nm) technology node.Additional or intermediate operations for FEOL layer or structure fabrication can include standard microelectronic fabrication processes (e.g., photolithography, etching, thin film deposition, planarization (e.g., chemical mechanical polishing (CMP)), diffusion, metering, sacrificial layer Use, use of an etch stop layer, use of a planarization stop layer, or any other process associated with the production of microelectronic components. Moreover, it should be recognized that the process operations described for the foregoing process flow can be practiced in an alternative order, and No need to perform every operation, or additional operations can be performed, or both.It should be recognized that in the above exemplary FEOL embodiment, in the embodiment, the 10 nanometer node processing or the sub-10 nanometer node processing is directly implemented into the production scheme, and the resulting structure is used as the technical driving force. In other embodiments, FEOL considerations may be driven by BEOL 10 nanometer or sub-10 nanometer processing requirements. For example, the material selection and layout of FEOL layers and devices may need to be adapted to BEOL processing. In one such embodiment, the material options and gate stack architecture are selected to accommodate the high-density metallization of the BEOL layer, thereby, for example, reducing the coupling of high-density metallization formed in the FEOL layer but through the BEOL layer Fringe capacitance in the transistor structure together.The embodiments disclosed herein can be used to manufacture a wide range of different types of integrated circuits or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, microcontrollers, and the like. In other embodiments, semiconductor memories can be manufactured. In addition, the integrated circuits or other microelectronic devices can be used in a wide variety of electronic devices known in the art. For example, in computer systems (e.g., desktops, laptops, servers), cellular phones, personal electronic devices, and so on. The integrated circuit can be coupled with the bus and other components in the system. For example, the processor may be coupled to the memory, chipset, etc. through one or more buses. Potentially, each of the processor, memory, and chipset is manufactured using the methods disclosed herein.FIG. 7 shows a computing device 700 according to an embodiment of the present disclosure. The computing device 700 houses a board 702. The board 702 may include several components, including but not limited to a processor 704 and at least one communication chip 706. The processor 704 is physically and electrically coupled to the board 702. In some embodiments, the at least one communication chip 706 may also be physically and electrically coupled to the board 702. 
In other embodiments, the communication chip 706 is part of the processor 704.Depending on its application, the computing device 700 may include other components that may or may not be physically and electrically coupled to the board 702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, graphics processor, digital signal processor, cryptographic processor, chipset, antenna, Displays, touch screen displays, touch screen controllers, batteries, audio codecs, video codecs, power amplifiers, global positioning system (GPS) devices, compasses, accelerometers, gyroscopes, speakers, cameras, and mass storage devices (e.g. , Hard drive, compact disk (CD), digital versatile disk (DVD), etc.).The communication chip 706 can implement wireless communication for transmitting data from the computing device 700 and transmitting data to the computing device 700. The term "wireless" and its derivatives can be used to describe circuits, devices, systems, methods, technologies, communication channels, etc. that can transmit data through non-solid media using modulated electromagnetic radiation. The term does not imply that the related devices do not contain any leads, but in some embodiments they may not contain any leads. The communication chip 706 can implement any of many wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 series), WiMAX (IEEE 802.16 series), IEEE 802.20, Long Term Evolution (LTE), Ev-Fi DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, their derivatives and any other wireless protocols designated as 3G, 4G, 5G and higher generations. The computing device 700 may include multiple communication chips 706. For example, the first communication chip 706 may be dedicated to short-range wireless communication, such as Wi-Fi and Bluetooth, and the second communication chip 706 may be dedicated to long-range wireless communication, such as GPS, EDGE, GPRS, CDMA , WiMAX, LTE, Ev-DO and others.The processor 704 of the computing device 700 includes an integrated circuit die packaged in the processor 704. In some embodiments of the present disclosure, the integrated circuit die of the processor includes one or more structures, for example, an integrated circuit structure constructed according to the embodiments of the present disclosure. The term "processor" may refer to any device or part of a device that processes electronic data from a register or memory or both to transform the electronic data into other electronic data that can be stored in the register or memory or both.The communication chip 706 also includes an integrated circuit die packaged in the communication chip 706. According to another embodiment of the present disclosure, the integrated circuit die of the communication chip is constructed according to the embodiment of the present disclosure.In other embodiments, another component housed in the computing device 700 may include an integrated circuit die constructed in accordance with an embodiment of the present disclosure.In various embodiments, the computing device 700 may be a laptop computer, netbook, notebook, ultrabook, smart phone, tablet computer, personal digital assistant (PDA), ultra mobile PC, mobile phone, desktop computer, server, printer, Scanner, monitor, set-top box, entertainment control unit, digital camera, portable music player or digital video recorder. 
In other embodiments, the computing device 700 may be any other electronic device that processes data.Figure 8 shows an interpolator 800 that includes one or more embodiments of the present disclosure. The interposer 800 is an intermediate substrate for bridging the first substrate 802 to the second substrate 804. The first substrate 802 may be, for example, an integrated circuit die. The second substrate 804 may be, for example, a memory module, a computer motherboard, or another integrated circuit die. Generally speaking, the purpose of the interposer 800 is to extend the connection to a wider pitch or to reroute the connection to a different connection. For example, the interposer 800 may couple the integrated circuit die to a ball grid array (BGA) 806, which in turn may be coupled to the second substrate 804. In some embodiments, the first and second substrates 802/804 are attached to opposite sides of the interposer 800. In other embodiments, the first and second substrates 802/804 are attached to the same side of the interposer 800. And in other embodiments, three or more substrates are interconnected by the interposer 800.The interposer 800 may be formed of epoxy resin, glass fiber reinforced epoxy resin, ceramic material, or polymer material such as polyimide. In other embodiments, the interposer 800 may be formed of alternating rigid or flexible materials, which may include the same materials used in the semiconductor substrate described above, for example, silicon, germanium, and others. Group III-V and Group IV materials.The interposer 800 may include a metal interconnection 808 and a via 810 including but not limited to a through silicon via (TSV) 812. The interposer 800 may also include embedded devices 814, which include both passive devices and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices can also be formed on the interposer 800. According to an embodiment of the present disclosure, the device or process disclosed herein may be used in the production of the interposer 800 or the production of components included in the interposer 800.9 is an isometric view of a mobile computing platform 900 according to an embodiment of the present disclosure. The mobile computing platform 900 uses an integrated circuit manufactured according to one or more processes described herein or including one or more features described herein (IC).The mobile computing platform 900 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission. For example, the mobile computing platform 900 may be any one of a tablet computer, a smart phone, a laptop computer, etc., and may include a display screen 905, a chip-level (SoC) or package-level integrated system 910, and a battery 913. The display screen 905 is In the exemplary embodiment, it is a touch screen (capacitive, inductive, resistive, etc.). 
As shown in the figure, as shown in the figure, the higher the degree of integration achieved in the system 910 through higher transistor packaging density, the mobile computing platform 900 can be occupied by a battery 913 or a non-volatile memory (eg, a solid state drive) The larger the part, or the larger the number of transistor gates used to achieve improved platform functionality. Similarly, the greater the carrier mobility of each transistor in the system 910, the greater the functionality. As such, the technology described herein can achieve improvements in performance and form factor in the mobile computing platform 900.The integrated system 910 is further shown in the enlarged view 902. In an exemplary embodiment, the packaged device 977 includes at least one memory chip (e.g., RAM), or at least one processor chip (for example, RAM) made according to one or more processes described herein or including one or more features described herein ( For example, multi-core microprocessors and/or graphics processors). The packaged device 977 is also combined with a power management integrated circuit (PMIC) 915, an RF (wireless) integrated circuit (RFIC) 925 including a broadband RF (wireless) transmitter and/or receiver (for example, including a digital baseband, and an analog front-end module also One or more of the power amplifier on the transmission path and the low noise amplifier on the reception path and its controller 911 are coupled to the board 960 together. Functionally, the PMIC 915 performs battery power adjustment, DC to DC conversion, etc., and thus has an input terminal coupled to the battery 913, and an output terminal that provides current supply to all other functional modules. As further shown in the figure, in an exemplary embodiment, RFIC 925 has an output coupled to an antenna to provide for implementing any of many wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 series), WiMAX (IEEE 802.16 series), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, their derivatives and any Others are designated as 3G, 4G, 5G and higher generation wireless protocols. In alternative embodiments, each of these board-level modules may be integrated on a separate IC coupled to the packaging substrate of the package device 977, or integrated into a single IC coupled to the packaging substrate of the package device 977 ( SoC).In another aspect, semiconductor packages are used to protect integrated circuit (IC) chips or dies, and also provide electrical interfaces for the dies to external circuits. As the demand for smaller electronic devices increases, semiconductor packages are designed to be more compact and must support greater circuit density. In addition, the need for higher performance devices has led to the need for improved semiconductor packages that can achieve thin package outlines and achieve low total warpage compatible with subsequent assembly processes.In an embodiment, wire bonding with ceramic or organic packaging substrates is used. In another embodiment, a C4 process is used to mount the die to a ceramic or organic packaging substrate. In particular, C4 solder ball connections can be implemented to provide flip-chip interconnection between the semiconductor device and the substrate. 
Flip chip or controlled collapse chip connection (C4) is a type of mounting for semiconductor devices, such as integrated circuit (IC) chips, MEMS, or components, which uses solder bumps instead of wire bonding. The solder bumps are deposited on the C4 pads located on the top side of the substrate package. In order to mount the semiconductor device on the substrate, the semiconductor device is turned over so that the active side faces down above the mounting area. The semiconductor device is directly connected to the substrate using solder bumps.FIG. 10 shows a cross-sectional view of a flip-chip mounted die according to an embodiment of the present disclosure.10, according to an embodiment of the present disclosure, a device 1000 includes a die 1002, for example, an integrated circuit (IC) manufactured according to one or more processes described herein or including one or more features described herein. The die 1002 includes metallized pads 1004 thereon. The package substrate 1006 (such as a ceramic or organic substrate) includes a connection portion 1008 thereon. The die 1002 and the package substrate 1006 are electrically connected through a solder ball 1010 coupled to the metalized pad 1004 and the connection portion 1008. The underfill material 1012 surrounds the solder balls 1010.The processing of flip-chips can be similar to conventional IC production, but with a few additional operations. Towards the end of the manufacturing process, the attachment pad is metalized to make it easier to accept solder. This usually consists of several processes. Then deposit small solder dots on each metalized pad. The chips are then cut from the wafer as normal. In order to attach the flip chip to the circuit, the chip is turned upside down to place the solder dots down on the electronic device below or the connector on the circuit board. The solder is then usually remelted using ultrasonic or alternatively a reflow soldering process to create an electrical connection. This also leaves a small space between the circuit of the chip and the mounting below. In most cases, the electrically insulating adhesive is then "underfilled" to provide a stronger mechanical connection, provide a thermal bridge, and to ensure that the solder joints are not stressed due to the heating of the chip and the rest of the system.In other embodiments, according to the embodiments of the present disclosure, newer packaging and die-to-die interconnection methods (such as through-silicon vias (TSV) and silicon interposers) are implemented to produce a combination of High-performance multi-chip modules (MCM) and system-in-package (SiP) made by one or more processes or integrated circuits (ICs) with one or more features described herein.Thus, it is described that the embodiments of the present disclosure include integrated circuit structures having a source structure or a drain structure with low resistivity, and a method of fabricating an integrated circuit having a source structure or drain structure with low resistivity .Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even when only a single embodiment is described with respect to specific features. Unless otherwise stated, the examples of features provided in this disclosure are intended to be illustrative and not restrictive. 
The above description is intended to cover those alternative forms, modifications, and equivalent forms that will be obvious to those skilled in the art and have the beneficial effects of the present disclosure.The scope of the present disclosure includes any feature or combination of features (explicit or implied) disclosed herein, or any generalization thereof, regardless of whether it alleviates any or all of the problems solved herein. Therefore, during the application process of this application (or an application claiming priority), new claims can be made for any such feature combination. In particular, with reference to the appended claims, the features from the dependent claims can be combined with those of the independent claims, and can be combined in any suitable manner rather than only in the specific combinations listed in the appended claims. Features of the corresponding independent claims.The following examples refer to other embodiments. Various features of different embodiments can be combined with some included features and excluded other features in various ways to adapt to multiple different applications.Exemplary embodiment 1: An integrated circuit structure including a fin having a lower fin portion and an upper fin portion. The gate stack is above the upper fin portion of the fin, the gate stack having opposite first and second sides. The first source structure or the drain structure includes an epitaxial structure embedded in the fin on the first side of the gate stack. The second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack. Each epitaxial structure of the first and second source structures or drain structures includes silicon, germanium, and boron, wherein the atomic concentration of boron is in the range of 1E20 atoms/cm3-3E21 atoms/cm3, and the germanium concentration is 10% In the range of 85%, and the first and second source or drain structures have a resistivity less than or equal to 0.3 mOhm·cm.Exemplary embodiment 2: The integrated circuit structure of exemplary embodiment 1, wherein the resistivities of the first and second source or drain structures are in the range of 0.1 mOhm·cm to 0.3 mOhm·cm.Exemplary Embodiment 3: The integrated circuit structure of Exemplary Embodiment 1 or 2, wherein the first and second source or drain structures induce uniaxial compressive strain on the fin.Exemplary embodiment 4: The integrated circuit structure of exemplary embodiments 1, 2, or 3, wherein the first and second source or drain structures are adjacent to the isolation structure.Exemplary Embodiment 5: The integrated circuit structure of Exemplary Embodiment 4, wherein the first and second source or drain structures have a lower surface below the upper surface of the isolation structure.Exemplary Embodiment 6: The integrated circuit structure of Exemplary Embodiments 1, 2, 3, 4, or 5, wherein the lower fin portion includes a portion of the lower bulk single crystal silicon substrate.Exemplary Embodiment 7: The integrated circuit structure of Exemplary Embodiments 1, 2, 3, 4, 5, or 6, further comprising first and second dielectric gates along the first side and the second side of the gate stack, respectively Polar sidewall spacer.Exemplary embodiment 8: The integrated circuit structure of exemplary embodiments 1, 2, 3, 4, 5, 6, or 7, further comprising a first conductive contact on the epitaxial structure of the first source structure or the drain structure And a second conductive 
contact portion on the epitaxial structure of the second source structure or the drain structure.Exemplary Embodiment 9: The integrated circuit structure of Exemplary Embodiment 8, wherein the first conductive contact portion and the second conductive contact portion are respectively in a partial recess in the epitaxial structure of the first and second source structures or drain structures in.Exemplary embodiment 10: An integrated circuit structure including a fin having a lower fin portion and an upper fin portion. The gate stack is above the upper fin portion of the fin, the gate stack having opposite first and second sides. The first source structure or the drain structure includes an epitaxial structure embedded in the fin on the first side of the gate stack, the epitaxial structure having a lower semiconductor layer and a capping semiconductor layer. The second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack, the epitaxial structure having a lower semiconductor layer and a capping semiconductor layer. The lower semiconductor layer of each of the epitaxial structure of the first and second source structures or the drain structure includes silicon, germanium, and boron. The cap semiconductor layer of each of the epitaxial structure of the first and second source structures or the drain structure has a germanium concentration greater than that of the lower semiconductor layer. The first and second source structures or drain structures have a resistivity less than or equal to 0.3 mOhm·cm.Exemplary Embodiment 11: The integrated circuit structure of Exemplary Embodiment 10, wherein the lower semiconductor layer of each of the epitaxial structure of the first and second source structure or the drain structure has a value of 1E20 atoms/cm3-3E21 atoms The concentration of boron atoms in the range of /cm3 and the concentration of germanium in the range of 10% to 85%.Exemplary Embodiment 12: The integrated circuit structure of Exemplary Embodiment 10 or 11, wherein the resistivities of the first and second source structures or drain structures are in the range of 0.1 mOhm·cm to 0.3 mOhm·cm.Exemplary Embodiment 13: The integrated circuit structure of Exemplary Embodiment 10, 11, or 12, wherein the first and second source or drain structures induce uniaxial compressive strain on the fin.Exemplary Embodiment 14: The integrated circuit structure of Exemplary Embodiment 10, 11, 12, or 13, wherein the cap semiconductor layer is substantially composed of germanium.Exemplary Embodiment 15: The integrated circuit structure of Exemplary Embodiment 10, 11, 12, 13, or 14, wherein the lower fin portion includes a portion of the lower bulk single crystal silicon substrate.Exemplary Embodiment 16: The integrated circuit structure of Exemplary Embodiments 10, 11, 12, 13, 14, or 15, further comprising first and second dielectric gates along the first and second sides of the gate stack, respectively Polar sidewall spacer.Exemplary Embodiment 17: The integrated circuit structure of Exemplary Embodiment 10, 11, 12, 13, 14, 15, or 16, further comprising a first conductive layer on the cap semiconductor layer of the first source structure or the drain structure The contact portion and the second conductive contact portion on the cap semiconductor layer of the second source structure or the drain structure.Exemplary Embodiment 18: The integrated circuit structure of Exemplary Embodiment 17, wherein the first conductive contact and the 
second conductive contact are in the capping semiconductor layer of the first and second source structures or drain structures, respectively Partially recessed.Exemplary embodiment 19: An integrated circuit structure including a fin having a lower fin portion and an upper fin portion. The gate stack is above the upper fin portion of the fin, the gate stack having opposite first and second sides. The first source structure or the drain structure includes an epitaxial structure embedded in the fin on the first side of the gate stack, the epitaxial structure having a lower semiconductor layer and a capping semiconductor layer. The second source structure or the drain structure includes an epitaxial structure embedded in the fin on the second side of the gate stack, the epitaxial structure having a lower semiconductor layer and a capping semiconductor layer. The lower semiconductor layer of each of the epitaxial structure of the first and second source structures or the drain structure includes silicon, germanium, and boron. The cap semiconductor layer of each of the epitaxial structure of the first and second source structures or the drain structure has a germanium concentration greater than that of the lower semiconductor layer. The first and second source structures or drain structures have a resistivity less than or equal to 0.3 mOhm·cm. The first conductive contact is on the cap semiconductor layer of the first source structure or the drain structure. The second conductive contact is on the cap semiconductor layer of the second source structure or the drain structure. The first dielectric spacer is along the sidewall of the first conductive contact, and the capping semiconductor layer of the first source structure or the drain structure is confined between the first dielectric spacers. The second dielectric spacer is along the sidewall of the second conductive contact, and the cap semiconductor layer of the second source structure or the drain structure is confined between the second dielectric spacers.Exemplary Embodiment 20: The integrated circuit structure of Exemplary Embodiment 19, further including first and second dielectric gate sidewall spacers along the first and second sides of the gate stack, respectively.Exemplary Embodiment 21: The integrated circuit structure of Exemplary Embodiment 19 or 20, wherein the lower semiconductor layer of each of the epitaxial structure of the first and second source structure or the drain structure has a value of 1E20 atoms/cm3- The concentration of boron atoms in the range of 3E21 atoms/cm3 and the concentration of germanium in the range of 10% to 85%.Exemplary embodiment 22: The integrated circuit structure of exemplary embodiment 19, 20, or 21, wherein the resistivity of the first and second source structures or drain structures is in the range of 0.1 mOhm·cm to 0.3 mOhm·cm .Exemplary Embodiment 23: The integrated circuit structure of Exemplary Embodiment 19, 20, 21, or 22, wherein the first and second source or drain structures induce uniaxial compressive strain on the fin.Exemplary Embodiment 24: The integrated circuit structure of Exemplary Embodiment 19, 20, 21, 22, or 23, wherein the cap semiconductor layer is substantially composed of germanium.Exemplary Embodiment 25: The integrated circuit structure of Exemplary Embodiment 19, 20, 21, 22, 23, or 24, wherein the lower fin portion includes a portion of the lower bulk single crystal silicon substrate. |
Microelectronic components including direct bonding, and related structures and techniques are disclosed herein. For example, in some embodiments, a microelectronic assembly may include a first microelectronic component and a second microelectronic component coupled to the first microelectronic component by a direct bond region, where the direct bond region includes a first sub-region and a second sub-region, and the first sub-region, and the first sub-region has a greater metal density than the second sub-region. In some embodiments, a microelectronic assembly may include a first microelectronic component and a second microelectronic component coupled to the first microelectronic component by a direct bond region, where the direct bond region includes a first metal contact and a second metal contact, the first metal contact having a larger area than the second metal contact, and the second metal contact having a larger area than the second metal contact. And the first metal contact is electrically coupled to a power/ground plane of the first microelectronic component. |
1.A microelectronic assembly comprising:a first microelectronic component; andA second microelectronic component coupled to the first microelectronic component through a direct bond area, wherein the direct bond area includes a first metal contact and a second metal contact, the first metal contact having a larger size than the second metal contact area, and the first metal contact is electrically coupled to the power/ground plane of the first microelectronic component.2.The microelectronic assembly of claim 1, wherein the second metal contact is electrically coupled to a signal path of the first microelectronic component.3.The microelectronic assembly of claim 1, wherein the direct bond region includes a third metal contact having a larger area than the second metal contact, the first metal contact being electrically coupled to the power plane of the first microelectronic component, and the third metal contact is electrically coupled to the ground plane of the first microelectronic component.4.The microelectronic assembly of claim 3, wherein the first metal contact is parallel to the second metal contact.5.The microelectronic assembly of claim 1, wherein the direct bonding region includes a fourth metal contact having a larger area than the second metal contact, the power plane being the first metal contact A first power plane of a microelectronic component, and the fourth metal contact is electrically coupled to a second power plane of the first microelectronic component.6.6. The microelectronic assembly of claim 5, wherein the second power plane is to operate at a different voltage than the voltage at which the first power plane is to operate.7.The microelectronic assembly of claim 1 , wherein the direct bonding region includes a fourth metal contact having a larger area than the second metal contact, and wherein the fourth metal contact is electrically coupled to a power plane of the first microelectronic component.8.The microelectronic assembly of claim 7, wherein the first metal contact is parallel to the second metal contact.9.8. The microelectronic assembly of any of claims 1-8, wherein the first microelectronic component includes an interposer.10.The microelectronic assembly of any of claims 1-8, wherein the first microelectronic component comprises a die.11.The microelectronic assembly of claim 1, wherein the second microelectronic component comprises a die.12.The microelectronic assembly of claim 11, wherein the die of the second microelectronic component is a dummy die.13.The microelectronic assembly of any of claims 1-8, wherein a power/ground plane of the first microelectronic component is in contact with a substrate via.14.The microelectronic assembly of any of claims 1-8, wherein the first metal contact comprises copper.15.8. The microelectronic assembly of any of claims 1-8, wherein the direct bonding region comprises an inorganic dielectric material.16.The microelectronic assembly of claim 1, wherein the microelectronic assembly further comprises a heat sink.17.17. 
The microelectronic assembly of claim 16, wherein the microelectronic assembly further comprises a thermal interface material between the microelectronic component and the heat spreader.18.A system that includes:circuit boards; andA microelectronic assembly communicatively coupled to the circuit board, wherein the microelectronic assembly includes a first microelectronic component coupled to a second microelectronic component by direct bonding, and direct bonding contacts are for the first microelectronic component The power/ground plane of the electronic component or said second microelectronic component.19.19. The system of claim 18, wherein the circuit board is a motherboard.20.19. The system of claim 18 or 19, wherein the system further comprises a wireless communication device communicatively coupled to the circuit board. |
Direct bonding in microelectronic assembliestechnical fieldThe present application relates to direct bonding in microelectronic assemblies.Background techniqueIntegrated circuit (IC) packages typically include dies that are wire bonded or soldered to a package substrate. In use, electrical signals and power are transferred between the package substrate and the die through wire bonds or solder.Description of drawingsEmbodiments will be readily understood from the following detailed description in conjunction with the accompanying drawings. To facilitate this description, the same reference numerals designate the same structural elements. In the figures of the accompanying drawings, embodiments are illustrated by way of example and not by way of limitation.1 is a cross-sectional side view of an example microelectronic assembly including direct bonding, according to various embodiments.2 is an exploded cross-sectional side view of a portion of the microelectronic assembly of FIG. 1 in accordance with various embodiments.3 and 4 are cross-sectional side views of example direct bonding interfaces in accordance with various embodiments.5-8 are top views of example direct bonding interfaces in accordance with various embodiments.9-12 are cross-sectional side views of example direct bonding interfaces in accordance with various embodiments.13 is a cross-sectional side view of an example microelectronic assembly including direct bonding, according to various embodiments.14-17 are cross-sectional side views of example stages in fabrication of a portion of the microelectronic assembly of FIGS. 1 and 2 in accordance with various embodiments.18-20 are cross-sectional side views of example microelectronic assemblies including direct bonding, according to various embodiments.21-22 are top views of example direct bonding interfaces in accordance with various embodiments.23 is a cross-sectional side view of an example microelectronic assembly including a direct bond region having multiple subregions, according to various embodiments.24-25 are top views of example direct bonding interfaces with multiple sub-regions in accordance with various embodiments.26A-26B are cross-sectional side and top views, respectively, of a microelectronic assembly that includes dummy metal traces in and around direct bond regions, according to various embodiments.27-28 are cross-sectional side views of example microelectronic assemblies including direct bonding, according to various embodiments.29A-29B are cross-sectional side and top views, respectively, of an example portion of a microelectronic assembly having a power/ground plane in a direct bond area, according to various embodiments.30-31 are top views of example portions of a microelectronic assembly having power/ground planes in direct bond areas, according to various embodiments.32 is a cross-sectional side view of an example portion of a microelectronic assembly having a cantilevered power/ground plane in a direct bond region, according to various embodiments.33 is a top view of a die and wafer that may be included in a microelectronic component according to any of the embodiments disclosed herein.34 is a cross-sectional side view of an integrated circuit (IC) device that may be included in a microelectronic component according to any of the embodiments disclosed herein.35 is a cross-sectional side view of an IC device assembly that may include a microelectronic assembly according to any of the embodiments disclosed herein.36 is a block diagram of an example 
electrical device that may include a microelectronic assembly according to any of the embodiments disclosed herein.detailed descriptionDisclosed herein are microelectronic assemblies, including direct bonding, and related structures and techniques. For example, in some embodiments, a microelectronic assembly may include a first microelectronic component and a second microelectronic component coupled to the first microelectronic component through a direct bond region, wherein the direct bond region includes a first subregion and a second subregion region, and the first subregion has a greater metal density than the second subregion. In some embodiments, a microelectronic assembly can include a first microelectronic component and a second microelectronic component coupled to the first microelectronic component through a direct bond region, wherein the direct bond region includes a first metal contact and a second metal contact, The first metal contact has a larger area than the second metal contact, and the first metal contact is electrically coupled to the power/ground plane of the first microelectronic component.In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which embodiments are shown, by way of illustration, which may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description should not be taken in a limiting sense.Various operations may be described as multiple discrete acts or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order-dependent. In particular, these operations may be performed out of the order presented. The described operations may be performed in a different order than the described embodiments. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.For the purposes of this disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of this disclosure, the phrases "A, B and/or C" and "A, B or C" mean (A), (B), (C), (A and B), (A and C) , (B and C) or (A, B and C). The drawings are not necessarily to scale. Although many of the drawings illustrate rectilinear structures with flat walls and right-angled corners, this is for illustration purposes only, and actual devices fabricated using these techniques will exhibit rounded corners, surface roughness, and other features.The description uses the phrases "in one embodiment" or "in an embodiment," each of which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used in relation to embodiments of the present disclosure, are synonymous. When used to describe a range of sizes, the phrase "between X and Y" means a range that includes both X and Y. The terms "top," "bottom," etc. may be used herein to explain various features of the drawings, but these terms are used only to facilitate discussion and do not imply a desired or required orientation. Although certain elements may be referred to herein in the singular, such elements may include multiple sub-elements. 
For example, "dielectric material" may include one or more dielectric materials. As used herein, "conductive contact" can refer to a portion of a conductive material (eg, metal) that serves as an electrical interface between different components; the conductive contact can be recessed in, flush with, or Extends away from the surface of the component and may take any suitable form (eg, conductive pads or sockets, or portions of conductive lines or vias).1 is a cross-sectional side view of a microelectronic assembly 100 in accordance with various embodiments. A number of elements are illustrated in FIG. 1 as being included in microelectronic assembly 100 , although many of these elements may not be present in microelectronic assembly 100 . For example, in various embodiments, heat transfer structure 152, thermal interface material (TIM) 154, molding material 126, microelectronic component 102-2, underfill material 138, and/or support component 182 may not be included. Further, FIG. 1 illustrates a number of elements that are omitted from subsequent figures for ease of illustration, but may be included in any of the microelectronic assemblies 100 disclosed herein. Examples of such elements include heat transfer structure 152 , TIM 154 , molding material 126 , microelectronic component 102 - 2 , underfill material 138 , and/or support component 182 . Many of the elements of the microelectronic assembly 100 of FIG. 1 are included in other figures in the drawings; discussion of these elements is not repeated in discussing these figures, and any of these elements may take the form disclosed herein in any form. In some embodiments, individual microelectronic assemblies of the microelectronic assemblies 100 disclosed herein may be used as a system-in-package (SiP), including multiple microelectronic components 102 with different functions therein. In such an embodiment, the microelectronic assembly 100 may be referred to as a SiP.Microelectronic assembly 100 may include interposer 150 coupled to microelectronic component 102-1 through direct bond (DB) regions 130-1. In particular, as illustrated in FIG. 2, DB region 130-1 may include DB interface 180-1A at the top surface of interposer 150, wherein DB interface 180-1A includes a set of conductive DB contacts 110 and at the DB interface DB dielectric 108 around DB contact 110 of 180-1A. DB region 130-1 may also include a DB interface 180-1B at the bottom surface of microelectronic component 102-1, wherein DB interface 180-1B includes a set of DB contacts 110 and around DB contacts 110 of DB interface 180-1B the DB dielectric 108. The DB contacts 110 of the DB interface 180-1A of the interposer 150 may be aligned with the DB contacts 110 of the DB interface 180-1B of the microelectronic component 102-1, such that in the microelectronic assembly 100, the DB of the microelectronic component 102-1 Contact 110 is in contact with DB contact 110 of interposer 150 . In the microelectronic assembly 100 of FIG. 1, the DB interface 180-1A of the interposer 150 can be joined (eg, electrically and mechanically) with the DB interface 180-1B of the microelectronic component 102-1 to form the coupling interposer 150 and the microelectronic component 102-1. DB area 130-1 of electronic component 102-1, as discussed further below. 
More generally, the DB regions 130 disclosed herein may include two complementary DB interfaces 180 joined together; for ease of illustration, many of the following figures may omit the identification of the DB interfaces 180 to improve the clarity of the figures .As used herein, the term "direct bonding" is used to include metal-to-metal bonding techniques (eg, copper-to-copper bonding, or other techniques in which the DB contacts 110 of the opposing DB interface 180 are first brought into contact and then subjected to heat and compression ) and hybrid bonding techniques (eg, techniques in which the DB dielectric 108 of the opposing DB interface 180 is first brought into contact and then heated and sometimes compressed, or the DB contact 110 and the DB dielectric 108 of the opposing DB interface 180 are brought into contact substantially simultaneously and then technology that heats and compresses it). In such a technique, DB contact 110 and DB dielectric 108 at one DB interface 180 are brought into contact with DB contact 110 and DB dielectric 108, respectively, at another DB interface 180, and elevated pressure and/or temperature may be applied to The contacting DB contacts 110 and/or the contacting DB dielectrics 108 are bonded. In some embodiments, this bonding can be achieved without the use of an intermediate solder or anisotropic conductive material, while in some other embodiments, a thin solder cap can be used in the DB interconnect to accommodate planarity, and This solder may become an intermetallic compound (IMC) in the DB region 130 during processing. DB interconnects may be able to reliably conduct higher currents than other types of interconnects; for example, some conventional solder interconnects may form a large number of fragile IMCs when current flows, and may limit the supply of maximum current to mitigate mechanical failure.DB dielectric 108 may include one or more dielectric materials, such as one or more inorganic dielectric materials. For example, DB dielectric 108 may include silicon and nitrogen (eg, in the form of silicon nitride); silicon and oxygen (eg, in the form of silicon oxide); silicon, carbon, and nitrogen (eg, in the form of silicon nitride carbon) ; carbon and oxygen (for example, in the form of carbon-doped oxides); silicon, oxygen, and nitrogen (for example, in the form of silicon oxynitride); aluminum and oxygen (for example, in the form of aluminum oxide); titanium and oxygen ( For example, in the form of titanium oxide); hafnium and oxygen (for example, in the form of hafnium oxide); silicon, oxygen, carbon and hydrogen (for example, in the form of tetraethyl orthosilicate (TEOS)); zirconium and oxygen ( For example, in the form of zirconia); niobium and oxygen (for example, in the form of niobium oxide); tantalum and oxygen (for example, in the form of tantalum oxide); and combinations thereof. Some specific embodiments of arrangements of DB dielectrics 108 including various dielectric materials are discussed below with reference to FIG. 4 .DB contacts 110 may include posts, pads, or other structures. DB contacts 110, although depicted in the same manner at both DB interfaces 180 of DB area 130 in the figures, may have the same structure at both DB interfaces 180, or DB contacts at different DB interfaces 180 110 can have different structures. 
For example, in some embodiments, the DB contacts 110 in one DB interface 180 may include metal pillars (eg, copper pillars), and the complementary DB contacts 110 in the complementary DB interface 180 may include metal pads recessed in the dielectric (eg copper pads). DB contacts 110 may include any one or more conductive materials, such as copper, manganese, titanium, gold, silver, palladium, nickel, copper and aluminum (eg, in the form of copper aluminum alloys), tantalum (eg, tantalum metal, or tantalum nitride and tantalum and nitrogen), cobalt, cobalt, and iron (eg, as a cobalt-iron alloy), or any alloy of any of the foregoing (eg, copper, manganese, and nickel as a manganese-nickel-copper alloy). Some specific arrangements of various materials in DB contact 110 are discussed below with reference to FIG. 3 . In some embodiments, DB dielectric 108 and DB contacts 110 of DB interface 180 may be fabricated using low temperature deposition techniques (eg, techniques in which deposition occurs at temperatures below 250 degrees Celsius or below 200 degrees Celsius), such as low temperature plasma enhanced Chemical Vapor Deposition (PECVD).1 and 2 also illustrate microelectronic component 102-2 coupled to interposer 150 through DB region 130-2 (via DB interfaces 180-2A and 180-2B, shown in FIG. 2). Although FIG. 1 depicts a particular number of microelectronic components 102 coupled to interposer 150 through DB region 130, this number and arrangement is merely illustrative, and microelectronic assembly 100 may include microelectronic components 102 coupled to interposer 150 through DB region 130 Any desired number and arrangement of microelectronic components 102 . Although a single reference number "108" is used to refer to the DB dielectrics of multiple different DB interfaces 180 (and different DB regions 130), this is for ease of illustration only, and the DB dielectrics 108 of different DB interfaces 180 (even within a single DB region 130 ) may have different materials and/or structures (eg, according to any of the embodiments discussed below with reference to FIG. 3 ). Similarly, although a single reference number "110" is used to refer to the DB contacts of multiple different DB interfaces 180 (and different DB areas 130), this is for ease of illustration only, and the DB contacts 110 of different DB interfaces 180 (even in Within a single DB region 130 ) may have different materials and/or structures (eg, according to any of the embodiments discussed below with reference to FIG. 4 ).Interposer 150 may include insulating material 106 (eg, one or more dielectric materials formed in multiple layers, as known in the art) and one or more conductive paths 112 (eg, formed in multiple layers) through insulating material 106 . , including lines 114 and/or vias 116, as shown). In some embodiments, insulating material 106 of interposer 150 may be an organic material, such as polyimide or polybenzoxazole, or may include an organic polymer matrix (eg, , epoxide). In some such embodiments, interposer 150 may be referred to as an "organic interposer." In some embodiments, the insulating material 106 of the interposer 150 may be provided in multiple layers of the organic build-up film. 
Organic interposer 150 may be less expensive to manufacture than semiconductor or glass-based interposers, and may have electrical performance advantages due to the low dielectric constant of organic insulating material 106 and the use of thicker wires (allowing for improved power delivery, signaling and potential thermal benefits). The organic interposer 150 may also have a larger footprint than can be achieved with semiconductor-based interposers, which is limited by the size of the reticle used for patterning. Further, the organic interposer 150 may be subject to fewer restrictive design rules than those constraining semiconductor- or glass-based interposers, allowing the use of design features such as non-Manhattan wiring (eg, not limited to the use of a layer of for horizontal interconnects and another layer for vertical interconnects) and avoid through-substrate vias (TSVs) such as through-silicon vias or through-glass vias (which may be limited in desired power delivery and signaling performance). Conventional integrated circuit packaging including organic interposers has been limited to solder-based attachment techniques, which may have lower limits on achievable pitches that preclude the use of conventional solder-based interconnects to achieve the fine pitches required for next-generation devices . The use of organic interposer 150 in microelectronic assembly 100 with direct bonding, as disclosed herein, can take advantage of these advantages of organic interposers, which can be achieved by direct bonding (and previously only when semiconductor-based interposers were used) achievable) ultrafine pitches (eg, pitch 128 discussed below), and thus can support the design and fabrication of large and complex die assemblies that can enable packaging systems that cannot be enabled by conventional methods Competitive performance and capability.In other embodiments, the insulating material 106 of the interposer 150 may include a flame retardant class 4 material (FR-4), a bismaleimide triazine (BT) resin, or a low-k or ultra-low-k dielectric (eg, carbon-doped dielectrics, fluorine-doped dielectrics, and porous dielectrics). When the interposer 150 is formed using standard printed circuit board (PCB) processes, the insulating material 106 may comprise FR-4, and the conductive paths 112 in the interposer 150 may be formed of patterned copper sheets separated by build-up layers of FR-4 form. In some such embodiments, the interposer 150 may be referred to as a "package substrate" or "circuit board."In some embodiments, one or more of the conductive paths 112 in the interposer 150 may be a conductive contact (eg, one of the DB contacts 110 ) at the top surface of the interposer 150 and a conductive contact at the bottom surface of the interposer 150 . Conductive contacts 118 extend between. In some embodiments, one or more of the conductive paths 112 in the interposer 150 may be between different conductive contacts at the top surface of the interposer 150 (eg, between different DB contacts that may be located in different DB regions 130 ) 110, as discussed further below). In some embodiments, one or more of the conductive paths 112 in the interposer 150 may extend between different conductive contacts 118 at the bottom surface of the interposer 150 .In some embodiments, interposer 150 may include only conductive paths 112 and may not include active or passive circuitry. 
In other embodiments, the interposer 150 may include active or passive circuits (eg, transistors, diodes, resistors, inductors, and capacitors, among others). In some embodiments, the interposer 150 may include one or more device layers including transistors.Although FIGS. 1 and 2 (and others in the figures) illustrate a specific number and arrangement of conductive paths 112 in interposer 150, these are merely illustrative and any suitable number and arrangement may be used. The conductive paths 112 disclosed herein (eg, including lines 114 and/or vias 116 ) may be formed of any suitable conductive material, such as, for example, copper, silver, nickel, gold, aluminum, other metals or alloys, or combinations of materials. Examples of some specific arrangements of gasket material 132 that may be part of conductive path 112 are discussed below with reference to FIGS. 9-10 .In some embodiments, the microelectronic component 102 may include an IC die (packaged or unpackaged) or a stack of IC dies (eg, a high bandwidth memory die stack). In some such embodiments, the insulating material of the microelectronic component 102 may include silicon dioxide, silicon nitride, oxynitride, polyimide materials, glass reinforced epoxy matrix materials, or low-k or ultra-low-k Dielectrics (eg, carbon-doped dielectrics, fluorine-doped dielectrics, porous dielectrics, organic polymer dielectrics, photoimageable dielectrics, and/or benzocyclobutene-based polymers). In some additional embodiments, the insulating material of the microelectronic component 102 may include a semiconductor material, such as silicon, germanium, or a III-V material (eg, gallium nitride), as well as one or more additional materials. For example, the insulating material of the microelectronic component 102 may include silicon oxide or silicon nitride. The conductive paths in the microelectronic component 102 may include conductive lines and/or conductive vias, and may connect any conductive contacts in the microelectronic component 102 in any suitable manner (eg, connecting on the same surface of the microelectronic component 102 or on a different multiple conductive contacts on the surface). Example structures that may be included in the microelectronic components 102 disclosed herein are discussed below with reference to FIG. 34 . In particular, the microelectronic components 102 may include active and/or passive circuits (eg, transistors, diodes, resistors, inductors, and capacitors, among others). In some embodiments, the microelectronic component 102 may include one or more device layers including transistors. When microelectronic component 102 includes active circuitry, power and/or ground signals may be routed through interposer 150 and through DB region 130 (and further through intermediate microelectronic component 102 ) to/from microelectronic component 102 routing. In some embodiments, the microelectronic components 102 may take the form of any of the embodiments of the interposer 150 herein. Although the microelectronic components 102 of the microelectronic assembly 100 of FIG. 1 are single-sided assemblies (in the sense that the individual microelectronic components 102 have conductive contacts (eg, DB contacts 110 ) on only a single surface of the individual microelectronic components 102 ), In some embodiments, however, the microelectronic component 102 may be a double-sided (or "multi-level" or "omnidirectional") component with conductive contacts on multiple surfaces of the component. 
Some specific examples of double-sided microelectronic components 102 are discussed below with reference to FIG. 28 .Additional components (not shown), such as surface mount resistors, capacitors, and/or inductors, may be disposed on the top or bottom surface of interposer 150 , or embedded in interposer 150 . The microelectronic assembly 100 of FIG. 1 also includes a support member 182 coupled to the interposer 150 . In the particular embodiment of FIG. 1 , support member 182 includes conductive contacts 118 that are electrically coupled to complementary conductive contacts 118 of interposer 150 through intervening solder 120 (eg, solder balls in a ball grid array (BGA) arrangement), but Any suitable interconnect structure (eg, pins in a pin grid array arrangement, lands, posts, pads, and posts in a land grid array arrangement, etc.) may be used. The solder 120 used in the microelectronic assemblies 100 disclosed herein may comprise any suitable material, such as lead/tin, tin/bismuth, eutectic tin/silver, ternary tin/silver/copper, eutectic tin/copper, tin /nickel/copper, tin/bismuth/copper, tin/indium/copper, tin/zinc/indium/bismuth or other alloys. In some embodiments, the coupling between the interposer 150 and the support member 182 may be referred to as a second level interconnect (SLI) or a multilevel interconnect (MLI).In some embodiments, support member 182 may be a package substrate (eg, may be fabricated using a PCB process, as discussed above). In some embodiments, support member 182 may be a circuit board (eg, a motherboard), and may have other components (not shown) attached thereto. The support member 182 may include conductive paths and other conductive contacts (not shown) for routing power, ground, and signals through the support member 182 as is known in the art. In some embodiments, support member 182 may include another IC package, an interposer, or any other suitable component. The underfill material 138 may be disposed around the solder 120 coupling the interposer 150 to the support features 182 . In some embodiments, the underfill material 138 may comprise an epoxy material.In some embodiments, support components 182 may be lower density components, while interposer 150 and/or microelectronic components 102 may be higher density components. As used herein, the terms "lower density" and "higher density" are intended to indicate that the conductive paths (eg, including conductive lines and vias) in lower density components are larger and/or larger than in higher density components or relative terms with larger spacing. In some embodiments, microelectronic components 102 may be higher density components and interposer 150 may be a lower density component. In some embodiments, higher density components may be fabricated using dual damascene or single damascene processes (eg, when the higher density components are dies), while semi-additive or modified semi-additive processes may be used ( have small vertical interconnect features formed by advanced laser or photolithographic processes) to fabricate lower density components (for example, when the lower density components are package substrates or interposers). 
In some other embodiments, semi-additive or modified semi-additive processes may be used to fabricate higher density components (eg, when the higher density components are package substrates or interposers), while semi-additive processes may be used Or subtractive processes (using etch chemistry to remove unwanted metal areas, with rough vertical interconnect features formed by standard laser processes) to fabricate lower density parts (eg, when the lower density parts are PCBs).The microelectronic assembly 100 of FIG. 1 may also include a molding material 126 . The molding material 126 may extend around one or more of the microelectronic components 102 on the interposer 150 . In some embodiments, the molding material 126 may extend between the plurality of microelectronic components 102 on the interposer 150 and around the DB region 130 . In some embodiments, molding material 126 may extend over one or more of microelectronic components 102 on interposer 150 (not shown). The molding material 126 may be an insulating material, such as a suitable epoxy material. The molding material 126 can be selected to have a coefficient of thermal expansion (CTE) that can alleviate or minimize stress between the microelectronic component 102 and the interposer 150 due to uneven thermal expansion in the microelectronic assembly 100 . In some embodiments, the CTE of the molding material 126 may have a value intermediate the CTE of the interposer 150 (eg, the CTE of the insulating material 106 of the interposer 150 ) and the CTE of the microelectronic component 102 . In some embodiments, the molding material 126 used in the microelectronic assembly 100 may be selected, at least in part, for its thermal properties. For example, one or more molding materials 126 used in microelectronic assembly 100 may have low thermal conductivity (eg, conventional manufacturing compounds) to delay heat transfer, or may have high thermal conductivity (eg, including molding materials of metallic or ceramic particles, such as copper, silver, diamond, silicon carbide, aluminum nitride, and boron nitride, among others, to facilitate heat transfer. Any molding material 126 referred to herein may include one or more different materials having different material compositions.The microelectronic assembly 100 of FIG. 1 may also include a TIM 154 . The TIM 154 may include a thermally conductive material (eg, metal particles) in a polymer or other binder. The TIM 154 may be a thermal interface material paste or thermally conductive epoxy (which may be fluid when applied and harden when cured, as known in the art). TIM 154 may provide a path for heat generated by microelectronic component 102 to easily flow to heat transfer structure 152 where it may be propagated and/or dissipated. Some embodiments of microelectronic assembly 100 of FIG. 1 may include sputter metallization (not shown) across molding material 126 and the top surface of microelectronic component 102; TIM 154 (eg, a solder TIM) may be disposed on the on metallization.The microelectronic assembly 100 of FIG. 1 may also include a heat transfer structure 152 . The heat transfer structures 152 can be used to remove heat from one or more of the microelectronic components 102 (eg, so that the heat can be more easily dissipated). Heat transfer structure 152 may comprise any suitable thermally conductive material (eg, metal, suitable ceramic, etc.) and may comprise any suitable feature (eg, heat sink, heat sink including fins, cold plate, etc.). 
In some embodiments, the heat transfer structure 152 may be or may include an integrated heat sink (IHS).The elements of microelectronic assembly 100 may have any suitable dimensions. Only a subset of the figures are labeled with reference numerals indicating dimensions, but this is for clarity of illustration only, and any microelectronic assembly 100 disclosed herein may have components with the dimensions discussed herein. In some embodiments, the thickness 184 of the interposer 150 may be between 20 microns and 200 microns. In some embodiments, the thickness 188 of the DB region 130 may be between 50 nanometers and 5 micrometers. In some embodiments, the thickness 190 of the microelectronic component 102 may be between 5 microns and 800 microns. In some embodiments, the spacing 128 of the DB contacts 110 in the DB region 130 may be less than 20 microns (eg, between 0.1 and 20 microns).3-32 illustrate additional example microelectronic assemblies 100 and components thereof. Any of the features discussed herein with reference to any of FIGS. 3-32 may be combined with any other feature to form the microelectronic assembly 100 or components thereof. For example, as discussed further below, FIG. 4 illustrates an embodiment of a DB interface 180 wherein the DB contact 110 includes a plurality of distinct material portions, and FIG. 9 illustrates an embodiment of the DB interface 180 wherein the backing material 132 Present between the DB contact 110 and the adjacent DB dielectric 108 . These features of FIGS. 4 and 9 can be combined such that a DB interface 180 in accordance with the present disclosure has a DB contact 110 with a number of different material portions, and a pad between the DB contact 110 and the adjacent DB dielectric 108 Material 132. This particular combination is just an example and any combination can be used.As noted above, the DB dielectric 108 may include one or more materials arranged in any desired manner. For example, FIG. 3 illustrates DB interface 180 (which may be part of interposer 150 or microelectronic component 102 ) that includes DB dielectric 108 surrounding DB contact 110 . In the particular embodiment of FIG. 3 , the DB dielectric 108 may include a first portion 108A and a second portion 108B, wherein the second portion 108B is between the first portion 108A and the bonding surface of the DB interface 180 . The first portion 108A and the second portion 108B may have different material compositions. For example, in some embodiments, the first portion 108A may include silicon and oxygen (eg, in the form of silicon oxide), and the second portion 108B may include silicon, oxygen, carbon, and nitrogen (eg, in the form of silicon oxycarbonitride) ). The thickness 190A of the first portion 108A may be greater than the thickness 190B of the second portion 108B. For example, in some embodiments, thickness 190B may be less than 5 nanometers (eg, less than 3 nanometers), while thickness 190A may be greater than 5 nanometers (eg, between 50 nanometers and 5 microns). When thickness 190A is greater than thickness 190B, first portion 108A may be referred to as the "bulk" material of DB dielectric 108 and second portion 108B may be referred to as the "interface" material of DB dielectric 108 . Although FIG. 
3 illustrates an embodiment in which the DB dielectric 108 includes two parts, the DB dielectric 108 may include more than two parts (eg, arranged in layers parallel to the bonding surfaces of the DB interface 180 ).As also noted above, the DB contact 110 may comprise one or more materials arranged in any desired manner. For example, FIG. 4 illustrates DB interface 180 (which may be part of interposer 150 or microelectronic component 102 ) that includes DB dielectric 108 surrounding DB contact 110 . In the particular embodiment of FIG. 4 , the DB contact 110 may include a first portion 110A and a second portion 110B, wherein the second portion 110B is between the first portion 110A and the engagement surface of the DB interface 180 . The first portion 110A and the second portion 110B may have different material compositions. For example, in some embodiments, the first portion 110A may include copper, and the second portion 110B may include a noble metal (eg, silver or gold); in such embodiments, the second portion 110B may be used to increase the strength of the DB contact 110 Corrosion resistance. The thickness 192A of the first portion 110A may be greater than the thickness 192B of the second portion 110B. For example, in some embodiments, thickness 192B may be less than 5 nanometers, while thickness 192A may be greater than 50 nanometers. When thickness 192A is greater than thickness 192B, first portion 110A may be referred to as the "bulk" material of DB contact 110 and second portion 110B may be referred to as the "interface" material of DB contact 110 . Although FIG. 4 illustrates an embodiment in which DB contact 110 includes two parts, DB contact 110 may include more than two parts (eg, arranged in layers parallel to the bonding surface of DB interface 180 ). In some embodiments, the DB interface 180 may include a DB dielectric 108 having multiple sections and a DB contact 110 having multiple sections.The footprint of DB contacts 110 in DB interface 180 may have any desired shape, and a plurality of DB contacts 110 may be arranged within DB interface 180 in any desired manner (eg, by using photolithographic patterning techniques to form DB contacts 110). For example, FIGS. 5-8 are top views of various arrangements of DB contacts 110 in DB dielectric 108 of DB interface 180 . In the embodiment of FIG. 5, the DB contacts 110 have a rectangular (eg, square) footprint and are arranged in a rectangular array. In the embodiment of FIG. 6, the DB contacts 110 have a cross-shaped footprint and are arranged in a triangular array. In the embodiment of Figure 7, the DB contacts 110 are arranged in a rectangular array, and alternating rows of DB contacts 110 have cross-shaped footprints and triangular footprints. In the embodiment of FIG. 8, the DB contacts 110 are arranged in a rectangular array, the DB contacts 110 have circular footprints, and the diameters of the footprints of the DB contacts 110 vary in a checkerboard pattern. The DB contacts 110 included in the DB interface 180 may have any suitable combination of these and other footprint shapes, sizes, and arrangements (eg, hexagonal arrays, elliptical footprints, etc.). In some particular embodiments, the DB contacts 110 in the DB interface 180 may have footprints shaped as convex polygons (eg, squares, rectangles, octagons, crosses, etc.) or circles.As noted above, in some embodiments, a liner material may be present between the DB contact 110 and the adjacent DB dielectric 108 . For example, FIG. 
9 illustrates a portion of the interposer 150 and its DB interface 180 . In the embodiment of FIG. 9 , there is a liner material 132 between the DB contact 110 and the adjacent DB dielectric 108 . The liner material 132 may act as a diffusion barrier (eg, to limit diffusion between the DB contact 110 and the adjacent DB dielectric 108 , such as copper diffusion that may occur when the DB contact 110 includes copper and the DB dielectric 108 includes silicon oxide ) and/or as an adhesion promoter (eg, to increase the strength of the mechanical interface between the DB contact 110 and the adjacent DB dielectric 108). In the particular embodiment of FIG. 9 , liner material 132 may not be present around vias 116 and/or lines 114 through insulating material 106 of interposer 150 . In other embodiments, liner material 132 may also be present around vias 116 and/or lines 114 ; such an embodiment is illustrated in FIG. 10 . In some embodiments, liner material 132 may only be present around vias 116 and/or lines 114 and not around DB contacts 110 (not shown). In the embodiment of FIG. 9, the liner material 132 may be a conductive material (eg, may include cobalt, ruthenium, or tantalum and nitrogen (eg, in the form of tantalum nitride)), or a non-conductive material (eg, silicon and nitrogen) (for example, in the form of silicon nitride), or diamond-like carbon). In the embodiment of FIG. 10, the liner material 132 may be a non-conductive material. In still other embodiments, the liner material 132 may not be present in the interposer 150 . Although various embodiments using gasket materials 132 are depicted in FIGS. 9 and 10 and discussed with respect to their presence in interposer 150, this is for illustration only, DB interface 180 of microelectronic component 102 Pad material 132 may also be included (eg, only around DB contacts 110, and/or around lines and vias in the metallization stack of microelectronic component 102).In some embodiments, photolithographic via techniques may be used to form one or more layers of metallization in interposer 150 (eg, in organic interposer 150 ) or in microelectronic component 102 . For example, FIG. 11 illustrates a portion of the interposer 150 and its DB interface 180 . In the embodiment of Figure 11, three different layers of insulating material 106 (labeled 106A, 106B, and 106C) are shown. Within the "top" layer 106A (the layer closest to the DB interface 180 ), the vias 116 may be patterned using photolithographic techniques (eg, "zero misalignment" techniques) such that their sides land on them with their sides The sides of line 114 are aligned. In a "lower" layer (eg, layer 106B), the vias 116 may be patterned using conventional techniques and the sides of the vias 116 may not be aligned with the sides of the lines 114 on which they land. More generally, the photolithographically formed vias 116 may have any desired footprint (eg, a non-circular footprint). In the embodiment of FIG. 11, DB contacts 110 may be "pads" that make conductive contact with vias 116 of layer 106A. The use of lithographic via techniques in the formation of the DB interface 180 can result in an extremely flat DB interface 180 due to planarization (eg, chemical mechanical polishing) operations performed during lithographic via fabrication, and the flat DB interface 180 can A direct bond is more reliably formed than the more "non-uniform" DB interface 180 . 
Therefore, the DB contacts 110 of the DB interface 180 are formed using photolithographic via techniques to support mechanically and electrically reliable DB regions 130 .In some embodiments, photolithographic via techniques may be used to form DB contacts 110 in interposer 150 (eg, in organic interposer 150 ) or in DB interface 180 of microelectronic component 102 . For example, FIG. 12 illustrates a portion of the interposer 150 and its DB interface 180 . In the embodiment of FIG. 12, DB contacts 110 include vias 116 and lines 114 on which vias 116 land; these vias 116 may be patterned using photolithographic techniques (eg, such that the sides of vias 116 are aligned with their the sides of the line 114 on which it landed are aligned). As shown, DB dielectric 108 may contact vias 116 and lines 114 of DB contact 110 . The metallization in insulating material 106 may be patterned using photolithographic techniques or conventional techniques. Although various embodiments of vias 116/lines 114 are depicted in FIGS. 11 and 12 and discussed with respect to their presence in interposer 150, this is for illustration only and the DB of microelectronic component 102 Interface 180 may also include lithographically patterned vias 116/lines 114 in DB interface 180 and/or other metallizations.In the embodiment of FIGS. 1 and 2 , DB contacts 110 are shown as pads that make contact with vias 116 in the underlying insulating material 106 . In other embodiments, the DB contact 110 may itself be a via. For example, Figure 13 illustrates an embodiment in which DB contacts 110 are vias that contact pads in insulating material 106; as shown, DB contacts 110 may be narrower than the pads they contact.The microelectronic assembly 100 of FIGS. 1 and 2, as well as other microelectronic assemblies 100 disclosed herein, may be fabricated in any suitable manner. For example, FIGS. 14-17 are cross-sectional side views of example stages in the manufacture of a portion of the microelectronic assembly 100 of FIGS. 1 and 2 in accordance with various embodiments. Although the operations discussed with reference to FIGS. 14-17 may be described with reference to specific embodiments of the microelectronic assembly 100 disclosed herein, the fabrication methods discussed with reference to FIGS. 14-17 may be used to form any suitable microelectronic assembly 100 . The operations are illustrated once in each of FIGS. 14-17 and in a particular order, but the operations may be reordered and/or repeated as desired (eg, different operations are performed in parallel when multiple microelectronic assemblies 100 are fabricated simultaneously). The fabrication processes discussed below with reference to FIGS. 14-17 may be particularly advantageous when interposer 150 is an organic interposer, and for glass-based or semiconductor-based interposers (eg, glass-based or silicon-based interposers where any direct It may also be advantageous that the underlying glass or silicon wafer has been thinned and TSVs formed prior to the bonding operation. However, any of the microelectronic assemblies 100 disclosed herein may be fabricated using any suitable fabrication process.FIG. 14 illustrates an assembly including an interposer 150 mounted on a carrier 104 . The interposer 150 includes two exposed DB interfaces 180-1 and 180-2. The carrier 104 may comprise any suitable material and, in some embodiments, may comprise a semiconductor wafer (eg, a silicon wafer) or glass (eg, a glass panel). 
When interposer 150 is an organic interposer, interposer 150 may advantageously be fabricated on carrier 104, which may provide a mechanically stable surface on which layers of interposer 150 may be formed.15 illustrates the assembly after direct bonding of microelectronic components 102-1 and 102-2 to interposer 150/carrier 104 of FIG. In particular, the DB interface 180 (not labeled) of the microelectronic component 102 can be brought into contact with the DB interface 180 of the interposer 150, and heat and/or pressure can be applied to bond the contacting DB interface 180 to form the DB region 130 (where the DB Areas 130-1 and 130-2 correspond to DB interfaces 180-1 and 180-2, respectively).FIG. 16 illustrates the assembly after the molding material 126 is provided around the microelectronic components 102 of the assembly of FIG. 15 and on the surface of the interposer 150 . In some embodiments, molding material 126 may extend over and remain over microelectronic component 102, while in other embodiments, molding material 126 may be polished back to expose the top surface of microelectronic component 102, as shown.FIG. 17 illustrates the assembly after removing the carrier 104 from the assembly of FIG. 16 and providing solder 120 on the newly exposed conductive contacts 118 . The assembly of Figure 17 may itself be a microelectronic assembly 100, as shown. Further fabrication operations may be performed on the microelectronic assembly 100 of FIG. 17 to form other microelectronic assemblies 100; for example, solder 120 may be used to couple the microelectronic assembly 100 of FIG. A TIM 154 and a heat transfer structure 152 are provided on the top surface of the microelectronic assembly 100 , thereby forming the microelectronic assembly 100 of FIGS. 1 and 2 .Different DB regions 130 in microelectronic assembly 100 may include different DB dielectrics 108 . For example, Figure 18 illustrates microelectronic assembly 100 in which DB region 130-1 includes DB dielectric 108-1 and DB region 130-2 includes a different DB dielectric 108-2. DB dielectrics 108-1 and 108-2 may differ in their material composition and/or their structure. In some embodiments, the DB dielectrics 108 in different DB regions 130 may be selected to have different thermal conductivities in order to facilitate and/or limit heat transfer between the interposer 150 and the microelectronic component 102 . For example, DB dielectric 108-1 may have a higher thermal conductivity than DB dielectric 108-2, resulting in greater heat transfer between microelectronic component 102-1 and interposer 150 than between microelectronic component 102-2 and interposer 150 heat transfer between. In some such embodiments, DB dielectric 108-1 may include silicon and nitrogen (eg, in the form of silicon nitride) and DB dielectric 108-2 may include silicon and oxygen (eg, in the form of silicon oxide); nitrogen Silicon oxide can have a higher thermal conductivity than silicon oxide, and thus using silicon nitride as the DB dielectric 108-1 can enhance local heat transfer from the microelectronic component 102-1 to the interposer 150, while using silicon oxide as the DB Dielectric 108-2 may mitigate thermal crosstalk between microelectronic component 102-1 and microelectronic component 102-2 through interposer 150.In some embodiments, the density of DB contacts 110 (ie, the proportion of the area of the bonding surface of DB interface 180 occupied by DB contacts 110 ) may vary between different DB regions 130 . 
In some embodiments, this different density may be due to one DB region 130 requiring fewer electrical paths than another DB region 130 . In other embodiments, such different densities may be used to enhance or inhibit heat transfer, with a higher density of DB contacts 110 (and thus a higher proportion of thermally conductive metal) for enhanced heat transfer, and a lower density of DB Contact 110 (and thus the lower portion of the thermally conductive metal) serves to inhibit heat transfer. For example, FIG. 19 illustrates an embodiment in which the density of DB contacts 110 in DB region 130-1 is greater than the density of DB contacts 110 in DB region 130-2 to enhance the relationship between microelectronic component 102-1 and interposer 150 and reduce heat transfer between the microelectronic component 102 - 2 and the interposer 150 . 19 illustrates different densities of DB contacts 110 with the use of different DB dielectrics 108 in different DB regions 130, but in some embodiments two DB regions 130 may have different densities of DB contacts 110 while having the same Material composition of DB dielectric 108.Figure 20 illustrates another embodiment of a microelectronic assembly 100 in which, as with the embodiment of Figure 19, the density of DB contacts 110 in DB region 130-1 is greater than the density of DB contacts 110 in DB region 130-2 , to enhance heat transfer between microelectronic component 102 - 1 and interposer 150 , and to reduce heat transfer between microelectronic component 102 - 2 and interposer 150 . In the embodiment of FIG. 19 , the dimensions (eg, footprint area) of DB contacts 110 of DB region 130-1 may be the same as the dimensions of DB contacts 110 of DB region 130-2; DB region 130-1 may simply be More DB contacts 110 are included than DB area 130-2. In the embodiment of FIG. 20 , the size (eg, the area of the footprint) of the DB contact 110 of the DB region 130-1 may be larger than the size of the DB contact 110 of the DB region 130-2; the DB contact 110 of the DB region 130-1 may be equal to, greater than, or less than the number of DB contacts 110 of DB region 130-2. For example, FIG. 21 is a top view of DB interface 180 that may correspond to DB region 130-1 of microelectronic assembly 100 of FIG. 20, and FIG. 22 is a DB that may correspond to DB region 130-2 of microelectronic assembly 100 of FIG. 20 Top view of interface 180 . In the embodiment of FIG. 21, the DB contacts 110 may have a large rectangular footprint and may be closely spaced relative to the DB contacts 110 of FIG. 22, which may have a smaller circular footprint and may be sparse diversified. Any other suitable combination of size, shape, and distribution of DB contacts 110 may be used in the different DB regions 130 of the microelectronic assembly 100 (eg, to achieve desired thermal characteristics, or for other purposes); other attachment techniques may be used Such configurability may not be possible (such as solder attachment), which other attachment techniques conventionally require significant consistency and regularity in contact location and size in order to achieve a reliable attachment.In some embodiments, a single DB region 130 may have multiple sub-regions with different metal densities; such an embodiment may be beneficial for enabling all connections between different portions of microelectronic component 102 and interposer 150 or microelectronic component 102 desired heat transfer. 
For example, some parts of the microelectronic component 102 may generate more heat than other areas (eg, a central processing unit (CPU) may have high power areas, such as matrix multipliers and cache areas, and other lower power areas), and Thus, within DB region 130, sub-regions near those portions may have a greater metal density than sub-regions of DB region 130 that are not near those portions (eg, by any suitable combination of size, shape, and distribution of DB contacts 110). accomplish). In another example, some portions of microelectronic component 102 may be more sensitive to temperature increases (eg, temperature increases may lead to significant negative performance consequences), and thus within DB region 130, sub-regions near those portions may be more Sub-regions of DB region 130 that are not close to those portions have less metal density (eg, achieved by any suitable combination of size, shape, and distribution of DB contacts 110). 23 illustrates the microelectronic assembly 100 in which the DB region 130-1 includes a first subregion 130-1A and a second subregion 130-1B, and the first subregion 130-1A has a greater metal density than the second subregion 130 -1B metal density. The microelectronic assembly 100 of FIG. 23 also illustrates a DB region 130-2, which includes a first subregion 130-2A and a second subregion 130-2B, wherein the first subregion 130-2A has a greater metal density than the second subregion Metal density of zone 130-2B. 24 is a top view of DB interface 180-1 that may correspond to DB region 130-1 of microelectronic assembly 100 of FIG. 23, and FIG. 24 is a DB that may correspond to DB region 130-2 of microelectronic assembly 100 of FIG. 23 Top view of interface 180-2. In particular, in FIG. 24, the first sub-area 180-1A of the DB interface 180-1 may correspond to the first sub-area 130-1A of the DB area 130-1 of FIG. 23, and the second sub-area 130-1A of the DB interface 180-1 Subregion 180-1B may correspond to second subregion 130-1B of DB region 130-1 of FIG. 23; similarly, in FIG. 25, first subregion 180-2A of DB interface 180-2 may correspond to FIG. The first sub-region 130-2A of the DB region 130-2 of FIG. 23, and the second sub-region 180-2B of the DB interface 180-2 may correspond to the second sub-region 130-2B of the DB region 130-2 of FIG. In the embodiment of FIG. 24, the second sub-region 180-1B may partially surround the first sub-region 180-1A, and in the embodiment of FIG. 25, the second sub-region 180-2B may surround the first sub-region 180- 2A; these particular arrangements are illustrative only, and the DB region 130/DB interface 180 may include any desired arrangement of subregions with different metal densities. Further, although the embodiment of FIGS. 23-25 depicts the DB area 130/DB interface 180 including 2 subareas, this is merely illustrative and the DB area 130/DB interface 180 may include two or more as desired sub-areas.In some embodiments, microelectronic assembly 100 may include features in one or more DB interfaces 180 that may exhibit anisotropic in-plane thermal conductivity and thus selectively transfer heat around surfaces. For example, FIG. 26 depicts microelectronic assembly 100 in which dummy metal traces 196 (including, for example, copper and/or any of the materials discussed herein with reference to DB contact 110 ) are coplanar with DB contact 110 and may extend from one DB region 130 to Another DB area 130 (ie, between DB areas 130-1 and 130-3 of FIG. 26). 
26A is a cross-sectional side view (through section AA of FIG. 26B ) of such a microelectronic assembly 100 , and FIG. 26B is a top view of the microelectronic assembly 100 with the molding material 126 and the microelectronic component 102 removed so that the DB contacts 110 and dummy metal traces 196 are visible; the dashed boxes in Figure 26B indicate the footprints of microelectronic components 102-1, 102-2, and 102-3. The dummy metal traces 196 may not be coupled to any circuitry in the microelectronic component 102 or interposer 150, but may instead exist as heat pipes, allowing heat to move along the dummy metal traces 196 according to their pattern; in In other embodiments, dummy metal traces 196 may be coupled to dummy metal lines/vias in microelectronic component 102 and/or interposer 150 (eg, for additional thermal transfer).In the embodiment of FIG. 26, one or more dummy metal traces 196 may be in DB region 130-1 (eg, "under the microelectronic component 102-1") and DB region 130-3 (eg, "under the microelectronic component 102-1") Electronic components 102-3") extend between. One or more dummy metal traces 196 may not extend under the plurality of microelectronic components 102, but may extend near the plurality of microelectronic components 102 and/or under a single microelectronic component 102; such dummy metal traces 196 An example of is shown in Figure 26B, proximate the footprint of microelectronic component 102-1 and extending below microelectronic component 102-3. In some such embodiments, microelectronic component 102-1 may be a heat generating component, and microelectronic component 102-3 may be a dummy component (eg, without active devices and present to at least partially function as a heat sink); During operation, heat generated by microelectronic component 102-1 may be absorbed by dummy metal traces 196 and transferred to microelectronic component 102-3, thereby cooling microelectronic component 102-1. In some embodiments, dummy metal traces 196 may extend around the footprint of thermally sensitive microelectronic components 102 (eg, memory components, amplifiers, etc.). For example, in the embodiment of Figure 26, dummy metal traces 196 extend around the footprint of microelectronic component 102-2. In some embodiments, DB dielectric 108-2 of DB region 130-2 (associated with microelectronic component 102-2) may be selected to have a ratio of DB dielectric 108-1 and DB region 130-1 to DB region 130-1 3 of the DB dielectric 108-3 has a lower thermal conductivity to further differentiate the microelectronic component 102-2 from the ones generated and/or carried by the microelectronic components 102-1 and 102-3 and the dummy metal traces 196 thermal isolation. The dummy metal traces 196 may be part of one or more of the DB regions 130 (eg, in areas where the dummy metal traces 196 overlap the footprint of the microelectronic component 102 ), and thus some of the dummy metal traces 196 or All can be used for direct bonding.In the embodiment of FIGS. 1 and 2 , DB dielectric 108 extends beyond DB region 130 , covering the remainder of the top surface of interposer 150 . In other embodiments, different materials may be provided at the top surface of the interposer 150 outside the DB region 130 . For example, Figure 27 illustrates a microelectronic assembly 100 in which a different material 134 than DB dielectrics 108-1 and 108-2 is disposed at the top surface of interposer 150 (eg, in contact with molding material 126). 
In some embodiments, material 134 may include one or more dielectric materials, such as one or more organic or inorganic dielectric materials. For example, material 134 may include an inorganic dielectric material including silicon and nitrogen (eg, in the form of silicon nitride); silicon and oxygen (eg, in the form of silicon oxide); or silicon, carbon, and nitrogen (eg, in the form of silicon carbonitride); or material 134 may include an organic dielectric material such as particle-filled epoxide, polyimide, particle-filled polyimide, or poly(p-phenylene- 2,6-benzobisoxazole) (PBO). In some embodiments, material 134 may be a dielectric material, and additional conductive material (eg, a metal such as aluminum or copper) may be disposed on material 134 .Microelectronic assembly 100 may include multiple "layers" of microelectronic components 102 coupled by direct bonding. For example, Figure 28 illustrates microelectronic assembly 100 in which microelectronic component 102-1 includes two DB interfaces 180 (not labeled) at its top surface, and microelectronic components 102-3 and 102-4 (which themselves DB interface 180 (not labeled at the bottom surface) is coupled to microelectronic component 102-1 via DB regions 130-3 and 130-4, respectively. Similarly, microelectronic component 102-2 includes a DB interface 180 (not labeled) at its top surface, and microelectronic component 102-5 (with its own DB interface 180 (not labeled) at its bottom surface) via the DB region 130-5 is coupled to microelectronic component 102-2. Thus, the microelectronic assembly 100 of FIG. 28 may be described as having two layers of microelectronic components 102 directly bonded. Any of the microelectronic components 102 disclosed herein may include one or more dies and may have different types of through conductive interconnects, such as copper pillars and TSVs (eg, through silicon vias).In some embodiments, microelectronic components 102-1 and 102-2 in the first layer of microelectronic assembly 100 of FIG. 28 may include conductive structures 194 between DB regions 130 at their top and bottom surfaces extending between, providing a conductive path for power, ground, and/or signals to the microelectronic components 102 in the second layer (ie, the microelectronic components 102-3, 102-4, and 102-5). In some embodiments, such conductive structures 194 may include one or more TSVs, including conductive material vias, such as metal vias, isolated from surrounding silicon or other semiconductor material by a blocking oxide, such as through-silicon vias (when Microelectronic components 102-1 and 102-2 include silicon substrates) or through glass vias (when microelectronic components 102-1 and 102-2 include glass substrates). In some embodiments, the microelectronic components 102-1 and 102-2 in the first layer may be passive (eg, not including transistors) or active (eg, including memory circuits and/or power delivery circuits) in the form of transistors).In the embodiment of FIG. 28, the molding material 126 may extend up to the microelectronic components 102 in the second layer and may laterally surround the microelectronic components 102 in the second layer, and in some embodiments (not shown) , the molding material 126 may cover the top surface of the microelectronic component 102 in the second layer. In other embodiments, the top surface of the molding material 126 may be coplanar with the exposed DB interface 180 , or recessed below the exposed DB interface 180 . 
In some embodiments, microelectronic assemblies 100 including exposed DB interface 180 may have a temporary, removable protective material (eg, adhesive material, not shown) on exposed DB interface 180 to protect them until a direct splicing operation is performed. Microelectronic assemblies 100 including multilayer microelectronic components 102 may be formed in the manner discussed above with reference to FIGS. 14-17 , wherein additional layers of microelectronic components 102 are coupled to preceding assemblies prior to deposition of molding material 126 . In some other embodiments, microelectronics including multilayer microelectronic component 102 may be formed by first assembling the various layers of microelectronic component 102 and then coupling the assembled layers to interposer 150 as discussed above with reference to FIG. 15 . Assembly 100. Microelectronic assembly 100 may not be limited to two layers of microelectronic component 102, but may include three or more layers as desired. Further, although the microelectronic components 102 in the individual layers in FIG. 28 are depicted as having the same height, this is for illustration only, and the microelectronic components 102 in any individual layer in the microelectronic assembly 100 may have different the height of. Further, not every microelectronic component 102 in microelectronic assembly 100 may be part of a stack of multiple microelectronic components 102; for example, in some variations of microelectronic assembly 100 of FIG. 28, microelectronic component 102 -5 may not be present on top of microelectronic component 102-2 (and thus microelectronic component 102-2 may not include conductive structures 194 (eg, may not include TSVs)).In some embodiments, the microelectronic assembly 100 may include one or more DB interfaces 180 exposed at the surface of the microelectronic assembly 100 . Although various of the preceding figures illustrate DB regions 130 at a single surface (eg, top surface) of interposer 150 , microelectronic assembly 100 may include DB regions at multiple surfaces of interposer 150 130. In some embodiments, microelectronic components 102 coupled by direct bonding to the bottom surface of interposer 150 may include conductive contacts on the bottom surface thereof for coupling to another component (eg, support component 182 ).In some embodiments, the metal in DB region 130 may be used to provide power/ground planes for components on either side of DB region 130 . For example, Figure 29A is a cross-sectional side view (through section AA of Figure 29B) of a portion of microelectronic assembly 100, including DB region 130 between two microelectronic components 102-1 and 102-2 (although the same may be used The structure provides a power/ground plane between the interposer 150 and the microelectronic component 102 coupled thereto), and FIG. 29B is of the DB interface 180 associated with the DB region 130 (eg, of the interposer or the microelectronic component 102 ). ) top view. DB interface 180 of DB region 130 may include DB contacts 110, which together (eg, when engaged) provide a power/ground plane 198 that may be used by microelectronic component 102-1 and/or microelectronic component 102-2. In particular, as shown in FIG. 29A, DB contact 110-1 may be coupled to power/ground plane 200-1 (power/ground rail) through interconnect 202 (eg, including one or more vias and/or lines, etc.) . 
The power/ground plane 200-1 itself may be coupled to a power/ground source (eg, in the support member 182, not shown) via a conductive structure 194 (eg, TSV). Power/ground plane 198 of DB region 130 may provide a thick, laterally expansive area that may be used for power/ground access by microelectronic component 102-1 and/or microelectronic component 102-2, thereby freeing the microelectronic component Regions and/or layers in 102-1 and/or microelectronic component 102-2 for these power/ground planes. Further, the achievable thickness of the power/ground plane 198 may be greater than the thickness achievable in the layers of the microelectronic component 102, and thus, the power/ground plane 198 in the DB region 130 may be more achievable than achievable using conventional methods Has lower resistance and therefore can have better power delivery efficiency.DB region 130 may include multiple power/ground planes 198 . For example, Figure 29B illustrates four different parallel "strip" power/ground planes 200 (labeled 200-1, 200-2, 200-3, and 200-4) over four different parallel "Striped" DB contacts 110 (labeled 110-1, 110-2, 110-3, and 110-4); power/ground plane 200 may be part of microelectronic component 102-1 and may be embedded in microelectronic component 102-1 in the dielectric material. Interconnect 202 may selectively couple different ones of DB contacts 110 to different ones of power/ground planes 200 . For example, DB contact 110-1 may be electrically coupled to power/ground plane 200-1, DB contact 110-2 may be electrically coupled to power/ground planes 200-2 and 200-4, and DB contact 110-3 may be electrically coupled to power /ground plane 200-1, and DB contact 110-4 may be electrically coupled to power/ground plane 200-3. In some embodiments, power/ground plane 200-1 may be the first power plane (eg, operating at the desired Vcc of microelectronic component 102-1), such that DB contacts 110-1 and 110-3 also act as power planes (eg, operating at the desired Vcc of the microelectronic component 102-1). In some embodiments, power/ground planes 200-2 and 200-4 may be ground planes (eg, providing a current return path), such that DB contact 110-1 also acts as a ground plane (eg, providing a current return path). In some embodiments, power/ground plane 200-3 may be a second power plane (eg, operating at a different voltage than the first power plane, eg, at the desired Vcc of microelectronic component 102-2 operation) such that the DB contact 110-4 also acts as a second power plane (eg, operating at the desired Vcc of the microelectronic component 102-2). The different DB contacts 110 in the DB region 130 may be arranged as part of any one or more power/ground planes 198 .In the embodiment of FIG. 29B, the different DB contacts in DB contact 110 (the portion of power/ground plane 198 in DB region 130) are shown arranged in parallel strips. This is illustrative only, and the DB contacts 110 that are part of the power/ground plane 198 in the DB region 130 may have any desired shape and arrangement (eg, rectangular or non-rectangular shapes, and regular or irregular arrangements). For example, FIGS. 30 and 31 illustrate DB contacts 110 at various DB interfaces 180 , having various shapes and arrangements, and being coupled to 200 with various arrangements of interconnects 202 . As noted above, the power/ground plane 198 in the DB region 130 may be used by the microelectronic components 102 at one or both sides of the DB region 130 . For example, FIG. 
32 is a side view of a portion of microelectronic assembly 100 in which power/ground plane 198 is used to provide a connection between conductive structure 194 (eg, TSV) and power/ground plane 200-1 (via intermediate interconnect 202). electrical path. In such an arrangement, the power/ground plane 198 can be said to provide a "cantilevered" path between the conductive structure 194 and the power/ground plane 200-1.The microelectronic components 102 and microelectronic assemblies 100 disclosed herein may be included in any suitable electronic components. 33-36 illustrate various examples of devices that may suitably include or be included in any of the microelectronic components 102 and microelectronic assemblies 100 disclosed herein.33 is a top view of wafer 1500 and die 1502, which may be included in any of the microelectronic components 102 disclosed herein. For example, die 1502 may be used as, or may be included in, microelectronic component 102 . Wafer 1500 may be composed of a semiconductor material and may include one or more dies 1502 having IC structures formed on the surface of wafer 1500 . Each of the dies 1502 may be a repeating unit of a semiconductor product including any suitable IC. After fabrication of the semiconductor product is complete, the wafer 1500 may undergo a singulation process in which the dies 1502 are separated from each other to provide discrete "chips" of the semiconductor product. The die 1502 may include one or more transistors (eg, some of the transistors 1640 of FIG. 34 discussed below) and/or support circuitry to route electrical signals to the transistors, as well as any other IC components. In some embodiments, wafer 1500 or die 1502 may include memory devices (eg, random access memory (RAM) devices such as static RAM (SRAM) devices, magnetic RAM (MRAM) devices, resistive RAM (RRAM) devices) , conductive bridge RAM (CBRAM) devices, etc.), logic devices (eg, AND, OR, NAND, or NOR gates), or any other suitable circuit element. Multiple of these devices may be combined on a single die 1502 . For example, a memory array formed of multiple memory devices may be formed on the same die 1502 as a processing device (eg, processing device 1802 of FIG. 36 ) or other logic configured to store information or execute in the memory device Instructions stored in a memory array.34 is a cross-sectional side view of an IC device 1600 that may be included in any of the microelectronic components 102 disclosed herein. For example, IC device 1600 (eg, as part of die 1502 , as discussed above with reference to FIG. 33 ) may be used as, or may be included in, microelectronic component 102 . One or more of IC devices 1600 may be included in one or more dies 1502 (FIG. 33). IC device 1600 may be formed on a substrate 1602 (eg, wafer 1500 of FIG. 33 ) and may be included in a die (eg, die 1502 of FIG. 33 ). Substrate 1602 may be a semiconductor substrate composed of a semiconductor material system including, for example, an n-type or p-type material system (or a combination of both). Substrate 1602 may include, for example, a crystalline substrate formed using bulk silicon or silicon-on-insulator (SOI) substructures. In some embodiments, the substrate 1602 may be formed using alternative materials that may or may not be combined with silicon, including but not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide or gallium antimonide. 
Substrate 1602 may also be formed using additional materials classified as Groups II-VI, III-V, or IV. Although a few examples of materials from which substrate 1602 may be formed are described herein, any material that may be used as a basis for IC device 1600 may be used. Substrate 1602 may be a singulated die (eg, die 1502 of FIG. 33 ) or part of a wafer (eg, wafer 1500 of FIG. 33 ).IC device 1600 may include one or more device layers 1604 disposed on substrate 1602 . Device layer 1604 may include features of one or more transistors 1640 (eg, metal oxide semiconductor field effect transistors (MOSFETs)) formed on substrate 1602 . Device layer 1604 may include, for example, one or more source and/or drain (S/D) regions 1620 , gate 1622 to control current flow in transistor 1640 between S/D regions 1620 , and to Electrical signals are routed to/from one or more S/D contacts 1624 of S/D region 1620 . Transistor 1640 may include additional features not depicted for clarity, such as device isolation regions, gate contacts, and the like. Transistor 1640 is not limited to the type and configuration depicted in FIG. 34 and may include a variety of other types and configurations, such as, for example, planar transistors, non-planar transistors, or a combination of the two. Planar transistors may include bipolar junction transistors (BJTs), heterojunction bipolar transistors (HBTs), or high electron mobility transistors (HEMTs). Non-planar transistors may include FinFET transistors, such as dual-gate transistors or tri-gate transistors, and wraparound or full wraparound gate transistors, such as nanoribbon and nanowire transistors.Each transistor 1640 may include a gate 1622 formed from at least two layers, a gate dielectric, and a gate electrode. The gate dielectric may comprise one layer or a stack of layers. One or more layers may include silicon oxide, silicon dioxide, silicon carbide, and/or high-k dielectric materials. High-k dielectric materials may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that can be used in gate dielectrics include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum oxide, zirconium oxide, zirconium oxide silicon, tantalum oxide, titanium oxide, barium strontium titanium oxide, Barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide and lead niobate. In some embodiments, when using high-k materials, an annealing process may be performed on the gate dielectric to improve its quality.The gate electrode may be formed on the gate dielectric and may include at least one p-type work function metal or n-type work function metal, depending on whether transistor 1640 is a p-type metal oxide semiconductor (PMOS) or an n-type metal oxide semiconductor ( NMOS) transistor. In some embodiments, the gate electrode may consist of a stack of two or more metal layers, wherein one or more of the metal layers is a work function metal layer and at least one of the metal layers is a fill metal layer. Additional metal layers, such as barrier layers, may be included for other purposes. 
For PMOS transistors, metals that can be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, conductive metal oxides (eg, ruthenium oxide), and any of the metals discussed below with reference to NMOS transistors (eg, for power function tuning). For NMOS transistors, metals that can be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, carbides of these metals (eg, hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and Aluminum Carbide), and any of the metals discussed above with reference to PMOS transistors (eg, for work function tuning).In some embodiments, when viewed from a cross-section of transistor 1640 along the source-channel-drain direction, the gate electrode may consist of a U-shaped structure including a surface substantially parallel to the substrate a bottom portion and two sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments, at least one of the metal layers forming the gate electrode may only be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions that are substantially perpendicular to the top surface of the substrate. In other embodiments, the gate electrode may consist of a combination of U-shaped structures and planar non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed on top of one or more planar non-U-shaped layers.In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed of materials such as silicon nitride, silicon oxide, silicon carbide, carbon-doped silicon nitride, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and typically include deposition and etching process steps. In some embodiments, multiple spacer pairs may be used; for example, two, three, or four pairs of sidewall spacers may be formed on opposite sides of the gate stack.S/D regions 1620 may be formed within the substrate 1602 adjacent to the gate 1622 of each transistor 1640 . S/D regions 1620 may be formed using, for example, an implant/diffusion process or an etch/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorus, or arsenic may be ion-implanted into the substrate 1602 to form S /D area 1620. An annealing process that activates the dopants and causes them to diffuse further into the substrate 1602 may follow an ion implantation process. In the latter process, the substrate 1602 may be first etched to form grooves at the locations of the S/D regions 1620 . An epitaxial deposition process can then be performed to fill the grooves with the material used to fabricate the S/D regions 1620 . In some embodiments, the S/D regions 1620 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In some embodiments, the S/D regions 1620 may be formed using one or more alternative semiconductor materials such as germanium or III-V materials or alloys. 
In further embodiments, one or more layers of metals and/or metal alloys may be used to form S/D regions 1620 .Electrical signals, such as power and/or input/output (I/O) signals, may be routed through one or more interconnect layers (illustrated in FIG. 34 as interconnect layers 1606-1610) disposed on device layer 1604 Devices (eg, transistors 1640 ) to and/or route from device layer 1604 . For example, conductive features of device layer 1604 (eg, gate 1622 and S/D contacts 1624) may be electrically coupled with interconnect structures 1628 of interconnect layers 1606-1610. One or more interconnect layers 1606 - 1610 may form a metallization stack (also referred to as an "ILD stack") 1619 of IC device 1600 .Interconnect structures 1628 may be arranged within interconnect layers 1606-1610 to route electrical signals according to a variety of designs (in particular, the arrangement is not limited to the particular configuration of interconnect structures 1628 depicted in Figure 34). Although a particular number of interconnect layers 1606-1610 are depicted in FIG. 34, embodiments of the present disclosure include IC devices having more or fewer interconnect layers than depicted.In some embodiments, interconnect structure 1628 may include lines 1628a and/or vias 1628b filled with a conductive material, such as metal. Lines 1628a may be arranged to route electrical signals in the direction of a plane that is substantially parallel to the surface of substrate 1602 on which device layer 1604 is formed. For example, wire 1628a may route electrical signals in the direction entering and leaving the page from the perspective of FIG. 34 . The vias 1628b may be arranged to route electrical signals in the direction of a plane that is substantially perpendicular to the surface of the substrate 1602 on which the device layer 1604 is formed. In some embodiments, vias 1628b may electrically couple together lines 1628a of different interconnect layers 1606-1610.Interconnect layers 1606-1610 may include dielectric material 1626 disposed between interconnect structures 1628, as shown in FIG. In some embodiments, the dielectric material 1626 disposed between the interconnect structures 1628 in different ones of the interconnect layers 1606-1610 may have different compositions; in other embodiments, the different interconnect layers 1606 The composition of the dielectric material 1626 between -1610 may be the same.A first interconnect layer 1606 may be formed over the device layer 1604 . In some embodiments, the first interconnect layer 1606 may include lines 1628a and/or vias 1628b, as shown. Lines 1628a of first interconnect layer 1606 may be coupled with contacts of device layer 1604 (eg, S/D contacts 1624).The second interconnect layer 1608 may be formed over the first interconnect layer 1606 . In some embodiments, the second interconnect layer 1608 may include vias 1628b to couple the lines 1628a of the second interconnect layer 1608 with the lines 1628a of the first interconnect layer 1606 . 
Although lines 1628a and vias 1628b are structurally delineated with lines within each interconnect layer (eg, within second interconnect layer 1608 ) for clarity, in some embodiments lines 1628a and vias 1628b may be structurally and/or materially continuous (eg, simultaneously filled during a dual damascene process).A third interconnect layer 1610 (and additional interconnect layers, as desired) may be formed in succession on the second interconnect layer 1608 according to similar techniques and configurations described in relation to the second interconnect layer 1608 or the first interconnect layer 1606 ). In some embodiments, interconnect layers that are "higher up" (ie, further away from device layer 1604 ) in metallization stack 1619 in IC device 1600 may be thicker.IC device 1600 may include solder resist material 1634 (eg, polyimide or similar material) and one or more conductive contacts 1636 formed on interconnect layers 1606-1610. In Figure 34, the conductive contacts 1636 are illustrated as taking the form of bond pads. Conductive contacts 1636 may be electrically coupled with interconnect structures 1628 and configured to route electrical signals of transistor(s) 1640 to other external devices. For example, solder bonds may be formed on one or more conductive contacts 1636 to mechanically and/or electrically couple a chip including IC device 1600 to another component (eg, a circuit board). IC device 1600 may include additional or alternative structures to route electrical signals from interconnect layers 1606-1610; for example, conductive contacts 1636 may include other similar features (eg, posts) that route electrical signals to external components.35 is a cross-sectional side view of an IC device assembly 1700, which may include any of the microelectronic components 102 and/or microelectronic assemblies 100 disclosed herein. IC device assembly 1700 includes a number of components disposed on a circuit board 1702, which may be, for example, a motherboard. IC device assembly 1700 includes components disposed on a first side 1740 of circuit board 1702 and an opposing second side 1742 of circuit board 1702; typically, components may be disposed on one or both of sides 1740 and 1742. Any of the IC packages discussed below with reference to IC device assembly 1700 may include any embodiment of microelectronic assembly 100 disclosed herein (eg, may include multiple microelectronic components 102 coupled together by direct bonding).In some embodiments, circuit board 1702 may be a PCB that includes multiple metal layers separated from each other by layers of dielectric material and interconnected by conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals between components coupled to the circuit board 1702 (optionally in combination with other metal layers). In other embodiments, the circuit board 1702 may be a non-PCB substrate.The IC device assembly 1700 illustrated in FIG. 35 includes a package-on-interposer structure 1736 coupled to the first side 1740 of the circuit board 1702 by a coupling feature 1716 . Coupling features 1716 can electrically and mechanically couple package-on-interposer structure 1736 to circuit board 1702 and can include solder balls (as shown in FIG. 
35 ), male and female portions of a socket, adhesive, underfill material, and/or or any other suitable electrical and/or mechanical coupling structure.Package-on-interposer structure 1736 may include IC package 1720 coupled to package interposer 1704 by coupling feature 1718 . Coupling member 1718 may take any suitable form for the application, such as the forms discussed above with reference to coupling member 1716 . Although a single IC package 1720 is shown in FIG. 35 , multiple IC packages may be coupled to the package interposer 1704 ; in practice, additional interposers may be coupled to the package interposer 1704 . Package interposer 1704 may provide an intermediate substrate for bridging circuit board 1702 and IC package 1720 . IC package 1720 may be or include, for example, a die (die 1502 of FIG. 33 ), an IC device (eg, IC device 1600 of FIG. 34 ), or any other suitable component. Typically, encapsulation interposer 1704 can expand connections to wider pitches or reroute connections to different connections. For example, package interposer 1704 may couple IC package 1720 (eg, a die) to a set of BGA conductive contacts of coupling feature 1716 for coupling to circuit board 1702 . In the embodiment illustrated in FIG. 35 , the IC package 1720 and the circuit board 1702 are attached to opposite sides of the package interposer 1704 ; in other embodiments, the IC package 1720 and the circuit board 1702 may be attached to the same side. In some embodiments, three or more components may be interconnected by way of package interposer 1704 .In some embodiments, the package interposer 1704 may be formed as a PCB including multiple metal layers separated from each other by layers of dielectric material and interconnected by conductive vias. In some embodiments, the encapsulation interposer 1704 may be formed of epoxy, glass fiber reinforced epoxy, epoxy with inorganic fillers, ceramic materials, or polymeric materials such as polyimide. In some embodiments, package interposer 1704 may be formed of alternating rigid or flexible materials, which may include the same materials described above for use in semiconductor substrates, such as silicon, germanium, and other III-V and IV materials. Package interposer 1704 may include metal lines 1710 and vias 1708 , including but not limited to TSVs 1706 . Package interposer 1704 may also include embedded devices 1714, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices, such as radio frequency devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on package interposer 1704 . The package-on-interposer structure 1736 may take the form of any package-on-interposer structure known in the art.IC device assembly 1700 may include IC package 1724 coupled to first side 1740 of circuit board 1702 by coupling member 1722 . Coupling component 1722 may take the form of any of the embodiments discussed above with reference to coupling component 1716 , and IC package 1724 may take the form of any of the embodiments discussed above with reference to IC package 1720 .The IC device assembly 1700 illustrated in FIG. 35 includes a package-on-package structure 1734 coupled to the second side 1742 of the circuit board 1702 by coupling features 1728 . 
Package-on-package structure 1734 may include IC package 1726 and IC package 1732 coupled together by coupling features 1730 such that IC package 1726 is disposed between circuit board 1702 and IC package 1732 . Coupling components 1728 and 1730 may take the form of any embodiment of coupling component 1716 discussed above, and IC packages 1726 and 1732 may take the form of any embodiment of IC package 1720 discussed above. The package-on-package structure 1734 may be configured according to any package-on-package structure known in the art.36 is a block diagram of an example electrical device 1800 that may include any of the microelectronic components 102 and/or microelectronic assemblies 100 disclosed herein. For example, any suitable of the components of electrical device 1800 may include one or more of IC device assembly 1700 , IC device 1600 , or die 1502 disclosed herein. Various components are illustrated in FIG. 36 as being included in electrical device 1800, but any one or more of these components may be omitted or duplicated as appropriate for the application. In some embodiments, some or all of the components included in electrical device 1800 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated on a single system-on-chip (SoC) die.Additionally, in various embodiments, electrical device 1800 may not include one or more of the components illustrated in Figure 36, but electrical device 1800 may include interface circuitry for coupling to one or more components. For example, electrical device 1800 may not include display device 1806, but may include display device interface circuitry (eg, connector and driver circuitry) to which display device 1806 may be coupled. In another set of examples, electrical device 1800 may not include audio input device 1824 or audio output device 1808, but may include audio input or output device interface circuitry to which audio input device 1824 or audio output device 1808 may be coupled (eg, connecting device and support circuits).Electrical device 1800 may include processing device 1802 (eg, one or more processing devices). As used herein, the term "processing device" or "processor" may refer to a device that processes electronic data from registers and/or memory to convert the electronic data into other electronic data that may be stored in the registers and/or memory any device or part of a device. Processing device 1802 may include one or more digital signal processors (DSPs), application specific integrated circuits (ASICs), CPUs, graphics processing units (GPUs), cryptographic processors (dedicated processors that execute cryptographic algorithms within hardware), servers processor or any other suitable processing device. Electrical device 1800 may include memory 1804, which may itself include one or more memory devices, such as volatile memory (eg, dynamic random access memory (DRAM)), non-volatile memory (eg, read only memory (ROM) )), flash memory, solid state memory and/or hard disk drives. In some embodiments, memory 1804 may include memory that shares a die with processing device 1802 . The memory can be used as cache memory and can include embedded dynamic random access memory (eDRAM) or spin transfer torque magnetic random access memory (STT-MRAM).In some embodiments, the electrical device 1800 may include a communication chip 1812 (eg, one or more communication chips). 
For example, the communications chip 1812 may be configured to manage wireless communications for transmitting data to and from the electrical device 1800 . The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., which can communicate data via non-solid media through the use of modulated electromagnetic radiation. The term does not imply that the associated devices do not contain any wires, although in some embodiments they may not.Communication chip 1812 may implement any of a variety of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards, including Wi-Fi (IEEE 802.11 series), IEEE 802.16 standards (eg, IEEE802.16-2005 amendments), the Long Term Evolution (LTE) project together with any amendments, updates and/or revisions (eg, the LTE-Advanced project, the Ultra Mobile Broadband (UMB) project (also known as "3GPP2"), etc.). IEEE 802.16 compliant Broadband Wireless Access (BWA) networks are often referred to as WiMAX networks - an acronym that stands for Worldwide Interoperability for Microwave Access, which is the certification of products that have passed conformance and interoperability testing of the IEEE 802.16 standard logo. The communication chip 1812 may operate according to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA) or LTE networks . The communication chip 1812 may operate according to Enhanced Data Evolution for GSM (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN) or Evolved UTRAN (E-UTRAN). The communication chip 1812 can be used according to Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO) and its derivatives and designated as 3G, 4G, 5G and higher than any other wireless protocol to operate. In other embodiments, the communication chip 1812 may operate according to other wireless protocols. The electrical device 1800 may include an antenna 1822 to facilitate wireless communications and/or receive other wireless communications, such as AM or FM radio transmissions.In some embodiments, the communications chip 1812 may manage wired communications, such as electrical, optical, or any other suitable communications protocol (eg, Ethernet). As noted above, the communication chip 1812 may include multiple communication chips. For example, the first communication chip 1812 may be dedicated to short-range wireless communication such as Wi-Fi or Bluetooth, and the second communication chip 1812 may be dedicated to communication such as Global Positioning System (GPS), EDGE, GPRS, CDMA, WiMAX, Longer range wireless communication like LTE, EV-DO or others. In some embodiments, the first communication chip 1812 may be dedicated to wireless communication, and the second communication chip 1812 may be dedicated to wired communication.Electrical device 1800 may include battery/power circuit 1814 . 
The battery/power circuit 1814 may include one or more energy storage devices (eg, batteries or capacitors) and/or circuits for coupling components of the electrical device 1800 to an energy source (eg, AC line power) separate from the electrical device 1800 circuit.The electrical device 1800 may include a display device 1806 (or corresponding interface circuitry, as discussed above). Display device 1806 may include any visual indicator, such as a heads-up display, computer monitor, projector, touch screen display, liquid crystal display (LCD), light emitting diode display, or flat panel display.The electrical device 1800 may include an audio output device 1808 (or corresponding interface circuitry, as discussed above). Audio output device 1808 may include any device that generates audible indicators, such as speakers, headphones, or earbuds.Electrical device 1800 may include audio input device 1824 (or corresponding interface circuitry, as discussed above). Audio input device 1824 may include any device that generates a signal representing sound, such as a microphone, a microphone array, or a digital instrument (eg, an instrument with a musical instrument digital interface (MIDI) output).Electrical device 1800 may include GPS device 1818 (or corresponding interface circuitry, as discussed above). The GPS device 1818 can communicate with a satellite-based system and can receive the location of the electrical device 1800, as is known in the art.The electrical device 1800 may include other output devices 1810 (or corresponding interface circuitry, as discussed above). Examples of other output devices 1810 may include audio codecs, video codecs, printers, wired or wireless transmitters for providing information to other devices, or additional storage devices.The electrical device 1800 may include other input devices 1820 (or corresponding interface circuitry, as discussed above). Examples of other input devices 1820 may include accelerometers, gyroscopes, compasses, image capture devices, keyboards, cursor control devices such as mice, styluses, touch pads, barcode readers, quick response (QR) code readers, any sensor or radio frequency identification (RFID) readers.Electrical device 1800 may have any desired form factor, such as a handheld or mobile electrical device (eg, cell phone, smartphone, mobile internet device, music player, tablet, laptop, netbook, ultrabook, personal digital Assistants (PDAs, ultra-mobile personal computers, etc.), desktop electrical equipment, server equipment or other networked computing components, printers, scanners, monitors, set-top boxes, entertainment control units, vehicle control units, digital cameras, digital video recorders, or wearables Electrical Equipment. 
In some embodiments, electrical device 1800 may be any other electronic device that processes data.The following paragraphs provide various examples of the embodiments disclosed herein.Example 1 is a microelectronic assembly comprising: a first microelectronic component; and a second microelectronic component coupled to the first microelectronic component through a direct bond area, wherein the direct bond area includes a first metal contact and a second metal contacts, the first metal contact has a larger area than the second metal contact, and the first metal contact is electrically coupled to the power/ground plane of the first microelectronic component.Example 2 includes the subject matter of Example 1 and further provides that the second metal contact is electrically coupled to the signal path of the first microelectronic component.Example 3 includes the subject matter of any of Examples 1-2, and further provides that the direct bond region includes a third metal contact having a larger area than the second metal contact, the first metal contact being electrically coupled to the first metal contact The power plane of the microelectronic component, and the third metal contact is electrically coupled to the ground plane of the first microelectronic component.Example 4 includes the subject matter of Example 3, and further provides that the first metal contact is parallel to the second metal contact.Example 5 includes the subject matter of any of Examples 1-4, and further provides that the direct bond region includes a fourth metal contact, the fourth metal contact having a larger area than the second metal contact, the power plane being of the first microelectronic component The first power plane, and the fourth metal contact is electrically coupled to the second power plane of the first microelectronic component.Example 6 includes the subject matter of Example 5, and further specifies that the second power plane is to operate at a different voltage than the voltage at which the first power plane is to operate.Example 7 includes the subject matter of any of Examples 1-4, and further provides that the direct bond region includes a fourth metal contact, the fourth metal contact has a larger area than the second metal contact, and the fourth metal contact is electrically coupled to the second metal contact. 
A power plane for a microelectronic component.Example 8 includes the subject matter of Example 7, and further provides that the first metal contact is parallel to the second metal contact.Example 9 includes the subject matter of any of Examples 1-8, and further provides that the first metal contact has a non-rectangular footprint.Example 10 includes the subject matter of any of Examples 1-9, and further provides that the second metal contact has a non-rectangular footprint.Example 11 includes the subject matter of any of Examples 1-10, and further provides that the first microelectronic component includes an interposer.Example 12 includes the subject matter of Example 11, and further provides that the interposer includes an organic dielectric material.Example 13 includes the subject matter of any of Examples 1-12, and further provides that the first microelectronic component includes a die.Example 14 includes the subject matter of any of Examples 1-13, and further provides that the second microelectronic component includes a die.Example 15 includes the subject matter of Example 14, and further provides that the die of the second microelectronic component is a dummy die.Example 16 includes the subject matter of any of Examples 1-15, and further provides that the power/ground plane of the first microelectronic component is in contact with the substrate via.Example 17 includes the subject matter of any of Examples 1-16, and further provides that the first metal contact includes copper.Example 18 includes the subject matter of Example 17, and further specifies that the first metal contact includes manganese and nickel.Example 19 includes the subject matter of any of Examples 1-18, and further provides that the first metal contact includes manganese, titanium, gold, silver, palladium, nickel, aluminum, tantalum, or cobalt.Example 20 includes the subject matter of Example 19, and further provides that the first metal contact includes tantalum and nitrogen.Example 21 includes the subject matter of any of Examples 19-20, and further provides that the first metal contact includes cobalt and iron.Example 22 includes the subject matter of any of Examples 1-21, and further provides that the first metal contact includes a bulk metal region and an interface metal region, and the interface metal region has a material composition that is different from the material composition of the bulk metal region.Example 23 includes the subject matter of any of Examples 1-22, and further provides that the first metal contact includes a metal pad.Example 24 includes the subject matter of Example 23, and further provides that the metal pad is in contact with the via, and the via is in the buildup material of the first microelectronic component.Example 25 includes the subject matter of any of Examples 1-22, and further provides that the first metal contact includes a metal via.Example 26 includes the subject matter of Example 25, and further provides that the metal via is in the dielectric material of the direct bond area.Example 27 includes the subject matter of any of Examples 25-26, and further provides that the metal via has a non-circular footprint.Example 28 includes the subject matter of any of Examples 25-27, and further provides that the metal via is in contact with a metal line in the first microelectronic component, and at least one side of the metal via is aligned with a side of the metal line.Example 29 includes the subject matter of any of Examples 1-28, and further provides that the direct bond region 
includes an inorganic dielectric material.Example 30 includes the subject matter of any of Examples 29, and further provides that the inorganic dielectric material includes silicon and oxygen; silicon and nitrogen; silicon, oxygen, and nitrogen; silicon, carbon, and nitrogen; or silicon, oxygen, carbon, and nitrogen.Example 31 includes the subject matter of any of Examples 1-30, and further provides that the microelectronic assembly further includes a heat sink.Example 32 includes the subject matter of Example 31, and further provides that the microelectronic assembly further includes a thermal interface material between the microelectronic component and the heat sink.Example 33 is a system comprising: a circuit board; and any of the microelectronic assemblies disclosed herein, communicatively coupled to the circuit board.Example 34 includes the subject matter of Example 33 and further specifies that the circuit board is a motherboard.Example 35 includes the subject matter of any of Examples 33-34, and further provides that the system is a handheld computing system.Example 36 includes the subject matter of any of Examples 33-35, and further provides that the system is a wearable computing system.Example 37 includes the subject matter of any of Examples 33-34, and further provides that the system is a server computing system.Example 38 includes the subject matter of any of Examples 33-34, and further provides that the system is an in-vehicle computing system.Example 39 includes the subject matter of any of Examples 33-38, and further provides that the system further includes a display communicatively coupled to the circuit board.Example 40 includes the subject matter of any of Examples 33-39, and further provides that the system further includes a wireless communication device communicatively coupled to the circuit board.Example 41 includes the subject matter of any of Examples 33-40, and further provides that the system further includes a housing surrounding the microelectronic assembly and the circuit board. |
Computational load is shifted into or out of a computational array based on one or more metrics associated with power generation associated with power used by the computational array. The computational load is shifted by supplying data associated with the computational load into or away from the computational array. The one or more metrics include change in amount of available power for the computational array. The computational load is shifted from the computational array to a second computational array supplied with power from a different power generation facility, based on an indication of a reduction of the available power for the computational array and sufficient computational capacity of the second computational array. |
WHAT IS CLAIMED IS: 1. A method comprising: shifting computational load into or out of a computational array based on one or more metrics associated with power generation associated with power used by the computational array. 2. The method as recited in claim 1 wherein one of the one or more metrics is associated with cost of the power used by the computational array. 3. The method as recited in claim 1 further comprising shifting the computational load by supplying data associated with the computational load into or away from the computational array. 4. The method as recited in claim 1 wherein the one or more metrics include change in an amount of available power for the computational array. 5. The method as recited in claim 1 further comprising shifting the computational load from the computational array to another computational array supplied with power from a different power generation facility according to an indication of available computational resources at the other computational facility. 6. The method as recited in claim 1 further comprising: supplying the power to the computational array from a fluctuating power generating facility; and shifting the computational load into and out of the computational array based on one of the one or more metrics reflecting a status of the fluctuating power generating facility. 7. The method as recited in claim 6 further comprising: tracking solar intensity for the fluctuating power generating facility that includes a solar array; and providing an indication related to the solar intensity as the status. 8. The method as recited in claim 7 further comprising: generating solar intensity data in concentric rings of photo-detectors around the solar array; and determining a change in solar intensity from the solar intensity data as the indication related to the solar intensity. 9. A system comprising: a first computational array receiving power from a first power generation facility; and a control system responsive to one or more metrics associated with at least one of power supplied to the first computational array and to a second computational array receiving power from a second power generation facility, to shift computational load between the first and the second computational array based on the one or more metrics. 10. The system as recited in claim 9 wherein the one or more metrics include at least one of an indication of available power at the first or second computational arrays or costs associated with the available power. 11. The system as recited in claim 9 wherein the control system is responsive to shift the computational load from the first computational array to the second computational array according to an indication of a reduction of available power for the first computational array. 12. The system as recited in claim 9 wherein the control system is responsive to shift the computational load from the first computational array to the second computational array according to an indication of computational resources being available at the second computational array. |
SHIFTING OF COMPUTATIONAL LOAD BASED ON POWER CRITERIA BACKGROUND Technical Field [1001] This application relates to shifting computational loads between computation centers based on power criteria. Background Art [1002] Computational arrays such as server farms are becoming more common as "cloud" computing, and other forms of remote computing, continue to expand. Remote computer facilities allow offsite functionality for various services such as mail, databases, and web hosting. Large organizations such as governments or corporations often utilize centralized or clusters of centralized computer facilities utilizing large numbers of servers. In addition, large groups of servers provide web services such as search and multi-media content. All these demands lead to an increase in the number of computational arrays of server computers. [1003] Electricity constitutes one of the higher costs to operate computational arrays. Accordingly, substantial investment has been made in hardware and software to operate the servers of the computational arrays efficiently and save power where possible. For example, power savings techniques include matching resources available both on chip and in the computational array to load requirements and reducing power to those resources not needed. Thus, computational arrays are powered according to load requirements. As electricity has been shown to be a significant expense in operating large computational arrays, continued improvement in reducing energy utilization of computational arrays and/or associated cost is desirable. DISCLOSURE OF INVENTION [1004] One aspect of cost savings with respect to energy utilization by computational arrays is the ability to move computational load around based on the availability of energy utilized by the array. Thus, in an embodiment a method is provided that includes shifting computational load into or out of a computational array based on one or more metrics associated with power generation associated with power used by the computational array. The method may further include shifting the computational load by supplying data associated with the computational load into or away from the computational array. In an embodiment the one or more metrics include change in amount of available power for the computational array. In an embodiment the one or more metrics include cost of the power for the computational array. In an embodiment the method further includes shifting the computational load from the computational array to another computational array supplied with power from a different power generation facility, according to an indication of one of the metrics indicating a reduction of the available power for the computational array. [1005] In an embodiment the method further includes supplying power to the computational array from a fluctuating power generating facility; and shifting the computational load into or out of the computational array based on one or more of the metrics reflecting a status of the fluctuating power generating facility. In an embodiment, the fluctuating power generating facility is a solar array and solar intensity data is generated in concentric rings of photo-detectors around the solar array, from which the change in solar intensity is determined as the status. The method may include determining magnitude and duration of the change in solar intensity. Solar intensity changes may be detected using optical imaging to measure size, distance and velocity of objects. 
[1006] In an embodiment, the method includes supplying as the power, wind generated power to the computational array from the fluctuating power generating facility and utilizing a change in wind for the fluctuating power generating facility as one of the metrics. [1007] In an embodiment the computational array includes a plurality of servers providing remote computing services. In an embodiment a power generation facility generating the power is collocated with the computational array. [1008] In another embodiment an apparatus includes a processing system responsive to indications of power generation conditions to determine an impending change in available power for a computational array based on the indications. [1009] In an embodiment the apparatus further includes concentric rings of photo- detectors disposed around a solar array and coupled to the processing system, the processing system responsive to information from the photodetectors to detect a change in intensity of solar radiation available to the solar array so as to determine the impending change in available power. [1010] In an embodiment the apparatus further includes optical imaging apparatus to measure size of blue sky, and distance and velocity of non-blue objects. [1011] In another embodiment a system includes at least a first computational array receiving power from a first power generation facility. A control system is responsive to one or more metrics associated with power supplied to at least one of the first computational array and a second computational array receiving power from a second power generation facility, to shift computational load between the first and the second computational array based on the one or more metrics. In an embodiment the one or more metrics include at least one of an indication of available power at the first or second computational arrays or costs associated with the supplied power. [1012] In an embodiment the control system responds to shift the computational load from the first computational array to the second computational array based on an indication of a reduction of available power for the first computational array. [1013] In an embodiment the system includes optical communication paths coupling the first and second computational arrays. [1014] In an embodiment the the control system is responsive to shift the computational load from the first computational array to the second computational array according to an indication of computational resources being available at the second computational array. [1015] In an embodiment a system includes a first and a second computational array respectively receiving power from separate power generation facilities. A control system is responsive to computational costs associated with at least one of the first and the second computational arrays to shift computational load from the first to the second computational array based on the computational costs. BRIEF DESCRIPTION OF THE DRAWINGS [1016] The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings. [1017] Fig. 1 illustrates a high level diagram of a computational system according to an embodiment of the invention. [1018] Fig. 2 illustrates a high level flow diagram showing high level operations associated with transferring of computational load from the perspective of the computational array from which load needs to be transferred. [1019] Fig. 
3 illustrates a high level flow diagram showing high level operations associated with transferring of computational load from the perspective of the computational array to which load is transferred. [1020] Fig 4 illustrates an embodiment in which concentric rings of photodetectors are placed outside a solar array to detect upcoming changes in solar intensity available to the solar array. [1021] Fig 5 illustrates an embodiment in which cameras are used to detect upcoming changes in solar intensity for a solar array. [1022] The use of the same reference symbols in different drawings indicates similar or identical items. MODE(S) FOR CARRYING OUT THE INVENTION [1023] Referring to Fig. 1, illustrated is a high level block diagram of a system of computational arrays 101, 103, and 105. Each computational array 101, 103, and 105, is supplied with a power from an associated power generational facility 107, 109, 111. The power generational facilities 107, 109, 111 are typically independent of each other and may generate power from different sources. For example, power generation facility 107 may be a solar array. Power generation facility 109 may utilize wind energy and power generation facility 111 may be a conventional coal, gas, bio-mass, or nuclear power generation facility. Of course, any type of power generation may be utilized in the power generation facilities. [1024] Volatile renewable energy sources (wind and solar) are relatively expensive because of their variable ability to produce power. In addition, connecting these power sources to the electrical grid frequently requires construction of new transmission lines. Construction of new transmission lines requires new towers between which fault prone aluminum cables are strung, and associated clearing of land for power breaks for the transmission towers. The cost of attaching renewable energy to the grid can exceed the cost of the windmills or solar panels being attached- necessitating very large renewable installations to amortize the cost of the tie-in to the electrical grid. [1025] Instead of shipping power from generation facilities to distant computers and losing some percentage of the electrical power in the process through transmission loss, the computational arrays can be located where the power is generated, thereby substantially eliminating transmission loss resulting in cheaper electricity. Losses are approximately 10% in transmission and distribution, but an additional 30% is lost in the data center and another 5-10% of the power is lost in additional AC/DC conversion inefficiencies. These numbers ignore the cost of cooling, which typically runs between 25 and 33% of total power budget. [1026] If computation is considered to be a local power sink, and the result of that computation is expressed as data (applied electricity) and that data is transmitted in optical fibers that functionally have zero (or comparatively reduced) loss, then computation can be used as a significantly more efficient applied electrical transmission and distribution model. That could require an over-build of computational capability, but the power advantages of local consumption and optical fiber data transmission and distribution in place of aluminum cable electrical transmission and distribution can more than cover the cost of idled computational resources. Further, all of these costs associated with power loss can be reduced by the functional conversion of renewable power generators into computer power supplies. 
With variable power generation for wind and solar based on weather and time of day, the computational load can be shifted to other computational facilities to take advantage of available power supplying a different computational array. [1027] The advantages of utility computing are lower cost and higher efficiency. As used herein, utility computing is the conversion of computing from a user owned and managed computing infrastructure to a service provider owned and managed infrastructure. An electrical outlet represents an abstraction of a very complex and expensive infrastructure for generating, distributing, and regulating electricity. The goal of utility computing is the abstraction of computing to a similar "other managed" resource. For example, relatively few people generate their own electricity. Utility computing attaches the same logic to using a computer that already exists for owning a generator, that is, relatively few people will wholly perform their own computations. Rather, those computations are performed in computational facilities preferably powered by renewable energy with the marginal energy costs of additional computing being near zero until the compute resources are exhausted. Further, the environmental costs are less where compute load can be shifted (or maintained) in those facilities powered by renewable energy. [1028] Further, data mobility is much greater than electrical mobility in the sense it is much easier to transport data than electricity. Many transmission lines are running at their limits. Shifting some of the load off of the electrical lines (even a small amount) and onto fiber optic data lines results in more available electrical transmission margin and greater overall electrical efficiency and lower environmental impact. [1029] Further, the cost of electricity dominates computing costs. Projections are that 75% or more of future server costs will be power. A significant decrease in electrical cost would have an overwhelming impact on computational cost. [1030] Referring again to Fig. 1, computational array 101 executes a computational load and receives power from fluctuating power generator 107 to execute that load. Power generator 107 is shown as a solar array in Fig. 1 but may instead be, e.g., a wind, tidal, or other similarly variable power generator whose power generation capability fluctuates based on external conditions such as solar intensity, wind, or tides. If the available power from power generator 107 is going to be reduced, because of, e.g., weather conditions or nightfall, the computational load can be shifted to another computational array. For example, computational array 101 can shift its computational load to computational array 103 via fiber optic communication lines 120. A power monitoring system 115 can monitor the conditions of power generator 107 and communicate with other power monitors 117 and 119 via the internet or other communications mechanism. The power monitoring systems may also communicate over the fiber optical communication lines 120. While shown as separate functions for ease of illustration, the power monitors 115, 117, and 119 may be part of the computational arrays with which they are associated. The power monitoring system 115 can determine characteristics of available power from the power monitoring stations 117 and 119 and request computational load transfer. 
The characteristics of available power supplied from power monitoring stations 117 and 119 may include such factors as power availability and price of the available power. In addition, the power monitoring systems 117 and 119 supply an indication of the spare computational capacity available, to indicate the ability to handle the computational load of computational array 101. The power monitor 115 evaluates the information received from the other power monitors and transfers the load based on the information received. The computational load may be split between computational facilities. [1031] One of the criteria to be evaluated can be the cost of operating the computational array, which is typically related to the cost of electricity needed to both power the computational array and cool or heat the array). As the price of electricity can drop at times of off-peak demand, it may be advantageous to shift the computational load to the lowest cost facility (the computational array powered by the lowest cost power generation facility). Thus, one computational array facility may have higher energy costs as compared to another facility. In order to reduce the total computational cost, it is advantageous to shift the computational load to computational facilities able to obtain the lowest cost power. [1032] As the price of electricity can drop after peak demand times, it may be advantageous to shift computational load to facilities based on time of day. For example, as lower costs become available because of time of day on the East Coast due to lower power demands compared to the West Coast, it may be advantageous to shift computational load from the West Coast to the East Coast and vice versa based on time of day and associated cost of electricity. [1033] One of the criteria to determine transfer of computational load is the cost associated with the transfer of the computational load. The cost of transfer has to be less than the difference in the cost of power to make load transfer economically efficient. [1034] In addition to cost of power, availability of power can also be a consideration. With alternative energy sources such as wind, solar, wave, etc., the availability of power from such sources may vary. Accordingly, the computational load for computational facilities being supplied with such power can be adjusted and/or shifted based on availability of such power. For example, cloudy days may cause computational load to be shifted to computational arrays supplied with power from a different power source. On the other hand, if abundant solar (or wave or wind) energy is available at a particular time, computational loads of other computational facilities supplied with conventional power may be shifted to the computational facility supplied with solar generated power. Such transfers may be particularly advantageous where subsidies are provided to solar (and other green) energy generation or taxes are imposed, at least indirectly, on computational arrays supplied with power generated with carbon based fuels. [1035] Referring to Fig. 2, illustrated is a high level flow diagram illustrating high level operations associated with transferring of computational load from the perspective of the computational array from which load needs to be transferred. At 201, the monitor associated with the computational array recognizes a metric of available power changing. 
That metric may be related to an amount of available power decreasing, e.g., wind decreasing on a wind-based power generation facility, or the cost of available electricity increasing. For example, the computational array normally powered from wind energy needs to switch to a more expensive fossil fuel based energy source. There may be a forecast for rain, or night may be approaching for a solar array. The computational array contacts other computational facilities requesting metrics associated with the computational facility. In one aspect, the contact may be in the form of a request for bid. Those metrics may include the amount of computational power available in, e.g., gigahertz hours, which is the amount of processing a single gigahertz processor can do in an hour, and the cost of each gigahertz hour, which may depend in part on the cost of the power being supplied to the array. Once all relevant information has been received from the computational arrays, the computational array transfers in 207 part or all of its computational load to one or more other computational arrays. [1036] Rather than request information from other computational arrays (from their associated monitors), other computational arrays may push the information on a periodic basis so all the computational arrays know what alternatives are available on a real-time or close to real-time basis. In fact, a common computing facility may monitor all of the computational arrays and move computational load based on availability of handling computational load and the cost associated with handling the computational load. [1037] Fig. 3 illustrates transfer of computational load from the perspective of a computational array receiving computational load. In 301, the computational array recognizes that it has availability in terms of computational load and provides that information to other computational facilities. That information may be pushed on a periodic basis regardless of availability. The receiving computational facility provides information related to cost and size of computational load it can handle. If another computational facility wishes to transfer its load, the transferring computational facility sends a request in 303 to the receiving computational facility, which can be accepted. If accepted, in 303 the computational load is transferred. [1038] Load transfer will be dominated not by the transfer of active processes from one computational resource to another, but by the decision regarding where to start a new process. Load is transferred when a job is started in a second computational facility instead of a first computational facility. [1039] Note that computational load can also be transferred based on a request from a power company. For example, a conventional utility may have an agreement and provide incentives in terms of cost of electricity to a computational array to transfer computational load to another computational facility during power shortages to help alleviate the shortage and provide more electrical generation and transmission capacity to other users. [1040] Suppose that a solar energy installation supplies power directly to a computational array. Further suppose that the computational array has variable ancillary power capabilities and that the preferred mode for power management is to shift computational load into and out of the computational array based on power availability, but that there may also be some limited local power storage capability. 
[1041] The available power at an array can reliably be determined based on the date and time, except for weather. An overcast day substantially reduces available power and a partly cloudy day can create significant amounts of power variability. A load shifting model functions much better in a predictive mode than in a reactive mode. Thus, an optical system that performs real-time cloud positioning and that provides accurate estimates of sun blockage time and duration can substantially increase the effectiveness of a solar powered computational model. Thus, in an embodiment, an optical imaging system tracks instantaneous solar intensity. Two general embodiments are described herein to track solar intensity, and of course variations of the described embodiments are possible. [1042] In one embodiment, referring to Fig. 4, concentric rings 401 of photodetectors 403 are placed outside a solar array 407 such that one or more of the photodetectors detect change in solar intensity, due, e.g., to clouds, and forwards the event to a centralized processing system 409 that uses the input from the entire array to determine the magnitude and duration of an event. While the processing system 409 is shown coupled to only one photodetector 403 for ease of illustration, the processing system 409 receives input from all of the photodetectors. Based on intensity information from each of the photodetectors, the processing system determines magnitude and direction of solar intensity impacting events, e.g., clouds moving over the array. Processing system 409, while shown separately for ease of illustration, may be incorporated into a collocated computational array. Further, the processing system 409 may be part of power monitor 115. The response characteristics of the associated solar array to changes in solar intensity help determine the diameter of the rings and the spacing of photodetectors. [1043] Referring to Fig. 5, in a second embodiment, a set of optical image capture devices (cameras) 501 measure "blue sky" size and distance and velocity of non-blue objects. While the processing system 409 is shown coupled to only one photodetector camera 501 for ease of illustration, the processing system 409 receives input from all of the cameras 409. That allows the system to track size, distance, and velocity of clouds to determine magnitude of a solar intensity disruption. The number of cameras required depends on the needs of the particular system. [1044] Based on the information available from the first or second embodiment, the centralized processing system uses the input to predict the magnitude and duration of an event that is about to impact the solar array. That prediction is used to tranfer computational load to a different computational array. A solar power generating station can have highly transient power delivery due to clouds. The transient nature of solar power may necessitate the creation of a longer duty cycle energy storage network. For example, storage of power for very short periods of time using capacitors can address certain transient responses of a solar array to cloud conditions. While transferring active load in response to transient changes in power supply can be almost instantaneous, shifting of active load needs more careful management to ensure that computations of the active load resume appropriately at a new location. 
[1045] As will be appreciated by those of skill in the art, other forms of "costs" (instead of, or in addition to, the cost of electricity and the associated costs of transport of electricity) can also be used to assess and dynamically allocate the computational resources. Such costs could include, for example, some metric associated with the environmental impact of the use of a computational resource (e.g., a metric used to assess greenhouse gas emissions), tax credits (e.g., the costs - which be positive or negative - of generating or consuming carbon tax credits which can be purchased or sold in a market for such financial instruments). [1046] The description of the invention set forth herein is illustrative, and is not intended to limit the scope of the invention as set forth in the following claims. Other variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope of the invention as set forth in the following claims. |
A capacitor (100) in an integrated circuit ("IC") has a first plurality of conductive crosses (102, 104) formed in a layer of the IC electrically connected to and forming a portion of a first node of the capacitor and a second plurality of conductive crosses (108, 110) formed in the metal layer of the IC. The conductive crosses in the second plurality of conductive crosses are electrically connected to and form a portion of a second node of the capacitor and capacitively couple to the first node. |
CLAIMS What is claimed is: 1. A capacitor in an integrated circuit ("IC") comprising: a first plurality of conductive crosses formed in a layer of the IC, each of the first plurality of conductive crosses being electrically connected to and forming a first portion of a first node of the capacitor; and a second plurality of conductive crosses formed in the layer of the IC, each of the second plurality of conductive crosses being electrically connected to and forming a first portion of a second node of the capacitor and capacitively coupling to the first node. 2. The capacitor of claim 1 wherein each of the first plurality of conductive crosses is symmetrical. 3. The capacitor of claim 1 or 2 wherein each of the second plurality of conductive crosses is symmetrical. 4. The capacitor of any one of claims 1-3 wherein the conductive crosses in the first plurality of conductive crosses are electrically isolated from each other within the layer by dielectric material. 5. The capacitor of any one of claims 1 -4 further comprising a first buss bar electrically connected to and forming a second portion of the first node and a second buss bar electrically connected to and forming a second portion of the second node, each of the first plurality of conductive crosses electrically connected to the first buss bar in the layer and each of the second plurality of conductive crosses electrically connected to the second buss bar in the layer. 6. The capacitor of claim 5 further comprising a second layer overlying the layer having a third plurality of conductive crosses electrically connected to and forming a third portion of the second node, each of the conductive crosses in the third plurality of conductive crosses overlying conductive crosses in the first plurality of conductive crosses. 7. The capacitor of claim 6 further comprising a fourth plurality of conductive crosses in the second layer electrically connected to and forming a third portion of the first node overlying conductive crosses in the second plurality of conductive crosses. 8. The capacitor of claim 1 wherein a first horizontal member of a first cross of the first plurality of conductive crosses overlaps a portion of a parallel member of a second cross of the second plurality of conductive crosses and overlaps an end of a perpendicular member of a third cross of the second plurality of conductive crosses. 9. The capacitor of claim 1 further comprising an interconnection layer in a second layer of the IC above or below the layer, the interconnection layer having a first node interconnector conductor and a second node interconnector conductor, the first node interconnector conductor being electrically connected to each of the first plurality of conductive crosses and the second node interconnector conductor being electrically connected to each of the second plurality of conductive crosses. 10. The capacitor of claim 9 further comprising a third layer of the IC, the interconnection layer being between the third layer and the layer, a third plurality of conductive crosses formed in the third layer electrically connected to and forming a second portion of the first node interconnector conductor; and a fourth plurality of conductive crosses formed in the third layer electrically connected to and forming a second portion of the second node interconnector conductor. 11. 
The capacitor of claim 9 wherein the first node interconnector conductor includes an interconnect trace overlying and electrically connecting to a first cross of the first plurality of conductive crosses and at least partially overlying and capacitively coupling to a second cross of the second plurality of conductive crosses. 12. The capacitor of claim 9 wherein the interconnection layer includes a first plurality of interconnect traces of the first node interconnector conductor alternating with a second plurality of interconnect traces of the second node interconnector conductor. 13. The capacitor of claim 11 wherein the first cross in the layer has a vertical element having a first width and each of the first plurality of interconnector traces has a first portion having the first width and a second portion having a second width greater than the first width. 14. The capacitor of claim 1 wherein the layer has a first row of conductive crosses of alternating polarity an a second row of conductive H-elements of alternating polarity proximate to the first row, a first conductive H-element in the second row being diagonally connected to a conductive first cross in the first row, each of the first conductive H-element and the first conductive cross being electrically connected to and forming a second portion of the first node, and a second conductive H-element in the second row being diagonally connected to a second conductive cross in the first row, each of the second conductive H-element and the second conductive cross being electrically connected to and forming a second portion of the second node. 15. The capacitor of claim 14 further comprising a second layer overlying the layer having a third row of conductive H-elements of alternating polarity, a third conductive H-element overlying the first conductive H-element and being electrically connected to and forming a third portion of the second node. |
INTEGRATED CAPACITOR WITH ARRAY OF CROSSES FIELD OF THE INVENTION The present invention relates to capacitors formed in integrated circuits ("ICs") commonly referred to as "integrated capacitors". BACKGROUND Methods of fabricating ICs typically include a front-end sequence of processing, in which various electrical devices such as transistors are formed in a semiconductor substrate, and a back-end sequence of processing, generally including forming alternating layers of dielectric material and patterned conductive material (typically metal) with conductive vias or other techniques being used to interconnect the metal layers to form a three-dimensional wiring structure that connects electrical devices to other electrical devices and to terminals of the IC. Capacitors are used in IC systems for a variety of purposes. In many instances, it is desirable to incorporate (integrate) a capacitor in the IC chip. A simple approach is to form two conductive plates with an intervening dielectric; however, this consumes a relatively large area for the capacitance obtained. One technique for increasing the capacitance of a given area is to use multiple conductive plates, each conductive plate separated from the proximate plate(s) by dielectric. Further techniques use conducting strips, also called conductive lines, conductive fingers, or conductive traces that are alternately connected to the first and second capacitor terminals (nodes). Sidewall coupling between the conductive strips provides capacitance. Layers of conducting strips, either offset or arranged in vertical congruency, can be added to further increase the capacitance of an integrated capacitor structure. One capacitor has a number of conductive strips in successive layers connected to the first node alternating with an equal number of conductive strips connected to the second node of the integrated capacitor. The conductive strips are offset a half cell on successive layers, so that a conductive strip connected to the first node has conductive strips connected to the second node above and on both sides of it. Providing an equal number of conductive strips in a layer for each node balances the coupling of each node to the substrate, which isdesirable in some applications, but undesirable in others, such as switching applications where it is desirable to have less coupling at one node. In order to reduce coupling to the substrate, a thick layer of silicon dioxide is used between the substrate and the first layer of conductive strips. This may be difficult to integrate in a standard CMOS fabrication sequence, and might require additional steps to be added to the standard process flow. The overlapping parallel conductive strips are connected at their ends using buss strips that consume additional surface area Another approach to providing an integrated capacitor is to have conductive strips in a layer connected to alternate nodes of the capacitor with overlapping conductive strips connected to the same node. This forms essentially a curtain of conductive strips and interconnecting vias connected to the first node of the capacitor with adjacent curtains of conductive strips and interconnecting vias connected to the second node. Overlapping conductive strips connected to the same node avoids the lost surface area associated with buss strips; however, inter-layer capacitance is reduced because the upper strip is connected to the same node as the lower strip. 
This effect is somewhat obviated because, as critical dimensions shrink, inter-strip capacitance becomes more dominant than inter-layer capacitance. In other words, the dielectric layer separation between successive metal layers becomes increasingly greater than the dielectric separation between conductive strips with decreasing critical dimension. It is generally desirable that integrated capacitors have high specific capacitance; however, manufacturability and quality factor ("Q factor") is also a concern in many instances. One manufacturability concern is controlling the final capacitance value of an integrated capacitor, both within a large IC, across a wafer, and lot-to-lot. Thus, integrated capacitors manufacturable to provide a consistent capacitance value are desired. It is further generally desired that integrated capacitors have high capacitance per unit area, low loss (resistance), and low self-inductance, which improves high-frequency applications by increasing self- resonant frequency and the quality of capacitor circuits. In some applications, it is further desirable to shield integrated capacitors from electrical noise.SUMMARY A capacitor in an integrated circuit ("IC") has a first plurality of conductive crosses formed in a layer of the IC electrically connected to and forming a portion of a first node of the capacitor and a second plurality of conductive crosses formed in the metal layer of the IC. The conductive crosses in the second plurality of conductive crosses are electrically connected to and form a portion of a second node of the capacitor and capacitively couple to the first node. BRIEF DESCRIPTION OF THE DRAWINGS Accompanying drawing(s) show exemplary embodiment(s) in accordance with one or more aspects of the invention; however, the accompanying drawing(s) should not be taken to limit the invention to the embodiment(s) shown, but are for explanation and understanding only. FIG. 1A is plan view of a layer of an integrated capacitor with a repeating pattern of overlapping crosses according to an embodiment. FIG. 1 B is a cross section of the layer of FIG. 1A. FIG. 2A is a plan view of an interconnection layer according to an embodiment. FIG. 2B is a cross section of the layer of FIG. 2A between layers in accordance with FIG. 1A FIG. 2C is a plan view of the layer of FIG. 2A superimposed over a layer in accordance with FIG. 1A. FIG. 3A is a plan view of a layer of an integrated capacitor with an array of crosses having intra-layer interconnects according to another embodiment. FIG. 3B is a side view of an integrated capacitor incorporating layers in accordance with FIG. 3A FIG. 4 is a plan view of a layer of an integrated capacitor with an array of crosses and H-elements having intra-layer interconnects according to another embodiment. FIG. 5 is a plan view of an FPGA incorporating an integrated capacitor according to an embodiment.DETAILED DESCRIPTION Complex ICs, such as programmable logic devices, often have several patterned metal layers separated by layers of dielectric material formed over a semiconductor substrate that are used for wiring connections and other functions. Some embodiments of the invention are adaptable to existing CMOS process sequences by using masks that form the desired patterns in the appropriate metal layers and vias through the inter-metal dielectric ("IMD") layers or inter-layer dielectric ("ILD"). The vias are formed using any of several known techniques, such as contact plug, damascene, or dual damascene techniques. 
Similarly, the conductive strips are formed using any of several known techniques, such as thin-film metal etch, thin-film metal lift-off, damascene, and dual damascene techniques. In some embodiments, one of the conductive layers is a polysilicon or suicide layer. In a further embodiment, a conductive well in the semiconductor substrate forms a portion of a capacitor plate or a shield. Integrated capacitors are used in a variety of applications. While high specific capacitance is generally desirable to reduce the surface area of the IC devoted to the integrated capacitor, the resultant capacitance value is also very important in many applications, such as tuning applications. In other words, the capacitance value across an IC chip, across a wafer, and lot-to-lot is important enough to sacrifice specific capacitance in some applications. Integrated capacitors that rely primarily on intra-layer (lateral) capacitance show relatively low variance compared to integrated capacitors that rely heavily on inter-layer (vertical) capacitance because the dimensional accuracy is more controllable within a layer than from layer-to-layer. The terms "top" node and "bottom" node do not necessarily relate to the physical orientation of the nodes relative to the IC or other structure, but are used as terms of convenience. In some circuit applications, the top node of a capacitor indicates the node that is connected to a high-impedance or high-gain port of an amplifier or other device. In a system-on-chip ("SoC"), the accuracy on an analog-to-digital converter ("ADC") is dependent on the ratio of the parasitic capacitance at the top node (Ctop) to all other nodes except the bottom node and the capacitance (Csιg) that is the useful floating signal capacitance between both nodes. It is desirable to shield the top plate from ground currentsor voltage supply fluctuations so that CtOp remains low. Note that a capacitor is generally thought of as a two terminal device, and the "top" and "bottom" nodes as described herein generally correspond to these two terminals of the capacitor. Thus, the structures described below may be thought of as connecting (e.g., electrically) to one or the other node, or forming portions of a node. A node is not separate from the capacitive structures connected to it, but those structures may form portions of a node. FIG. 1A is a plan view of a layer of an integrated capacitor 100 with a repeating pattern of overlapping crosses according to an embodiment. Conductive (e.g., metal, polysilicon, or suicide) crosses of one polarity (i.e., connected to a first node of the integrated capacitor and shown with stippling) 102, 104, 106 alternate along a shallow diagonal with crosses of a second polarity 108, 110 (shown without stippling). If a section is taken parallel to an edge, such as along section line A-A, the cross section of the conductive crosses alternate. While the illustrated crosses are symmetrical (i.e., each of the vertical members of the cross are essentially the same length as each of the horizontal members), alternative embodiments incorporate crosses that are not symmetrical, including embodiments where one of the horizontal and/or vertical members is longer than the other. The layer includes a perimeter shield 112 that surrounds the conductive elements (conductors) 106, 114 (crosses and partial crosses) of the opposite polarity. 
In a particular embodiment, the perimeter shield and associated crosses and partial crosses are connected to the bottom node of the integrated capacitor and the conductive elements of the opposite polarity are connected to the top node of the integrated capacitor. Interior crosses of each polarity are electrically isolated from each other within the layer by dielectric material, such as silicon dioxide. Electrical connection is made to the interior crosses using vias from a layer above or below the layer illustrated in FIG. 1 A (see, e.g., FIGs. 2A-2C), such as vias formed using a dual damascene process, extending from the metal traces in the metal layer illustrated in FIG. 1 A to a lower layer, or from an upper layer down to the metal traces in the layer of FIG. 1 A. Bringing electrical connections to the interior crosses from metal layers above or below the layer 100 allows the crosses to be defined at or near the minimum (critical) dimension. In other words, the crosses can be made very small and on verysmall spacings to optimize lateral capacitance between the conductive elements of the top node and the conductive elements of the bottom node, achieving high specific capacitance. In an alternative embodiment, the crosses are not made at the minimum spacing and feature size, allowing for alternative interconnection techniques. In conventional integrated capacitors using long filament conductors, the maximum length of a metal trace (filament) is restricted by its width. In other words, a filament having the minimum width has an associated maximum length. If a longer filament is desired, the width is increased to maintain process reliability. Increasing width decreases the number of filaments that can be defined across a given layer, which reduces the lateral filament-to-filament capacitance in that layer. Using an array of crosses as shown in FIG. 1 A or alternative pattern of crosses (see, e.g., FIG. 4) allows minimum metal line width and minimum spacing between metal features to be maintained across a large area. This provides enhanced lateral capacitance per unit area compared to a conventional filament-type layer in which the filaments have to be widened to maintain design and fabrication rules. Another issue that can arise with filament-type layers is aliasing during photolithography. Aliasing occurs as a result of interference when closely spaced lines are imaged. Arrays of conductive crosses or other conductive elements do not develop the aliasing associated with long, closely spaced filaments. In one embodiment, a layer above or below the layer of FIG. 1 A overlaps with essentially the same pattern. In an alternative embodiment, a layer having essentially the same pattern partially overlaps the layer of FIG. 1A. In yet another embodiment, a layer having a different pattern (see, e.g., FIG. 3) overlaps the layer of FIG. 1A. Conductive vias electrically connect the conductive elements of a first node conductive matrix in the first layer to the conductive elements of the first node conductive matrix in the other layer, and other conductive vias electrically connect the conductive elements of the second node conductive matrix in the first layer to the conductive elements of the second node conductive matrix in the second layer. 
A conductive matrix of a node is essentially the conductive elements that are electrically connected to the node that form a three-dimensional conductive matrix in patterned metal layers.The top and bottom node conductors are formed in dielectric material, such as deposited silicon dioxide or other dielectric materials well known in the art of IC manufacturing. In a particular embodiment, trenches are formed in the dielectric material and then the trenches are filled with metal to form metal traces. To maximize lateral capacitance, the trenches are preferably deep and closely spaced. In a particular embodiment, the metal traces are deeper than they are wide, which promotes lateral capacitance and close-packing for high specific capacitance. In an exemplary embodiment, the metal traces are manufactured to have a minimum metal line width allowed in the manufacturing technology node process for the metal layer in which the traces are formed, and have the minimum metal trace spacing (i.e., dielectric sidewall thickness) allowed. In another embodiment, both the metal trace width and the metal trace spacing are typically about 10% over the minimum allowable values for the metal layer, which may provide more reliable manufacturability. An integrated capacitor that develops a short circuit between the nodes is usually fatal to the operation of the circuit and possibly to the entire IC. Thus, in some embodiments, integrated capacitors are designed to higher manufacturing and reliability standards at the sacrifice of maximum specific capacitance (e.g., manufacturing integrated capacitors at the minimum metal line width for each layer). FIG. 1 B is a cross section 120 of the layer of FIG. 1 A taken along section line A-A. Bottom node perimeter shield sections 122, 124 at each end of the layer form a conductive perimeter that isolate interior conductors of the top node 126, 128, 130, 132 from electrical noise or from the top node conductors capacitively coupling with other nodes in the layer. The top node conductors 126, 128, 130, 132 alternate with conductors of the bottom node 134, 136, 138, 139. For purposes of convenient discussion only, the crosses will be described as having two vertical members extending up and down from the center of the cross, and two horizontal members extending right and left. A cross section through both horizontal members and the center (e.g., 134, 130) includes the length of each horizontal member and the width of a vertical member. For purposes of discussion, a cross section along the entire width of a cross will be referred to as a "full cross" section. The arrangement of the array of conductive crosses in the layer of FIG. 1A result in a full cross section 134 of a first polarity(e.g., bottom node) being followed by a first vertical member cross section 128 of a second polarity (e.g., top node), a second vertical member cross section 136 of the first polarity, and a second full cross section 130 of the second polarity. Referring to the array of crosses in FIG. 1 A, it is seen that generally each member (e.g., horizontal member 140) of an interior cross overlaps a portion of a parallel member 142 of an adjacent cross of opposite polarity, overlaps an end 144 of a perpendicular member of a superior or inferior member of another adjacent cross of opposite polarity, and end-couples to a perpendicular member 146 of the first adjacent cross. Thus, the member 140 laterally couples to conductive elements of the opposite node on three sides. FIG. 
1A is not drawn to scale, and dimensions are exaggerated for clarity of illustration. In some physical devices according to embodiments of FIG. 1 A, the inter-cross spacings are relatively smaller (i.e., the crosses are very close together), and the lateral coupling between crosses is very high from the high fill-factor of the layer. As the separation between crosses shrinks, each interior cross is essentially surrounded by members of other crosses of the opposite polarity. In a layer having minimum or near-minimum line widths and spacings, a high fill factor and high capacitance per unit area is achieved. FIG. 2A is a plan view of an interconnection layer 200 according to an embodiment. The layer 200 is suitable for use in conjunction with layers above or below the layer 200 generally in accordance with FIG. 1 A or other layers according to alternative embodiments that have electrically isolated conductive node elements in the layer. The layer 200 includes a top node interconnector conductor 202 and a bottom node interconnector conductor 204 formed in a metal layer. Conductive vias extending from the top node interconnector conductor 202 to the top node crosses and partial crosses in a metal layer below, or from the top node crosses and partial crosses in a metal layer above, to the top node interconnector conductor 202, electrically interconnects the top node elements of the integrated capacitor to form a top node conductive matrix (see, e.g., FIG. 2B). The top node interconnector conductor 202 includes a number of staggered interconnect traces 206, 208 that trend in a slanted fashion across the layer so as to interconnect conductive crosses (see, e.g., FIG. 1A and FIG. 2C) in an upper or lower metal layer. Each staggered interconnect trace has wider sections alternating withnarrower sections. The wider sections offset the staggered trace in the X direction about one half of a full cross section, and the narrower sections drop the staggered trace in the Y direction. In a particular embodiment, the width of the wider sections is increased to bring adjacent traces close together, which shortens the narrower sections until the staggered trace becomes essentially a series of truncated diamond shapes. The staggered interconnect traces 206, 212 capacitively couple across a gap 210 that is typically filled with dielectric material, as described above for the layer 100 in FIG. 1A, providing intralayer capacitance and adding to the specific capacitance of the integrated capacitor. The wider sections enhance interlayer capacitance, as explained below in reference to FIG. 2B, and also enhance intralayer capacitance in the interconnector layer 200 by bringing the staggered traces of one node close to the staggered traces of the opposite node. In a particular embodiment, the staggered traces are defined to at least partially overlay and electrically connect to a series of conductive crosses having the same polarity, and to also at least partially overlay and capacitively couple to a series of conductive crosses having the opposite polarity. In a particular embodiment, this separation between traces is at or near the minimum spacing specification for the metal layer in which the interconnector layer is patterned, promoting intralayer capacitance in the interconnector layer. 
Alternatively, an interconnection layer has straight-sided traces that slant along the angle of crosses of a polarity with electrical connections being made to the conductive crosses below, however, staggered traces increase the perimeter length of the trace compared to a straight-sided trace, providing increased lateral capacitance between traces in the interconnection layer. FIG. 2B is a cross section of the layer of FIG. 2A between layers in accordance with FIG. 1 A. A first layer of alternating crosses generally in accordance with the techniques of FIG. 1A is fabricated in a first metal layer M1 , an interconnector layer in accordance with FIG. 2A is fabricated in a second metal layer M2, and a second layer of alternating crosses in accordance with FIG. 1A is fabricated in a third metal layer M3. In layers M1 and M3, the sections of conductive elements alternate between nodes (see, FIG. 1 B). In the interconnector layer M2, metal interconnect trace 220, which is electrically connected to a first node of the integrated capacitor, overlaps metal elements222 and 224, which are in layers M1 and M3 and which are connected to the second node of the integrated capacitor, providing interlayer capacitance 225, 227. The metal element 220 is a portion of a staggered trace that has a width greater than the width of vertical members 226, 228 of the same polarity to which it electrically connects through vias 230, 232. FIG. 2C is a plan view of the layer of FIG. 2A superimposed over a layer in accordance with FIG. 1A. The staggered trace 212 connects to conductive crosses 240, 242 through vias 244, 246 to form a bottom node conductive matrix, and the staggered trace 206 similarly connects conductive crosses 248, 250 to form a top node conductive matrix. In a further embodiment, a second layer of otherwise isolated crosses is superimposed on the interconnect layer to produce top and bottom node conductive matrices essentially in accordance with FIG. 2B. The staggered traces are sufficiently wide to produce inter-layer capacitance with conductive elements of the opposite node. For example, a wide portion of staggered trace 246 overlaps a portion 252 of cross 248. FIG. 3A is a plan view of a layer 300 of an integrated capacitor with an array of crosses having intra-layer interconnects 302, 304 according to another embodiment. The patterned layer 300 has an array of conductive crosses, some of which are interconnected to the bottom node, while the others are interconnected to the top node within the layer. The patterned layer 300 is useful in several embodiments of integrated capacitors. In some embodiments, the patterned layer 300 is used above or below a layer in accordance with FIG. 1A, wherein conductive vias electrically connect the isolated crosses in one layer to the interconnected crosses of patterned layer 300. In such embodiments, the isolated crosses are larger than minimum dimension, although in some embodiments they are larger than the interconnected crosses to bring the sidewalls of the conductive isolated crosses closer together. Using conductive crosses allows a designer to use the minimum line width for that metal layer, as the vertical and horizontal legs of the crosses are relatively short. Typically, the minimum line width allowed for a feature in a metal layer depends in part on the length of the line. Long conductive traces have a wider minimum width to avoid a break in the trace. In other embodiments, multiple layers in accordance with FIG. 
3A are stacked with alternating layers having the opposite polarity, in other words, a conductive cross in the Nth metal layer has the opposite polarity froman overlying or underlying cross in the N+1 or N-1 metal layer (see FIG. 3B). Diagonal interconnects 302, 304 interconnect crosses 306, 308 and partial crosses to buss bars 310, 312. The integrated capacitor layer includes optional shield bars 314, 316. The shield bars 314, 316 and bottom node buss bars 310, 318 essentially surround the conductive elements of the top node in the layer 300, including the top node buss bars 312, 320, limiting capacitive coupling. The first top node buss bar 320 extends along a first edge of the layer 300 and the second top node buss bar 312 extends from the first top node buss bar 320 along a first perpendicular edge of the layer. Similarly, the first bottom node buss bar 310 extends along a second edge of the layer and the second bottom node buss bar 318 extends from the first bottom node buss bar 310 along a second perpendicular edge of the layer. FIG. 3B is a side view of an integrated capacitor 330 incorporating layers in accordance with FIG. 3A formed in metal layers M1 , M2, M3. The outer elements 332, 334, 336 are connected to the bottom node of the integrated capacitor within the metal layers M3, M2, M1 of the integrated capacitor and are optionally connected layer-to-layer with conductive vias 338, 340. The outer elements are a bottom node buss bar or shield bar, for example. Conductive elements T1 , T2, T3, T4 are connected to the top node and alternate with conductive elements B1 , B2, B3, B4, which are connected to the bottom node. The conductive elements in the M2 layer T5, T6, 17, T8 alternate with conductive elements B5, B6, B7, B8 and are of the opposite polarity from the corresponding elements in the M3 layer, providing interlayer capacitance. Similarly, conductive elements B9, B10, B11 , B12 alternate with conductive elements T9, T10, T11 , T12 in M1 and are of the opposite polarity from the overlying conductive elements, lntralayer connections (see FIG. 3A, ref. nums. 302, 304) connect interior conductive elements of each node within the layers M1 , M2, M3, avoiding the need for conductive vias between metal layers to connect conductive elements of the node matrices together (compare, FIG. 2B). The integrated capacitor optionally includes a first bottom node shield plate 342 formed in a polysilicon or suicide ("poly") layer, and a second bottom node shield plate 344 formed in the M4 layer. The first and second bottom node shield plates, in conjunction with the outer bottom node elements 332, 334, 336 and vias 338, 340 form essentially a Faraday cage around the top node conductivematrix, shielding the top node from coupling to other nodes (i.e., other than the bottom node) in the IC. Additionally shielding, such as a ground shield plate in an M5 layer (not shown), ground shield matrix, or power supply (e.g., VDD) shield matrix is optionally included to shield or essentially surround the integrated capacitor. FIG. 4 is a plan view of a layer 400 of an integrated capacitor with an array of crosses and H-elements having intra-layer interconnects according to another embodiment. A bottom node conductor 402 includes H-elements 404 (i.e., elements shaped like an "H") interconnected to cross elements 406 (i.e., elements shaped like a "+") along a diagonal using interconnects 408. The patterned layer 400 has rows of H-elements alternating with rows of cross elements. 
In the rows of H-elements, H-elements connected to the bottom node alternate with H-elements connected to the top node. Similarly, in the rows of cross elements, cross elements connected to the top node alternate with cross elements connected to the bottom node. Vertical conductive members of the cross elements overlap with vertical conductive members of the H-elements to provide lateral coupling between cross elements and H-elements of opposite polarity. The array of conductive elements provides good fill density (intra-layer capacitance) and the repetitive nature of the elements avoids long runs of metal traces that might be restricted to minimum widths or cause aliasing during photolithography, as discussed above in reference to FIG. 1A. Bottom node buss bars 410, 412 extending along perpendicular edges are provide electrical connection to the interior cross elements, H-elements, and partial elements of the bottom node conductor. Top node buss bars 414, 416 extending along opposite perpendicular edges similarly provide electrical connection to the interior cross elements, H-elements, and partial elements of the top node conductor 418. In a particular embodiment, layers according to FIG. 4 are stacked with each layer having the polarity of the conductive elements reversed. The interconnects running diagonally from the buss bar to a cross element and then to alternating H-elements and cross elements electrically connect the conductive elements in the layer to the desired node. If another layer in accordance with FIG. 4 is formed and the polarities of the buss bars reversed, the conductive elements in opposing layers provide inter-layer (vertical) capacitance.Note that the types of and number of layers described are merely examples, and in some embodiments other suitable layers may be used, and any number of layers may be used. For example, the layers used may depend on the types and numbers of layers that are available in the manufacturing process, and other arrangements will be apparent to those of skill in the art. In general, any suitable layer, and an arbitrary number of layers may be used in accordance with embodiments of the present invention. FIG. 5 is a plan view of an FPGA 500 semiconductor device incorporating an integrated capacitor according to an embodiment. The FPGA 500 includes CMOS portions in several of the functional blocks, such as in RAM and logic, and is fabricated using a CMOS fabrication process. One or more integrated capacitors 555 according to one or more embodiments of the invention are incorporated in any of several functional blocks of the FPGA, such as a clock circuit 505, a multi-gigabit transceivers 501 , or other functional block; within many functional blocks; or within a physical section or segment of the FPGA 500. Integrated capacitors 555 are particularly desirable in applications where one or both terminals of the capacitor are switched, and embodiments including top node shielding are further desirable in applications wherein the top node is connected to or switched to a high-impedance or high-gain node of a circuit in the FPGA 500. Capacitors are generally useful in a wide variety of integrated circuits and in a wide variety of applications. For instance, one or more capacitors may be useful for a switched capacitor network, such as in an analog- to-digital converter, or as a decoupling or filtering capacitor for AC signaling (e.g., in an MGT). 
In general, the capacitor structure described herein may be useful in any application requiring capacitance. The FPGA architecture includes a large number of different programmable tiles including multi-gigabit transceivers (MGTs 501 ), configurable logic blocks (CLBs 502), random access memory blocks (BRAMs 503), input/output blocks (lOBs 504), configuration and clocking logic (CONFIG/CLOCKS 505), digital signal processing blocks (DSPs 506), specialized input/output blocks (I/O 507) (e.g., configuration ports and clock ports), and other programmable logic 508 such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include dedicated processor blocks (PROC 510).In some FPGAs, each programmable tile includes a programmable interconnect element (INT 511 ) having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. The programmable interconnect element (INT 511 ) also includes the connections to and from the programmable logic element within the same tile, as shown by the examples included at the top of FIG. 5. For example, a CLB 502 can include a configurable logic element (CLE 512) that can be programmed to implement user logic plus a single programmable interconnect element (INT 511 ). A BRAM 503 can include a BRAM logic element (BRL 513) in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured embodiment, a BRAM tile has the same height as four CLBs, but other numbers (e.g., five) can also be used. A DSP tile 506 can include a DSP logic element (DSPL 514) in addition to an appropriate number of programmable interconnect elements. An IOB 504 can include, for example, two instances of an input/output logic element (IOL 515) in addition to one instance of the programmable interconnect element (INT 511 ). As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 515 are manufactured using metal layered above the various illustrated logic blocks, and typically are not confined to the area of the input/output logic element 515. In the pictured embodiment, a columnar area near the center of the die (shown shaded in FIG. 5) is used for configuration, clock, and other control logic. Some FPGAs utilizing the architecture illustrated in FIG. 5 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, the processor block PROC 510 shown in FIG. 5 spans several columns of CLBs and BRAMs. Note that FIG. 5 is intended to illustrate only an exemplary FPGA architecture. The numbers of logic blocks in a column, the relative widths of the columns, the number and order of columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logicimplementations included at the top of FIG. 5 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic. 
While the foregoing describes exemplary embodiment(s) in accordance with one or more aspects of the present invention, other and further embodiment(s) in accordance with the one or more aspects of the present invention may be devised without departing from the scope thereof, which is determined by the claim(s) that follow and equivalents thereof. Claim(s) listing steps do not imply any order of the steps. Trademarks are the property of their respective owners. |
This invention relates to the field of semiconductor integrated circuits and, particularly to stand-alone and embedded memory chips fabricated on Silicon-on-Insulator (SOI) substrates and devices. Partially depleted (PD) and fully depleted (FD) devices are utilized on the same chip. The invention is a process flow utilizing fully depleted SOI devices in one area of the chip and partially depleted SOI devices in selected other areas of the chip. The choice of fully depleted or partially depleted is solely determined by the circuit application in that specific area of the chip. The invention is able to be utilized in accordance with DRAM processing, and especially embedded DRAMs with their large proportion of associated logic circuitry. |
What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A method of forming a semiconductor device, comprising:thinning an upper silicon layer in at least one region of a silicon-on-insulator substrate by removing an oxidized portion of said upper silicon layer at said region; and forming at least one partially depleted region and at least one fully depleted region in said upper silicon layer, said at least one fully depleted region being formed at said region where said oxidized portion of said upper silicon layer has been removed. 2. The method of claim 1, wherein said portion of upper silicon layer is removed by:forming a non-oxidizable layer over said upper silicon layer; removing at least one portion of said non-oxidizable layer to expose a portion of said upper silicon layer; oxidizing said exposed portion of said upper silicon layer; and removing said oxidized portion of said upper silicon layer. 3. The method of claim 2, wherein said act of forming at least one partially depleted region and at least one fully depleted region comprises:implanting said upper silicon layer to form said at least one fully depleted region in said upper silicon layer where said oxidized portion was removed and to form said at least one partially depleted region where said upper silicon layer was not exposed. 4. The method of maim 3, wherein the acts of forming and removing a portion of said non-oxidizable layer comprise:forming an insulating layer over said upper silicon layer; forming a nitride layer over said insulating layer; and removing a portion of said nitride layer and said underlying insulating layer to expose said portion of said upper silicon layer. 5. The method of claim 4, further comprising forming at least one access transistor of a memory array over said fully depleted region and forming at least one periphery device transistor over said partially depleted region.6. The method of claim 5, wherein said at least one access transistor and said at least one periphery device transistor are part of a memory device circuit.7. The method of claim 4, wherein said insulating layer is a stress absorbing layer.8. The method of claim 7, wherein said stress absorbing layer is an oxide layer.9. The method of claim 3, wherein said implanting is performed using Boron ions.10. The method of claim 8, wherein said oxide layer has a thickness of about 90 Angstroms and said nitride layer has a thickness in the range of about 100-2000 Angstroms.11. The method of claim 2, wherein said upper silicon layer is oxidized to a thickness of at least 100 nm.12. The method of claim 2, wherein said upper silicon layer has a thickness not greater than 100 nm where said oxidized portion is removed.13. The method of claim 1, further comprising forming at least one access transistor of a memory array over said fully depleted region and forming at least one periphery device transistor over said partially depleted region.14. The method of claim 13, further comprising forming at least one periphery device transistor over said fully depleted region.15. The method of claim 13, wherein said at least one access transistor and said at least one periphery device transistor are part of a memory device circuit.16. The method of claim 1, wherein said provided upper silicon layer has an initial thickness of approximately 200 nm prior to said thinning of said layer.17. The method of claim 1, wherein said fully depleted region of said upper silicon layer is no more than 100 nm thick.18. 
The method of claim 1, wherein said partially depleted region of said upper silicon layer is between about 100 nm and about 200 nm thick.19. A method of forming a semiconductor device, comprising:forming at least one partially depleted low-threshold voltage gate device and at least one fully depleted high-threshold voltage gate device over an upper silicon layer of a silicon-on-insulator substrate, said at least one fully depleted high-threshold gate device being formed over a region of said upper silicon layer that has been thinned by removing an oxidized portion of said upper silicon layer. 20. The method of claim 19, wherein said portion of upper silicon layer is removed by:forming a non-oxidizable layer over said upper silicon layer; removing at least one portion of said non-oxidizable layer to expose a portion of said upper silicon layer; oxidizing said exposed portion of said upper silicon layer; and removing said oxidized portion of said upper silicon layer. 21. The method of claim 20, wherein said act of forming said at least one partially depleted low-threshold voltage gate device and at least one fully depleted high-threshold voltage gate device comprises:implanting said upper silicon layer to form a fully depleted region in said upper silicon layer where said oxidized portion was removed and a partially depleted region where said upper silicon layer was not exposed. 22. The method of claim 21, wherein the acts of forming and removing a portion of said non-oxidizable layer comprise:forming an insulating layer over said upper silicon layer; forming a nitride layer over said insulating layer; and removing a portion of said nitride layer and said underlying insulating layer to expose said portion of said upper silicon layer. 23. The method of claim 22, wherein said high-threshold gate is of an access transistor of a memory array over said fully depleted region and said low-threshold gate is of a periphery device transistor over said partially depleted region.24. The method of claim 23, wherein said access transistor and said periphery device transistor are part of a memory device circuit.25. The method of claim 22, wherein said insulating layer is a stress absorbing layer.26. The method of claim 25, wherein said stress absorbing layer is an oxide layer.27. The method of claim 26, wherein said oxide layer has a thickness of about 90 Angstroms and said nitride layer has a thickness in the range of about 100-2000 Angstroms.28. The method of claim 21, wherein said implanting is performed using Boron ions.29. The method of claim 20, wherein said upper silicon layer is oxidized to a thickness of at least 100 nm.30. The method of claim 19, wherein region of said upper silicon layer under said partially depleted low-threshold voltage gate device is between about 100 nm and about 200 nm thick.31. The method of claim 19, wherein said high-threshold voltage gate device is of an access transistor of a memory array over said fully depleted region and said low-threshold voltage gate device is of a periphery device transistor over said partially depleted region.32. The method of claim 31, wherein said access transistor and said periphery device transistor are part of a memory device.33. The method of claim 19, wherein provided said upper silicon layer has an initial thickness of approximately 200 nm prior to said thinning of said layer.34. 
The method of claim 19, wherein said region of said upper silicon layer under said fully depleted high-threshold voltage gate device is no more than 100 nm thick.35. The method of claim 20, wherein said upper silicon layer has a thickness not greater than 100 nm where said oxidized portion is removed.36. A method of forming a memory device, comprising:forming at least one partially depleted region and at least one fully depleted region in an upper silicon layer of a silicon-on-insulator substrate, said at least one fully depleted region being defined by an oxidized portion of said upper silicon layer that has been thinned by removing said oxidized portion of said upper silicon layer; and forming an access transistor of a memory cell over said fully depleted region of said upper silicon layer and forming a periphery device transistor over said partially depleted region of said upper silicon layer. 37. The method of claim 36, wherein said act of thinning comprises:forming a non-oxidizable layer over said upper silicon layer; removing at least a portion of said non-oxidizable layer to expose a portion of said upper silicon layer; oxidizing said exposed portion of said upper silicon layer; and removing said oxidized portion of said upper silicon layer. 38. The method of claim 37, wherein the act of forming said at least one partially depleted region and at least one fully depleted region comprises:implanting said upper silicon layer to form said at least one partially depleted region in said upper silicon layer where said upper silicon layer was not exposed and said at least one fully depleted region in said upper silicon layer where said oxidized portion was removed. 39. The method of claim 38, wherein the acts of forming and removing a portion of said non-oxidizable layer comprise:forming an insulating layer over said upper silicon layer; forming a nitride layer over said insulating layer; and removing a portion of said nitride layer and said underlying insulating layer to expose said portion of said upper silicon layer. 40. The method of claim 39, wherein said insulating layer is a stress absorbing layer.41. The method of claim 40, wherein said stress absorbing layer is an oxide layer.42. The method of claim 41, wherein said oxide layer has a thickness of about 90 Angstroms and said nitride layer has a thickness in the range of about 100-2000 Angstroms.43. The method of claim 38, wherein implanting is performed using Boron ions.44. The method of claim 37, wherein said upper silicon layer is oxidized to a thickness of at least 100 nm.45. The method of claim 37, wherein said upper silicon layer has a thickness not greater than 100 nm where said oxidized portion is removed.46. The method of claim 36, wherein said upper silicon layer has an initial thickness of approximately 200 nm prior to said thinning of said layer.47. The method of claim 36, wherein said fully depleted region of said upper silicon layer is no more than 100 nm thick.48. The method of claim 36, wherein said partially depleted region of said upper silicon layer is between about 100 nm and about 200 nm thick.49. 
A method of forming a DRAM chip, comprising:forming at least one partially depleted region and at least one fully depleted region in an upper silicon layer of a silicon-on-insulator substrate, said at least one fully depleted region being formed in region of said upper silicon layer that has been thinned by removing a portion of said upper silicon layer prior to forming said fully depleted region; forming an access transistor of a DRAM memory device over said fully depleted region; and forming a periphery device transistor of said DRAM device over said partially depleted region. 50. The method of claim 49, further comprising forming a periphery device transistor of said DRAM device over said fully depleted region.51. The method of claim 49, further comprising:forming a non-oxidizable layer over said upper silicon layer; removing at least a portion of said non-oxidizable layer to expose a portion of said upper silicon layer; oxidizing said exposed portion of said upper silicon layer; and removing said oxidized portion of said upper silicon layer. 52. The method of claim 51, wherein said act of forming said at least one partially depleted region and at least one fully depleted region comprises:implanting said upper silicon layer to form said at least one partially depleted region in said upper silicon layer where said upper silicon layer was not exposed and said at least one fully depleted region in said upper silicon layer where said oxidized portion was removed. 53. The method of claim 52, wherein said acts of forming and removing a portion of said non-oxidizable layer comprise:forming an insulating layer over said upper silicon layer; forming a nitride layer over said insulating layer; and removing a portion of said nitride layer and said underlying insulating layer to expose said portion of said upper silicon layer. 54. The method of claim 53, wherein said insulating layer is a stress absorbing layer.55. The method of claim 54, wherein said stress absorbing layer is an oxide layer.56. The method of claim 55, wherein said oxide layer has a thickness of about 90 Angstroms and said nitride layer has a thickness in the range of about 100-2000 Angstroms.57. The method of claim 52, wherein implanting is performed using Boron ions.58. The method of claim 51, wherein said upper silicon layer is oxidized to a thickness of at least 100 nm.59. The method of claim 51, wherein said upper silicon layer has a thickness not greater than 100 nm where said oxidized portion is removed.60. The method of claim 49, wherein said upper silicon layer has an initial thickness of approximately 200 nm prior to said thinning of said layer.61. The method of claim 49, wherein said fully depleted region of said upper silicon layer is no more than 100 nm thick.62. The method of claim 49, wherein said partially depleted region of said upper silicon layer is between about 100 nm and about 200 nm thick.63. 
A method of forming a semiconductor device, comprising:providing a silicon-on-insulator substrate having an upper silicon layer having a thickness suitable for forming a partially depleted region; forming an oxide layer over said upper silicon layer; forming a nitride layer over said oxide layer; removing at least a portion of said nitride layer and said oxide layer to expose a portion of said upper silicon layer; oxidizing said portion of said upper silicon layer to a thickness of at least 100 nm; removing said oxidized portion of said upper silicon layer to thin said upper silicon layer; after said removing said oxidized portion, subsequently implanting said upper silicon layer with a dopant to simultaneously form at least one partially depleted region in said upper silicon layer where defined by said remaining nitride layer and at least one fully depleted region in said upper silicon layer where said oxidized portion was removed; forming at least one access transistor of a memory cell over said fully depleted region; and forming at least one periphery device transistor over said partially depleted region. 64. The method of claim 63, wherein said implanting is performed using Boron ions.65. The method of claim 63, wherein said oxide layer has a thickness of about 90 Angstroms and said nitride layer has a thickness in the range of about 100-2000 Angstroms.66. The method of claim 63, wherein said semiconductor device includes a DRAM.67. A method of forming a semiconductor device, comprising:selectively oxidizing a silicon layer of a silicon-on-insulator substrate to form an oxide region and a non-oxidized region on the surface of said silicon layer; removing the oxide from said oxide region to thin said silicon layer in said oxide region, thereby forming thinned regions and non-thinned regions of said silicon layer; and placing a dopant in said silicon layer to form a fully depleted region in said thinned region of silicon layer and a partially depleted region in said non-thinned region of silicon layer. 68. A method of forming a semiconductor device, comprising:selectively thinning a silicon layer of a silicon-on-insulator substrate to form a thinned region and a non-thinned region, said thinned region being defined by an oxidized portion of said silicon layer; and subsequent to said thinning, placing a dopant into said silicon layer to form a fully depleted region in said thinned region of silicon layer and a partially depleted region in said non-thinned region of silicon layer. 69. A method of forming a semiconductor device, comprising:processing a silicon layer of a silicon-on-insulator substrate to produce a first and a second region having different thicknesses; and subsequent to said producing said first and second regions, placing a dopant into said silicon layer to form a fully depleted region where said silicon layer has a smaller thickness and a partially depleted region where said silicon layer has a larger thickness. |
FIELD OF THE INVENTIONThis invention relates to the field of semiconductor integrated circuits and, particularly to stand-alone and embedded memory chips fabricated using Silicon-on-Insulator (SOI) substrates having partially depleted (PD) and fully depleted (FD) devices fabricated on the same chip.BACKGROUND OF THE INVENTIONSilicon-on-Insulator (SOI) technology employs a layer of semiconductor material formed over an insulating layer on a supporting bulk wafer. The structure can be formed by different well-known techniques in the art, for example, separation by implanted oxygen (SIMOX), bonding and etching back (BESOI), and zone melting and re-crystallization (ZMR), among others. Typically, the structure comprises a film of monocrystalline silicon formed over a buried layer of silicon oxide, which is formed over a monocrystalline silicon substrate.SOI substrates are being used to manufacture everything from microprocessors to SRAMs. SOI substrates offer increased drive current, lower parasitic capacitance, improved sub-threshold behavior, and improved packing density for integrated circuit processing. These qualities of SOI technology combine to provide distinct speed advantages for devices utilizing such substrates.DRAM memory circuits are currently the most popular type of memory circuits used as the main memory of processor-based systems. Efforts have been made to apply SOI technology to DRAM chips. However, because of the floating body effects present in partially depleted SOI devices, widespread application has been impractical due to the negative impact on access device performance caused by these effects.Floating body effects are caused by a build up of charge in the silicon region under the channel depletion region. This charge build up alters the I/V curve causing "kinks" or sharp irregularities in the current-voltage curve, lowers threshold voltage (Vt), and causes transistor performance to have a history dependence. The body effect can cause serious degradation to SOI transistor performance in certain applications. Due to this degradation, DRAM circuits have largely been limited to being fabricated on fully depleted SOI substrates where the depletion region under the gate extends to the insulating buried oxide (BOX). Despite the discussed drawbacks, in some circumstances the floating body of partially depleted SOI devices may provide certain advantages over fully depleted devices. For example, a partially depleted device may provide higher drive current through the channel region, which will allow for faster operation of the integrated circuit. This characteristic of partially depleted devices is useful in the periphery devices of a DRAM chip because of their need for increased operation speed and drive current.There is a need for a simplified method of forming a memory circuit on a SOI substrate where the transistor devices may be formed over both partially depleted and fully depleted regions so that the advantages of each transistor type, and the advantages of the SOI substrate, may be utilized in combination in a single memory chip. A memory circuit formed by such a method would achieve increased drive current by incorporating the partially depleted devices as discussed above, thereby allowing the IC to run faster and more efficiently. It would also achieve the advantages of the improved device behavior and refresh characteristics available to fully depleted SOI devices. 
It would be optimal if such a method for forming both fully depleted and partially depleted devices on a single SOI substrate could utilize the currently available techniques for fabricating a semiconductor device and limit the necessary steps for forming such a device to as few as possible, thereby saving time and costs.SUMMARY OF THE INVENTIONThis invention provides a simple method for forming both partially depleted (PD) and fully depleted (FD) devices on a single memory chip. By utilizing the process of this invention, memory chips may be obtained that utilize both the device behavior advantages of fully depleted devices and the drive and speed advantages of partially depleted devices. Additionally, by utilizing the process of this invention, Silicon-on-Insulator (SOI) substrates may be used so as to obtain the performance advantages of such a dual-depletion region substrate.Additionally, this invention utilizes common process steps used in current DRAM manufacturing. The dual-depletion regions may be formed simultaneously, thereby reducing the number of steps and required time of processing. Additionally, multiple steps may be performed using a single mask, resulting in a self-aligned process. The process of this invention results in a simple dual-depletion region SOI structure that is less expensive to manufacture.The above-described and other advantages and features of the invention will be more clearly understood from the following detailed description of an exemplary embodiment, which is provided in connection with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 shows connected portions of a SOI semiconductor substrate used in an exemplary embodiment of the invention, having a BOX layer, an upper silicon layer and a pad oxide and nitride mask.FIG. 2 shows the structure depicted in FIG. 1 at a subsequent stage of processing wherein the upper silicon layer has been oxidized.FIG. 3 shows the structure depicted in FIG. 2 at a subsequent stage of processing wherein an oxidized portion of the upper silicon has been removed.FIG. 4 shows the structure depicted in FIG. 3 at a subsequent stage of processing wherein a portion of the silicon layer is implanted by ionization and depleted regions are formed.FIG. 5 shows the structure depicted in FIG. 4 at a subsequent stage of processing wherein the pad oxide and nitride mask are removed.FIG. 6 shows the structure depicted in FIG. 5 wherein integrated circuit devices, e.g., transistors, are formed over the substrate.FIG. 7 depicts a processor system including a semiconductor device formed in accordance with the present invention.DETAIL DESCRIPTION OF THE PREFERRED EMBODIMENTSIn the following detailed description, reference is made to various specific embodiments in which the invention may be practiced. These embodiments are described with sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be employed, and that various structural, logical, and electrical changes may be made without departing from the spirit or scope of the invention.Also, the terms "wafer" and "substrate" are used interchangeably and are to be understood as including Silicon-on-Insulator (SOI) technology, with doped and undoped semiconductors. 
Furthermore, references to a "wafer" or "substrate" in the following description, do not exclude previous processing steps utilized to form regions or junctions in, on, or over the base semiconductor structure or foundation.No particular order is required for the method steps described below, with the exception of those logically requiring the results of prior steps. Accordingly, while many of the steps discussed below are discussed as being performed in an exemplary order, this order may be altered.SOI technology offers many advantages, including those associated with providing full isolation for overlying devices. SOI may offer simpler fabrication processes and sequences compared to circuits fabricated on bulk silicon. SOI provides reduced capacitive coupling between various circuit elements over the entire integrated circuit (IC) chip. It may also reduce the chip size and allow for increased IC density. Minimum device separation is determined only by the limitations of current lithography techniques. SOI also offers increased circuit speed, due to reductions in parasitic capacitance and chip size.This invention provides a simple method for forming both partially depleted (PD) and fully depleted (FD) devices on a single SOI semiconductor chip. For simplicity, the invention will be discussed in the environment of a memory chip, but such a discussion represents only an exemplary embodiment and other types of circuits using both partially depleted and fully depleted devices may be formed with a process of the invention on a single semiconductor chip.By utilizing the process of this invention, memory chips may be obtained that utilize both the device behavior advantages of fully depleted devices and the drive and speed advantages of partially depleted devices. Additionally, by utilizing the process of this invention, a single Silicon-on-Insulator (SOI) substrate may be fabricated, which has the performance advantages of both individual FD SOI and PD SOI substrates. The invention utilizes a process flow whereby FD SOI devices are utilized in one area of the chip and PD SOI devices are utilized in other areas of the chip. The choice of FD or PD is determined by the circuit application in the specific area of the chip.By using the process of the invention, a SOI device having both fully depleted and partially depleted devices may be fabricated having better SOI surface uniformity in terms of flatness, thickness control, and manufacturability, than is found in the prior art. The SOI surface that may be obtained from this invention is flatter and smoother than those available in the prior art, as well as being free from the impurities, defects, or contaminants that could result from current technologies. An example of such improvements is in the lack of residuals (process contaminants) at the interface of the buried oxide and the silicon substrate of the SOI wafer. The reduction of residuals at this interface makes the structure resulting from the invention's process more uniform an consistent, thereby also making it simpler and much cheaper to manufacture.In an exemplary embodiment of the invention, which will be further described below, FD devices are used in a memory array of a memory device to allow for improved access device behavior and improved refresh. PD devices are used in the periphery circuit of the memory device to allow for increased drive current and faster circuit operation. 
This invention may be applied to DRAMs in general, and is particularly pertinent for Embedded-DRAMs because of their large proportion of associated logic circuitry. While the embodiment describes a DRAM flow, this process can also be applied to any memory or other semiconductor chip (e.g., SRAM, Flash, etc.) where the differing modes of SOI operation (FD, PD, and NFD) would be usefull in different regions of the circuit.Referring to the drawings, where like elements are designated by like reference numerals, FIGS. 1-6 illustrate a method for the fabrication of a memory chip having a SOI substrate 14 with fully depleted regions 24 and partially depleted regions 26, resulting in devices being formed over the appropriate regions so as to obtain the advantages of each region on a single chip (FIG. 6).Referring to FIG. 1, an upper silicon layer is formed over a buried oxide layer (BOX) 12. The BOX 12 is formed over the original silicon substrate 10. The upper silicon layer 14 is initially chosen to have a thickness designed to be partially depleted. A thickness of approximately 200 nm is appropriate for the requirements of current technology, but may be changed as the technology evolves.A pad oxide 16 is deposited over the entire upper silicon layer 14. This may be accomplished by a dry oxidation/TLC oxidation process to form a layer of oxide across the upper silicon layer 14 surface at 957[deg.] C. Oxygen (O2) introduced into an atmospheric furnace reacts with the silicon wafer to produce a layer of silicon dioxide (SiO2). The purpose of the pad oxide layer 16 is to cushion the transition of stress between the silicon layer and the nitride mask 18 to be deposited next, and to also act as an etch stop. Therefore, a thickness of the pad oxide 16 that will avoid the formation of dislocations in the overlying nitride layer (to be deposited next) or in the underlying silicon layer may be used, for example, about 90 Angstroms.A nitride mask layer 18 is formed over the entire pad oxide 16. This may be accomplished by any method known in the art, one method being to deposit a layer of nitride (or Si3N4) on the pad oxide 16 surface using dichlorosilane (SiCl2H2) and ammonia (NH3) in a low pressure chemical vapor deposition (LPCVD) furnace at 765[deg.] C. This nitride mask layer 18 is a sacrificial material layer which will be removed in later processing steps. Where this layer remains over selected areas of the substrate, it will prevent the oxidation of the underlying upper silicon layer 14. This nitride mask 18 is effective because oxygen and water vapor diffuse very slowly through it, thus preventing oxidation of the silicon surface below the mask. The thickness of the nitride layer/mask 18 can vary, but a range of between about 100-2000 Angstroms is preferred, with the thinner range being more useful because it inflicts less stress on the underlying silicon.The nitride mask layer 18 and pad oxide layer 16 are patterned by photolithographic techniques using an array Vt mask, which is open in the array-area and blocked in the periphery-area for various implant levels. The purpose of utilizing the array mask is to define the memory array area over the wafer. It is this area where the nitride mask 18 and the pad oxide 16 must later be removed to expose the underlying silicon layer 14. A resist pattern is normally used to protect the areas desired not to be removed (here the periphery areas). 
This includes the portions of the nitride mask 18 and pad oxide 16 needed to protect specific regions of the upper silicon layer 14 during the next step of oxidation (see FIG. 2). These areas protected by the nitride mask 18 and the pad oxide 16 will become the partially depleted regions formed later (see FIG. 4).The nitride mask 18 and pad oxide 16 are next selectively etched down to the upper silicon layer 14, resulting in selected portions of the upper silicon layer 14 being exposed, leaving the structure shown in FIG. 1. In this etching step, the nitride layer 18 may first be removed down to the pad oxide 16 by a plasma etch using Cl2 and NH3. The nitride may also be etched using a hot phosphoric acid solution. An over-etch may be used to ensure that all nitride is removed out of the desired areas. The pad oxide 16 is next removed by a HF etch where over-etching will ensure removal of all unwanted pad oxide. Alternatively, a general dry-etch may be used to etch the entire stack of nitride layer 18 and pad oxide 16 to form the structure shown in FIG. 1. All such removal methods are well known. The resist used to pattern the nitride mask 18 and pad oxide 16 is removed next. After etching, the nitride mask 18 and underlying pad oxide 16 remain over portions of the wafer over which periphery devices will later be formed (see FIG. 6).Now referring to FIG. 2, the exposed areas of the upper silicon layer 14 are next oxidized using a process similar to LOCOS. This oxidation may be performed in an oxygen or water vapor ambient, at temperatures dependant upon the desired oxidation rate; however, a dry-oxidation process is more controllable. To grow the thick layer of oxide 20, a steam oxidation process may also be used, by which oxygen and hydrogen are pumped into an atmospheric furnace at about 1000[deg.] C. for the desired time to produce this oxidized layer 20. No matter what process is utilized, the oxide will grow where there is no nitride mask 18. The silicon oxide layer 20 may be formed of, for example, silicon dioxide, up to a thickness of about 100 nm or greater. However, as well known in the art, the thickness of the silicon oxide layer 20 may vary greatly, depending on the processing requirements and desired device characteristics, and most importantly depending on the ultimate desired thickness of the upper silicon layer 14 desired in the array area. The invention is not limited to silicon dioxide as the silicon oxide layer, other oxide types as known in the art may be utilized as well. Any known method of forming a SOI substrate may be used for this invention and any derivative oxide thereof is acceptable.As shown in FIG. 3, the grown oxide 20 is striped using an oxide etch. This oxide layer 20 may be etched using a solution of ammonium fluoride and hydrofluoric acid, preferably by an HF dip. This step thins the array region of the upper silicon layer 14 to a fully depleted thickness, preferably 100 nm or less. It is this thinning of the upper silicon layer that differentiates the fully depleted regions from the partially depleted regions. It is at this point in the processing that both fully and partially depleted thickness regions are formed on the substrate.At this point in the processing, as shown in FIG. 4, Array threshold voltage ion (Vt) implants 22 are produced using Boron ions. This sets the threshold voltage of the access devices. 
This is a self-aligning process as the same mask used in the prior steps to thin the upper silicon layer 14 are now used to set the threshold voltage. The regions where the upper silicon layer 14 was oxidized and subsequently thinned by removal of the oxidized silicon 20, are now implanted with the Boron ions for form fully depleted regions 24 down to the BOX 12 layer. The regions under the nitride mask 18 and pad oxide 16 are thicker and are now partially depleted regions 26. The fully depleted regions will have a high-threshold voltage associated with any subsequently formed gate and the partially depleted regions will have a low-threshold voltage associated with any subsequently formed gate (see FIG. 6). As show in FIG. 4, both fully depleted and partially depleted regions are on a single chip, selected by the location of the nitride mask 18. In the fully depleted regions 24, the transistor depletion region penetrates through the entire remaining upper silicon layer 14 to the BOX layer 12, but in the partially depleted regions 26, the transistor depletion regions do not fully penetrate because of the greater thickness of the upper silicon layer 14 and the remaining nitride mask 18, leaving an undoped float region.Note that in standard DRAM processing, an Array Vt implant is typically performed later in the processing. Because of the implantation at this early stage, no additional later implant or associated photo masking step is required. Also note that for fully depleted SOI regions, the Vt implant is equivalent to well implants. Because the ultimate SOI devices will be isolated from one another, there will be no common "well" that is shared by all. Thus, for SOI, a well implant is equivalent to a Vt, implant.Referring to FIG. 5, the next step in processing is to strip the nitride mask 18 and the pad oxide 16. This step may be performed by using hot phosphoric acid with water to strip the nitride mask 18 and using a HF dip to etch away the pad oxide 16. This HF dip may also remove any remaining oxide left from the preceding step.As shown in FIG. 6, after the remaining SOI upper silicon layer 14 is exposed, conventional DRAM processing may continue over this substrate layer. Such processing includes the forming of memory cells over the fully depleted regions for improved access device behavior and improved refresh, among other advantages. Thus, over the fully depleted regions of the array, the processing includes the forming of source/drain areas 50, forming gate oxides 52, forming wordline gates 54, forming capacitor plugs 56 and bit line plugs 58, forming capacitors 60, forming bit lines 62, and other conventional processing steps as know in the art, including cell metalization, to form a completed memory cell.Also shown in FIG. 6, conventional IC processing continues over the partially depleted regions 26 as well. Over these partially depleted regions 26 the periphery devices are formed to take advantage of the increased drive current and faster circuit operation properties of partially depleted devices, among other advantages. Thus, over these partially depleted regions, devices such as sense amplifiers, control logic circuits, and address registers, for example, or any other devices which would benefit from increased drive current or faster operation, may be formed as known in the art. 
Such devices incorporate peripheral transistors having peripheral source and drain regions 80, peripheral transistor gates 82 over peripheral gate oxides 86, peripheral metal contacts 84 to the source/drain regions 80 for current input and output, and other circuit elements known in the art.FIG. 7 illustrates a processor system (e.g., a computer system), with which a memory having a memory cell area and a periphery logic area as described above may be used. The processor system comprises a central processing unit (CPU) 102, a memory circuit 104, and an input/output device (I/O) 100. The memory circuit 104 contains a DRAM, or other memory device, including semiconductor devices constructed in accordance with the present invention. Also, the CPU 102 may itself be an integrated processor, which utilizes semiconductor devices constructed in accordance with the present invention, and both the CPU 102 and the memory circuit 104 may be integrated on a single chip, so as to fully utilize the advantages of the invention.In an alternative embodiment, the invention may also be used to form near fully depleted (NFD) devices, another type of partially depleted device. A near fully depleted device is one that operates in either the fully depleted mode or in the partially depleted mode, depending upon the bias conditions of the transistor associated with the depletion region. Though the structure of the SOI substrate in a NFD region is of a partially depleted region26, the device operates as a borderline case between a FD and a PD device.Again, the key to forming a near fully depleted region on the SOI substrate depends upon the thickness of the upper silicon layer 14 when implanted with ions 22, which is between that of a FD 24 and a PD 26 region. Here, the initial thickness (refer to FIG. 1) of the upper silicon layer 14 must be somewhat thinner than if forming the partially depleted regions 26 as described above. This is because the difference between the resulting thickness of the fully depleted 24 and near fully depleted regions is smaller than that between the FD 24 and PD 26 regions described above.Though the thicknesses of the resulting structures may be different, the processing to achieve the NFD regions and the FD regions 24 on the same SOI substrate is the same as described above and illustrated in FIGS. 1 to 5, but must compensate for the desired thinner upper silicon layer 14 as described. After the formation of the NFD and FD 24 regions in the upper silicon layer 14, the processing continues as described above and shown in FIGS. 6 and 7, where periphery devices will be formed over the NFD regions and memory devices will be formed over the fully depleted regions 24.The above description and accompanying drawings are only illustrative of exemplary embodiments, which can achieve the features and advantages of the present invention. It is not intended that the invention be limited to the embodiments shown and described in detail herein. The invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. The invention is only limited by the scope of the following claims. |
The present disclosure provides a graphics processing unit comprising one or more multiprocessors, at least one of the one or more multiprocessors including a register file to store a plurality of different types of operands and a plurality of processing units, including a first set of execution units of a first type to process matrix instructions on a first set of operands stored in a first set of registers of the register file, wherein the first set of operands including one or more 64-bit operands and a second set of execution units of a second type, the second set of execution units being different from the first set of execution units, the second set of execution units to perform general purpose graphics processing unit, GPGPU, instructions on a second set of operands stored in a second set of registers of the register file. |
A graphics processing unit (214, 410-413) comprising one or more multiprocessors (234, 325), at least one of the one or more multiprocessors (234, 325) including:a register file (258, 334A, 334B, 1606) to store a plurality of different types of operandsa plurality of processing units (700(a)-700(n)), including:a first set of execution units (705(a)-705(n)) of a first type to process matrix instructions on a first set of operands stored in a first set of registers of the register file (258, 334A, 334B, 1606),wherein the first set of operands including one or more 64-bit operands; anda second set of execution units (706(a)-706(n)) of a second type, the second set of execution units (706(a)-706(n)) being different from the first set of execution units (705(a)-705(n)),the second set of execution units (706(a)-706(n)) to perform general purpose graphics processing unit, GPGPU, instructions on a second set of operands stored in a second set of registers of the register file (258, 334A, 334B, 1606).The graphics processing unit (214, 410-413) according to claim 1, wherein the second set of execution units (706(a)-706(n)) comprises:a set of floating point units, FPUs, to execute instructions to perform floating point operations,the set of FPUs comprising a first subset of FPUs to perform 32-bit floating point, FP32, operations and a second subset of FPUs to perform 16-bit floating, FP16, operations; anda set of integer arithmetic logic units, ALUs, to execute instructions to perform integer operations.The graphics processing unit (214, 410-413) as in claim 2, wherein the set of FPUs includes first FPUs to perform 32-bit floating point, FP32, operations and second FPUs to perform 16-bit floating point, FP16, operations.The graphics processing unit (214, 410-413) as in claim 1, wherein the first set of execution units (705(a)-705(n)) of the first type is configured to perform an in-place matrix to vector transformation for a first type of operand stored in the register file (258, 334A, 334B, 1606).The graphics processing unit (214, 410-413) as in claim 4, wherein the in-place matrix to vector transformation includes a set of operations having a source and destination within the register file (258, 334A, 334B, 1606).The graphics processing unit (214, 410-413) as in claim 5, wherein the source includes a register address start limit, stride of an array, number of elements, and element size.The graphics processing unit (214, 410-413) as in claim 1, wherein the at least one of the one or more multiprocessors (234, 325) further comprises an instruction cache (252) to store a first instruction associated with the first set of operands and a second instruction associated with the second set of operands.The graphics processing unit (214, 410-413) as in claim 1, wherein the first set of execution units (705(a)-705(n)) of the first type are associated with a first memory channel and the second set of execution units (706(a)-706(n)) of the second type are associated with a second memory channel.The graphics processing unit (214, 410-413) as in claim 1, wherein the one or more multiprocessors (234, 325) have a single instruction multiple thread, SIMT, architecture.The graphics processing unit (214, 410-413) as in claim 1, wherein the first set of execution units (705(a)-705(n)) of the first type includes circuitry to execute instructions to perform matrix operations on the first set of operands in the first set of registers of the register file (258, 334A, 334B, 1606). |
FIELDEmbodiments relate generally to data processing and more particularly to data processing via a general-purpose graphics processing unit.BACKGROUND OF THE DESCRIPTIONCurrent parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data; however, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible throughout the different parts of the graphics pipeline. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for SIMT architectures can be found in Shane Cook, CUDA Programming Chapter 3, pages 37-51 (2013 ).BRIEF DESCRIPTION OF THE DRAWINGSSo that the manner in which the above recited features of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of its scope.Figure 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;Figure 2A-2D illustrate a parallel processor components, according to an embodiment;Figures 3A-3B are block diagrams of graphics multiprocessors, according to embodiments;Figures 4A-4F illustrate an exemplary architecture in which a plurality of GPUs are communicatively coupled to a plurality of multi-core processors;Figure 5 illustrates a graphics processing pipeline, according to an embodiment;Figure 6 illustrates a computing device employing a compute mechanism, according to an embodiment;Figures 7A & 7B illustrate embodiments of processing units;Figure 7C illustrates a matrix to vector transformation;Figure 8 illustrates a machine learning software stack, according to an embodiment;Figure 9 illustrates a highly-parallel general-purpose graphics processing unit, according to an embodiment;Figure 10 illustrates a multi-GPU computing system, according to an embodiment;Figure 11A-11B illustrate layers of exemplary deep neural networks;Figure 12 illustrates an exemplary recurrent neural network;Figure 13 illustrates training and deployment of a deep neural network;Figure 14 is a block diagram illustrating distributed learning;Figure 15 illustrates an exemplary inferencing system on a chip (SOC) suitable for performing inferencing using a trained model;Figure 16 is a block diagram of a processing system, according to an embodiment;Figure 17 is a block diagram of a processor according to an embodiment;Figure 18 is a block diagram of a graphics processor, according to an embodiment;Figure 19 
is a block diagram of a graphics processing engine of a graphics processor in accordance with some embodiments;Figure 20 is a block diagram of a graphics processor provided by an additional embodiment;Figure 21 illustrates thread execution logic including an array of processing elements employed in some embodiments;Figure 22 is a block diagram illustrating a graphics processor instruction formats according to some embodiments;Figure 23 is a block diagram of a graphics processor according to another embodiment;Figure 24A-24B illustrate a graphics processor command format and command sequence, according to some embodiments;Figure 25 illustrates exemplary graphics software architecture for a data processing system according to some embodiments;Figure 26 is a block diagram illustrating an IP core development system, according to an embodiment;Figure 27 is a block diagram illustrating an exemplary system on a chip integrated circuit, according to an embodiment;Figure 28 is a block diagram illustrating an additional exemplary graphics processor; andFigure 29 is a block diagram illustrating an additional exemplary graphics processor of a system on a chip integrated circuit, according to an embodiment.DETAILED DESCRIPTIONIn embodiments, mechanisms for optimizing computing of a graphics processor is disclosed. In some embodiments, the compute mechanism includes a plurality of processing units each comprising a plurality of execution units (EUs), wherein the plurality of EUs comprise a first EU type and a second EU type. In a further embodiment, the processing units may be included in a memory device. In yet a further embodiment, compute mechanism performs matrix-vector transformations using a register file or a shared local memory (SLM).In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments.System OverviewFigure 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs to one or more display device(s) 110A. In one embodiment, the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device.In one embodiment, the processing subsystem 101 includes one or more parallel processor(s) 112 coupled to memory hub 105 via a bus or other communication link 113. 
The communication link 113 may be one of any number of standards based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. In one embodiment, the one or more parallel processor(s) 112 form a computationally focused parallel or vector processing system that an include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. In one embodiment, the one or more parallel processor(s) 112 form a graphics processing subsystem that can output pixels to one of the one or more display device(s) 110A coupled via the I/O Hub 107. The one or more parallel processor(s) 112 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 110B.Within the I/O subsystem 111, a system storage unit 114 can connect to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s) 120. The network adapter 118 can be an Ethernet adapter or another wired network adapter. The wireless network adapter 119 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios.The computing system 100 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, may also be connected to the I/O hub 107. Communication paths interconnecting the various components in Figure 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NV-Link high-speed interconnect, or interconnect protocols known in the art.In one embodiment, the one or more parallel processor(s) 112 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, the one or more parallel processor(s) 112 incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s), 112 memory hub 105, processor(s) 102, and I/O hub 107 can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system 100 can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment, at least a portion of the components of the computing system 100 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. 
The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For instance, in some embodiments, system memory 104 is connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processor(s) 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and memory hub 105 may be integrated into a single chip. Some embodiments may include two or more sets of processor(s) 102 attached via multiple sockets, which can couple with two or more instances of the parallel processor(s) 112.Some of the particular components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated in Figure 1 . For example, the memory hub 105 may be referred to as a Northbridge in some architectures, while the I/O hub 107 may be referred to as a Southbridge.Figure 2A illustrates a parallel processor 200, according to an embodiment. The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor 200 is a variant of the one or more parallel processor(s) 112 shown in Figure 1 , according to an embodiment.In one embodiment, the parallel processor 200 includes a parallel processing unit 202. The parallel processing unit includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. In one embodiment, the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as memory hub 105. The connections between the memory hub 105 and the I/O unit 204 form a communication link 113. Within the parallel processing unit 202, the I/O unit 204 connects with a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations.When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations to perform those commands to a front end 208. In one embodiment, the front end 208 couples with a scheduler 210, which is configured to distribute commands or other work items to a processing cluster array 212. In one embodiment, the scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212.The processing cluster array 212 can include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads. 
The scheduler 210 can allocate work to the clusters 214A-214N of the processing cluster array 212 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler 210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212.In one embodiment, different clusters 214A-214N of the processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.The processing cluster array 212 can be configured to perform various types of parallel processing operations. In one embodiment, the processing cluster array 212 is configured to perform general-purpose parallel compute operations. For example, the processing cluster array 212 can include logic to execute processing tasks including filtering of video and/or audio data, and/or modeling operations, including physics operations, and performing data transformations.In one embodiment, the processing cluster array 212 is configured to perform parallel graphics processing operations. In embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 can include additional logic to support the execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array 212 can be configured to execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 can transfer data from system memory via the I/O unit 204 for processing. During processing the transferred data can be stored to on-chip memory (e.g., parallel processor memory 222) during processing, then written back to system memory.In one embodiment, when the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 can be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters 214A-214N of the processing cluster array 212. In some embodiments, portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.During operation, the processing cluster array 212 can receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from front end 208. 
For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 can be configured to ensure the processing cluster array 212 is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.Each of the one or more instances of the parallel processing unit 202 can couple with parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 as well as the I/O unit 204. The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 can include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 222. In one implementation the number of partition units 220A-220N is configured to be equal to the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding memory unit 224B, and an Nth partition unit 220N has a corresponding Nth memory unit 224N. In other embodiments, the number of partition units 220A-220N may not be equal to the number of memory devices.In various embodiments, the memory units 224A-224N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps may be stored across the memory units 224A-224N, allowing partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222. In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.In one embodiment, any one of the clusters 214A-214N of the processing cluster array 212 can process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. 
In one embodiment the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. In one embodiment the memory crossbar 216 can use virtual channels to separate traffic streams between the clusters 214A-214N and the partition units 220A-220N.While a single instance of the parallel processing unit 202 is illustrated within the parallel processor 200, any number of instances of the parallel processing unit 202 can be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. The different instances of the parallel processing unit 202 can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, and in one embodiment, some instances of the parallel processing unit 202 can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.Figure 2B is a block diagram of a partition unit 220, according to an embodiment. In one embodiment, the partition unit 220 is an instance of one of the partition units 220A-220N of Figure 2A . As illustrated, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache that is configured to perform load and store operations received from the memory crossbar 216 and ROP 226. Read misses and urgent write-back requests are output by L2 cache 221 to frame buffer interface 225 for processing. Dirty updates can also be sent to the frame buffer via the frame buffer interface 225 for opportunistic processing. In one embodiment, the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of Figure 2 (e.g., within parallel processor memory 222).In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP 226 includes compression logic to compress z or color data that is written to memory and decompress z or color data that is read from memory. In some embodiments, the ROP 226 is included within each processing cluster (e.g., cluster 214A-214N of FIG. 2 ) instead of within the partition unit 220. 
In such embodiment, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data.The processed graphics data may be displayed on display device, such as one of the one or more display device(s) 110 of Figure 1 , routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of Figure 2A .Figure 2C is a block diagram of a processing cluster 214 within a parallel processing unit, according to an embodiment. In one embodiment, the processing cluster is an instance of one of the processing clusters 214A-214N of Figure 2 . The processing cluster 214 can be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of Figure 2 and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an exemplary instance of an SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within a processing cluster 214. The graphics multiprocessor 234 can process data and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed vis the data crossbar 240.Each graphics multiprocessor 234 within the processing cluster 214 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete.. The functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. 
In one embodiment the same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.The instructions transmitted to the processing cluster 214 constitutes a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. In one embodiment multiple thread groups can be executed concurrently on a graphics multiprocessor 234.In one embodiment, the graphics multiprocessor 234 includes an internal cache memory to perform load and store operations. In one embodiment, the graphics multiprocessor 234 can forego an internal cache and use a cache memory (e.g., L1 cache 308) within the processing cluster 214. Each graphics multiprocessor 234 also has access to L2 caches within the partition units (e.g., partition units 220A-220N of Figure 2 ) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory. Embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234 can share common instructions and data, which may be stored in the L1 cache 308.Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of Figure 2 . The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile (talk more about tiling) and optionally a cache line index. The MMU 245 may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor 234 or the L1 cache or processing cluster 214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss.In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. 
Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor 234 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar 216. A preROP 242 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 234, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 220A-220N of Figure 2 ). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214. Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. In one embodiment, each processing cluster 214 can be configured to operate independently of other processing clusters 214 using separate and distinct processing units, L1 caches, etc.Figure 2D shows a graphics multiprocessor 234, according to one embodiment. In such embodiments, the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268.In one embodiment, the instruction cache 252 receives a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254. The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core 262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units 266.The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 324. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores 262, load/store units 266) of the graphics multiprocessor 324. In one embodiment, the register file 258 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 258. 
In one embodiment, the register file 258 is divided between the different warps being executed by the graphics multiprocessor 324.The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 324. The GPGPU cores 262 can be similar in architecture or can differ in architecture, according to embodiments. For example, and in one embodiment, a first portion of the GPGPU cores 262 include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. In one embodiment, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor 324 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In one embodiment one or more of the GPGPU cores can also include fixed or special function logic.The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 324 to the register file 258 and to the shared memory 270. In one embodiment, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, thus data transfer between the GPGPU cores 262 and the register file 258 is very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program managed cached. Threads executing on the GPGPU cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.Figures 3A-3B illustrate additional graphics multiprocessors, according to embodiments. The illustrated graphics multiprocessors 325, 350 are variants of the graphics multiprocessor 234 of Figure 2C . The illustrated graphics multiprocessors 325, 350 can be configured as a streaming multiprocessor (SM) capable of simultaneous execution of a large number of execution threads.Figure 3A shows a graphics multiprocessor 325 according to an additional embodiment. The graphics multiprocessor 325 includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of Figure 2D . For example, the graphics multiprocessor 325 can include multiple instances of the instruction unit 332A-332B, register file 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, GPGPU core 337A-337B, GPGPU core 338A-338B) and multiple sets of load/store units 340A-340B. In one embodiment, the execution resource units have a common instruction cache 330, texture and/or data cache memory 342, and shared memory 346. The various components can communicate via an interconnect fabric 327. 
In one embodiment, the interconnect fabric 327 includes one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325.Figure 3B shows a graphics multiprocessor 350 according to an additional embodiment. The graphics processor includes multiple sets of execution resources 356A-356D, where each set of execution resource includes multiple instruction units, register files, GPGPU cores, and load store units, as illustrated in Figure 2D and Figure 3A . The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354, and shared memory 362. In one embodiment, the execution resources 356A-356D can share an instruction cache 354 and shared memory 362, as well as multiple instances of a texture and/or data cache memory 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of Figure 3A .Persons skilled in the art will understand that the architecture described in Figures 1 , 2A-2D , and 3A-3B are descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of Figure 2 , as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein.In some embodiments, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.Techniques for GPU to Host Processor InterconnectionFigure 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413 are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440-443 (e.g., buses, point-to-point interconnects, etc.). In one embodiment, the high-speed links 440-443 support a communication throughput of 4GB/s, 30GB/s, 80GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles of the invention are not limited to any particular communication protocol or throughput.In addition, in one embodiment, two or more of the GPUs 410-413 are interconnected over high-speed links 444-445, which may be implemented using the same or different protocols/links than those used for high-speed links 440-443. 
Similarly, two or more of the multi-core processors 405-406 may be connected over high speed link 433 which may be symmetric multi-processor (SMP) buses operating at 20GB/s, 30GB/s, 120GB/s or higher. Alternatively, all communication between the various system components shown in Figure 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles of the invention are not limited to any particular type of interconnect technology.In one embodiment, each multi-core processor 405-406 is communicatively coupled to a processor memory 401-402, via memory interconnects 430-431, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450-453, respectively. The memory interconnects 430-431 and 450-453 may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In one embodiment, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to a particular memory 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories. For example, processor memories 401-402 may each comprise 64GB of the system memory address space and GPU memories 420-423 may each comprise 32GB of the system memory address space (resulting in a total of 256GB addressable memory in this example).Figure 4B illustrates additional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446 in accordance with one embodiment. The graphics acceleration module 446 may include one or more GPU chips integrated on a line card which is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the invention (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 462A-462D may comprise level 1 (LI) and level 2 (L2) caches. In addition, one or more shared caches 426 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one of the L2 and L3 caches are shared by two adjacent cores. 
The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include processor memories 401-402Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherence bus 464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate to over the coherence bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 464 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles of the invention.In one embodiment, a proxy circuit 425 communicatively couples the graphics acceleration module 446 to the coherence bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over high-speed link 440 (e.g., a PCIe bus, NVLink, etc.) and an interface 437 connects the graphics acceleration module 446 to the link 440.In one implementation, an accelerator integration circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.In one embodiment, the accelerator integration circuit 436 includes a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431-432, N. In one embodiment, the data stored in cache 438 and graphics memories 433-434, N is kept coherent with the core caches 462A-462D, 456 and system memory 411. As mentioned, this may be accomplished via proxy circuit 425 which takes part in the cache coherency mechanism on behalf of cache 438 and memories 433-434, N (e.g., sending updates to the cache 438 related to modifications/accesses of cache lines on processor caches 462A-462D, 456 and receiving updates from the cache 438).A set of registers 445 store context data for threads executed by the graphics processing engines 431-432, N and a context management circuit 448 manages the thread contexts. 
For example, the context management circuit 448 may perform save and restore operations to save and restore contexts of the various threads during contexts switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be execute by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. In one embodiment, an interrupt management circuit 447 receives and processes interrupts received from system devices.In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 411 by the MMU 439. One embodiment of the accelerator integration circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. In one embodiment, a virtualized graphics execution environment is presented in which the resources of the graphics processing engines 431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications.Thus, the accelerator integration circuit acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services. In addition, the accelerator integration circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.Because hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value. One function of the accelerator integration circuit 436, in one embodiment, is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.As mentioned, in the illustrated embodiment, one or more graphics memories 433-434, M are coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.In one embodiment, to reduce data traffic over link 440, biasing techniques are used to ensure that the data stored in graphics memories 433-434, M is data which will be used most frequently by the graphics processing engines 431-432, N and preferably not used by the cores 460A-460D (at least not frequently). 
Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines 431-432, N) within the caches 462A-462D, 456 of the cores and system memory 411.Figure 4C illustrates another embodiment in which the accelerator integration circuit 436 is integrated within the processor 407. In this embodiment, the graphics processing engines 431-432, N communicate directly over the high-speed link 440 to the accelerator integration circuit 436 via interface 437 and interface 435 (which, again, may be utilize any form of bus or interface protocol). The accelerator integration circuit 436 may perform the same operations as those described with respect to Figure 4B, but potentially at a higher throughput given its close proximity to the coherency bus 462 and caches 462A-462D, 426.One embodiment supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit 436 and programming models which are controlled by the graphics acceleration module 446.In one embodiment of the dedicated process model, graphics processing engines 431-432, N are dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431-432, N, providing virtualization within a VM/partition.In the dedicated-process programming models, the graphics processing engines 431-432, N, may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. In one embodiment, process elements are stored in system memory 411 and are addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16-bits of the process handle may be the offset of the process element within the process element linked list.Figure 4D illustrates an exemplary accelerator integration slice 490. As used herein, a "slice" comprises a specified portion of the processing resources of the accelerator integration circuit 436. Application effective address space 482 within system memory 411 stores process elements 483. In one embodiment, the process elements 483 are stored in response to GPU invocations 481 from applications 480 executed on the processor 407. A process element 483 contains the process state for the corresponding application 480. A work descriptor (WD) 484 contained in the process element 483 can be a single job requested by an application or may contain a pointer to a queue of jobs. 
In the latter case, the WD 484 is a pointer to the job request queue in the application's address space 482.The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. Embodiments of the invention include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431. Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integration circuit 436 for the owning partition and the operating system initializes the accelerator integration circuit 436 for the owning process at the time when the graphics acceleration module 446 is assigned.In operation, a WD fetch unit 491 in the accelerator integration slice 490 fetches the next WD 484 which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 445 and used by the MMU 439, interrupt management circuit 447 and/or context management circuit 446 as illustrated. For example, one embodiment of the MMU 439 includes segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.In one embodiment, the same set of registers 445 are duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 490. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.Table 1 - Hypervisor Initialized Registers1Slice Control Register2Real Address (RA) Scheduled Processes Area Pointer3Authority Mask Override Register4Interrupt Vector Table Entry Offset5Interrupt Vector Table Entry Limit6State Register7Logical Partition ID8Real address (RA) Hypervisor Accelerator Utilization Record Pointer9Storage Description RegisterExemplary registers that may be initialized by the operating system are shown in Table 2.Table 2 - Operating System Initialized Registers1Process and Thread Identification2Effective Address (EA) Context Save/Restore Pointer3Virtual Address (VA) Accelerator Utilization Record Pointer4Virtual Address (VA) Storage Segment Table Pointer5Authority Mask6Work descriptorIn one embodiment, each WD 484 is specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information a graphics processing engine 431-432, N requires to do its work or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.Figure 4E illustrates additional details for one embodiment of a shared model. This embodiment includes a hypervisor real address space 498 in which a process element list 499 is stored. 
The hypervisor real address space 498 is accessible via a hypervisor 496 which virtualizes the graphics acceleration module engines for the operating system 495.The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models where the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced shared and graphics directed shared.In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495. For a graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism. 2) An application's job request is guaranteed by the graphics acceleration module 446 to complete in a specified amount of time, including any translation faults, or the graphics acceleration module 446 provides the ability to preempt the processing of the job. 3) The graphics acceleration module 446 must be guaranteed fairness between processes when operating in the directed shared programming model.In one embodiment, for the shared model, the application 480 is required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module 446 and can be in the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module 446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR. If the accelerator integration circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. In one embodiment, the CSRP is one of the registers 445 containing the effective address of an area in the application's address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446. 
The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.Table 3 - OS to Hypervisor Call Parameters1A work descriptor (WD)2An Authority Mask Register (AMR) value (potentially masked).3An effective address (EA) Context Save/Restore Area Pointer (CSRP)4A process ID (PID) and optional thread ID (TID)5A virtual address (VA) accelerator utilization record pointer (AURP)6The virtual address of the storage segment table pointer (SSTP)7A logical interrupt service number (LISN)Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type. The process element may include the information shown in Table 4.Table 4 - Process Element Information1A work descriptor (WD)2An Authority Mask Register (AMR) value (potentially masked).3An effective address (EA) Context Save/Restore Area Pointer (CSRP)4A process ID (PID) and optional thread ID (TID)5A virtual address (VA) accelerator utilization record pointer (AURP)6The virtual address of the storage segment table pointer (SSTP)7A logical interrupt service number (LISN)8Interrupt vector table, derived from the hypervisor call parameters.9A state register (SR) value10A logical partition ID (LPID)11A real address (RA) hypervisor accelerator utilization record pointer12The Storage Descriptor Register (SDR)In one embodiment, the hypervisor initializes a plurality of accelerator integration slice 490 registers 445.As illustrated in Figure 4F, one embodiment of the invention employs a unified memory addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processors memories 401-402 and vice versa, thereby simplifying programmability. In one embodiment, a first portion of the virtual/effective address space is allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) is thereby distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.In one embodiment, bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E ensures cache coherence between the caches of the host processors (e.g., 405) and the GPUs 410-413 and biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 494A-494E are illustrated in Figure 4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436.One embodiment allows GPU-attached memory 420-423 to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. 
The ability to GPU-attached memory 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to setup operands and access computation results, without the overhead of tradition I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU attached memory 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU 410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.In one implementation, the selection of between GPU bias and host processor bias is driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPU 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.In one implementation, the bias table entry associated with each access to the GPU-attached memory 420-423 is accessed prior the actual access to the GPU memory, causing the following operations. First, local requests from the GPU 410-413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high speed link as discussed above). In one embodiment, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.One mechanism for changing the bias state employs an API call (e.g. OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.In one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. In order to access these pages, the processor 405 may request access from the GPU 410 which may or may not grant access right away, depending on the implementation. 
Thus, to reduce communication between the processor 405 and GPU 410 it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405 and vice versa.Graphics Processing PipelineFigure 5 illustrates a graphics processing pipeline 500, according to an embodiment. In one embodiment, a graphics processor can implement the illustrated graphics processing pipeline 500. The graphics processor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of Figure 2 , which, in one embodiment, is a variant of the parallel processor(s) 112 of Figure 1 . The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., parallel processing unit 202 of Figure 2 ) as described herein. For example, a shader unit (e.g., graphics multiprocessor 234 of Figure 3 ) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524. The functions of data assembler 502, primitive assemblers 506, 514, 518, tessellation unit 510, rasterizer 522, and raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., processing cluster 214 of Figure 3 ) and a corresponding partition unit (e.g., partition unit 220A-220N of Figure 2 ). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. In one embodiment, one or more portions of the graphics processing pipeline 500 can be performed by parallel processing logic within a general-purpose processor (e.g., CPU). In one embodiment, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., parallel processor memory 222 as in Figure 2 ) via a memory interface 528, which may be an instance of the memory interface 218 of Figure 2 .In one embodiment, the data assembler 502 is a processing unit that collects vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space.A first instance of a primitive assembler 506 receives vertex attributes from the vertex processing unit 50. The primitive assembler 506 readings stored vertex attributes as needed and constructs graphics primitives for processing by tessellation control processing unit 508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch. 
The control points are transformed from an input representation from the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit 510 is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit 512. The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.A second instance of a primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler 514 as specified by the geometry shader programs. In one embodiment the geometry processing unit 516 is programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives.In some embodiments the geometry processing unit 516 can add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520. The geometry processing unit 516 reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit 520 performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer 522. The rasterizer 522 can perform depth culling and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and output those fragments and associated coverage data to the fragment/pixel processing unit 524. The rasterizer 522 scan converts the new graphics primitives and outputs fragment and coverage data to the fragment/pixel processing unit 524.The fragment/pixel processing unit 524 is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit 524 transforming fragments or pixels received from rasterizer 522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit 524 may be programmed to perform operations included but not limited to texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit 526. 
The fragment/pixel processing unit 524 can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.The raster operations unit 526 is a processing unit that performs raster operations including, but not limited to stencil, z test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory 222 as in Figure 1 , to be displayed on the one or more display device(s) 110 or for further processing by one of the one or more processor(s) 102 or parallel processor(s)112. In some embodiments, the raster operations unit 526 is configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.Figure 6 illustrates one embodiment of a computing device 600 employing a compute optimization (compute) mechanism. Computing device 600 (e.g., smart wearable devices, virtual reality (VR) devices, head-mounted display (HMDs), mobile computers, Internet of Things (IoT) devices, laptop computers, desktop computers, server computers, etc.) may be the same as data processing system 100 of Figure 1 and accordingly, for brevity, clarity, and ease of understanding, many of the details stated above with reference to Figures 1-5 are not further discussed or repeated hereafter. As illustrated, in one embodiment, computing device 600 is shown as hosting a compute mechanism 610.As illustrated, in one embodiment, compute mechanism 610 may be hosted by graphics driver 616. However in other embodiments, compute mechanism 610 may be hosted solely in GPU 614. In yet other embodiments, compute mechanism 610 may be hosted by or part of firmware of central processing unit ("CPU" or "application processor") 612. For brevity, clarity, and ease of understanding, throughout the rest of this document, compute mechanism 610 may be discussed as part of graphics driver 616; however, embodiments are not limited as such.In yet another embodiment, compute mechanism 610 may be hosted as software or firmware logic by operating system 606. In yet a further embodiment, compute mechanism 610 may be partially and simultaneously hosted by multiple components of computing device 600, such as one or more of graphics driver 616, GPU 614, GPU firmware, CPU 612, CPU firmware, operating system 606, and/or the like. It is contemplated that compute mechanism 610 or one or more of their components may be implemented as hardware, software, and/or firmware.Throughout the document, term "user" may be interchangeably referred to as "viewer", "observer", "person", "individual", "end-user", and/or the like. It is to be noted that throughout this document, terms like "graphics domain" may be referenced interchangeably with "graphics processing unit", "graphics processor", or simply "GPU" and similarly, "CPU domain" or "host domain" may be referenced interchangeably with "computer processing unit", "application processor", or simply "CPU".Computing device 600 may include any number and type of communication devices, such as large computing systems, such as server computers, desktop computers, etc., and may further include set-top boxes (e.g., Internet-based cable television set-top boxes, etc.), global positioning system (GPS)-based devices, etc. 
Computing device 600 may include mobile computing devices serving as communication devices, such as cellular phones including smartphones, personal digital assistants (PDAs), tablet computers, laptop computers, e-readers, smart televisions, television platforms, wearable devices (e.g., glasses, watches, bracelets, smartcards, jewelry, clothing items, etc.), media players, etc. For example, in one embodiment, computing device 600 may include a mobile computing device employing a computer platform hosting an integrated circuit ("IC"), such as system on a chip ("SoC" or "SOC"), integrating various hardware and/or software components of computing device 600 on a single chip.As illustrated, in one embodiment, computing device 600 may include any number and type of hardware and/or software components, such as (without limitation) GPU 614, graphics driver (also referred to as "GPU driver", "graphics driver logic", "driver logic", user-mode driver (UMD), UMD, user-mode driver framework (UMDF), UMDF, or simply "driver") 616, CPU 612, memory 608, network devices, drivers, or the like, as well as input/output (I/O) sources 604, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, ports, connectors, etc.Computing device 600 may include operating system (OS) 606 serving as an interface between hardware and/or physical resources of the computer device 600 and a user. It is contemplated that CPU 612 may include one or more processors, such as processor(s) 102 of Figure 1 , while GPU 614 may include one or more graphics processors (or multiprocessors).It is to be noted that terms like "node", "computing node", "server", "server device", "cloud computer", "cloud server", "cloud server computer", "machine", "host machine", "device", "computing device", "computer", "computing system", and the like, may be used interchangeably throughout this document. It is to be further noted that terms like "application", "software application", "program", "software program", "package", "software package", and the like, may be used interchangeably throughout this document. Also, terms like "job", "input", "request", "message", and the like, may be used interchangeably throughout this document.It is contemplated and as further described with reference to Figures 1-5 , some processes of the graphics pipeline as described above are implemented in software, while the rest are implemented in hardware. A graphics pipeline may be implemented in a graphics coprocessor design, where CPU 612 is designed to work with GPU 614 which may be included in or co-located with CPU 612. In one embodiment, GPU 614 may employ any number and type of conventional software and hardware logic to perform the conventional functions relating to graphics rendering as well as novel software and hardware logic to execute any number and type of instructions.As aforementioned, memory 608 may include a random access memory (RAM) comprising application database having object information. A memory controller hub, such as memory hub 105 of Figure 1 , may access data in the RAM and forward it to GPU 614 for graphics pipeline processing. RAM may include double data rate RAM (DDR RAM), extended data output RAM (EDO RAM), etc. CPU 612 interacts with a hardware graphics pipeline to share graphics pipelining functionality.Processed data is stored in a buffer in the hardware graphics pipeline, and state information is stored in memory 608. 
The resulting image is then transferred to I/O sources 604, such as a display component for displaying of the image. It is contemplated that the display device may be of various types, such as Cathode Ray Tube (CRT), Thin Film Transistor (TFT), Liquid Crystal Display (LCD), Organic Light Emitting Diode (OLED) array, etc., to display information to a user.Memory 608 may comprise a pre-allocated region of a buffer (e.g., frame buffer); however, it should be understood by one of ordinary skill in the art that the embodiments are not so limited, and that any memory accessible to the lower graphics pipeline may be used. Computing device 600 may further include input/output (I/O) control hub (ICH) 107 as referenced in Figure 1 , as one or more I/O sources 604, etc.CPU 612 may include one or more processors to execute instructions in order to perform whatever software routines the computing system implements. The instructions frequently involve some sort of operation performed upon data. Both data and instructions may be stored in system memory 608 and any associated cache. Cache is typically designed to have shorter latency times than system memory 608; for example, cache might be integrated onto the same silicon chip(s) as the processor(s) and/or constructed with faster static RAM (SRAM) cells whilst the system memory 608 might be constructed with slower dynamic RAM (DRAM) cells. By tending to store more frequently used instructions and data in the cache as opposed to the system memory 608, the overall performance efficiency of computing device 600 improves. It is contemplated that in some embodiments, GPU 614 may exist as part of CPU 612 (such as part of a physical CPU package) in which case, memory 608 may be shared by CPU 612 and GPU 614 or kept separated.System memory 608 may be made available to other components within the computing device 600. For example, any data (e.g., input graphics data) received from various interfaces to the computing device 600 (e.g., keyboard and mouse, printer port, Local Area Network (LAN) port, modem port, etc.) or retrieved from an internal storage element of the computer device 600 (e.g., hard disk drive) are often temporarily queued into system memory 608 prior to being operated upon by the one or more processor(s) in the implementation of a software program. Similarly, data that a software program determines should be sent from the computing device 600 to an outside entity through one of the computing system interfaces, or stored into an internal storage element, is often temporarily queued in system memory 608 prior to its being transmitted or stored.Further, for example, an ICH may be used for ensuring that such data is properly passed between the system memory 608 and its appropriate corresponding computing system interface (and internal storage device if the computing system is so designed) and may have bi-directional point-to-point links between itself and the observed I/O sources/devices 604. Similarly, an MCH may be used for managing the various contending requests for system memory 608 accesses amongst CPU 612 and GPU 614, interfaces and internal storage elements that may proximately arise in time with respect to one another.I/O sources 604 may include one or more I/O devices that are implemented for transferring data to and/or from computing device 600 (e.g., a networking adapter); or, for a large scale non-volatile storage within computing device 600 (e.g., hard disk drive). 
User input device, including alphanumeric and other keys, may be used to communicate information and command selections to GPU 614. Another type of user input device is cursor control, such as a mouse, a trackball, a touchscreen, a touchpad, or cursor direction keys to communicate direction information and command selections to GPU 614 and to control cursor movement on the display device. Camera and microphone arrays of computer device 600 may be employed to observe gestures, record audio and video and to receive and transmit visual and audio commands.Computing device 600 may further include network interface(s) to provide access to a network, such as a LAN, a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth, a cloud network, a mobile network (e.g., 3rd Generation (3G), 4th Generation (4G), etc.), an intranet, the Internet, etc. Network interface(s) may include, for example, a wireless network interface having antenna, which may represent one or more antenna(e). Network interface(s) may also include, for example, a wired network interface to communicate with remote devices via network cable, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.Network interface(s) may provide access to a LAN, for example, by conforming to IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported. In addition to, or instead of, communication via the wireless LAN standards, network interface(s) may provide wireless communication using, for example, Time Division, Multiple Access (TDMA) protocols, Global Systems for Mobile Communications (GSM) protocols, Code Division, Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocols.Network interface(s) may include one or more communication interfaces, such as a modem, a network interface card, or other well-known interface devices, such as those used for coupling to the Ethernet, token ring, or other types of physical wired or wireless attachments for purposes of providing a communication link to support a LAN or a WAN, for example. In this manner, the computer system may also be coupled to a number of peripheral devices, clients, control surfaces, consoles, or servers via a conventional network infrastructure, including an Intranet or the Internet, for example.It is to be appreciated that a lesser or more equipped system than the example described above may be preferred for certain implementations. Therefore, the configuration of computing device 600 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. 
Examples of the electronic device or computer system 600 may include (without limitation) a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, television, digital television, set top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combinations thereof.Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parentboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware.Embodiments may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments described herein. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions.Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection).In conventional GPUs, computation units comprise identical execution units (EUs) that are replicated to produce an array of EUs. The identical configuration requires the EUs to have a superset of all potential features for various usages. However, different usages dictate different core infrastructure from a micro-architecture perspective. For example, 3D applications require large number of threads in the system with smaller thread register space (∼128), while media applications require a small number of threads with larger register space. Similarly, different compute applications will have their own unique requirements.According to one embodiment, GPU 614 is configured to include processing units having different types of execution units. Figure 7A illustrates one embodiment of GPU 614. As shown in Figure 7A , GPU 614 includes processing units 700 (e.g., 700(a) - 700(n)). 
According to one embodiment, each of processors 700 comprise different types of EUs. In such an embodiment, processor 700(a) may include EUs 705 (e.g., 705(a) - 705(n)) which comprise a first type, while processor 700(b) may include EUs 705 (e.g., 706(a) - 706(n)) comprising a second type. Similarly, processor 700(c) may include EUs 707 (e.g., 705(a) - 705(n)) comprising a third type.In other embodiments, processing units 700 may be identical, with different types of EUs. For instance, EUs 705(0), 706(0), 706(0) comprise a first type, while 705(1), 706(1), 706(1) and 705(n), 706(n), 706(n) comprise second and third types, respectively. The EU types may differ in the number of threads that may be processed, the number of registers per thread, or any other processing characteristic. For example, 3D applications may require a larger number of threads and smaller thread register space (e.g., ∼128), while media applications may require a small number of threads with larger register space. Thus, different EU types may be implemented to perform those applications.In other embodiments, the EUs may of the same type, however configured with different capabilities. In such embodiments, each EU type (or configuration) is designed for a specific deep learning use model (e.g., training, convolution, bias, Rectified Linear Units (ReLU), pooling, etc.).During an application, when GPU 614 is invoked, compute mechanism 610 selects the EUs that are to be implemented to execute a workload. In one embodiment, compute mechanism 610 may include a compiler that statically selects the EUs prior to execution of the application such that the EU configuration remains the same during the lifetime of a specific application. In other embodiments, compute mechanism 610 may be implemented as a thread dispatcher that optimally configures the EUs for each invocation of GPU 614 execution. In yet other embodiments, the thread dispatcher may dynamically change the EU configuration per thread group during dispatch.According to one embodiment, compute mechanism 610 may also transmit software hints to GPU 614. In such an embodiment, the hints may indicate that GPU 614 is to power down, or bypass, higher power cores if processing requires less processing intensive EUs. Thus, compute mechanism provides computational and power benefits that enable the dispatch of threads to particular EUs, while EUs that are not needed are transitioned to lower power states.GPU 614 may also be implemented to process matrix operations in deep learning applications. Such matrix applications require the transfer of large quantities of data from memory to GPU 614. According to one embodiment, processing units 700 may be included within memory to eliminate such data transfers. Figure 7B illustrates one embodiment of a memory 750 including including processing units 700.In one embodiment, memory 750 is a high bandwidth memory (HBM). In such an embodiment, a HBM is included in the same package as GPU 614. Thus, processing units 700 may be physically attached to the HBM controller 760. Although discussed herein as an HBM, other embodiments may implement different types of memory devices having a high-performance RAM interface.As shown in Figure 7B, memory 750 includes channels 752 (0, 1, ...N). In one embodiment, each memory channel 752 includes a processing unit 700, such that processing unit 700(a) is included within channel 0, processing unit 700(b) is included within channel 1 and processing unit 700(n) is included within channel N. 
In a further embodiment, processing units 700 may also include different types of EUs, as discussed above.In yet another embodiment, compute mechanism 610 may efficiently perform matrix-vector (e.g., matrix to vector and vector to matrix) transformations. Figure 7C illustrates an exemplary linear transformation of a 2D array (M+1) x (M+1). Matrix-vector transformations are compute intensive in current GPUs due to a requirement of having to move large quantities of data during the operations to perform the transformations. According to one embodiment, compute mechanism 610 implements a register file within GPU 614 to efficiently perform matrix to vector transformations and vector to matrix transformations.In such an embodiment, compute mechanism 610 modifies the register content of the register file without having to actually move the data. Thus, the matrix/vector data is stored in contiguous register blocks. In one embodiment, instructions to perform the operations include a source and a destination. In this embodiment, the source includes the register address start limit, stride of an array, number of elements and element size. Once operations are performed the results are stored in the destination register. Transformations with the register files may be implemented with any data type (e.g., 4-bit, 8-bit, 16-bit), or with machine learning algorithms. Although described above with register files, the transformations may be performed in a shared local memory (SLM) according to other embodiments.Machine Learning OverviewA machine learning algorithm is an algorithm that can learn based on a set of data. Embodiments of machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine which of several categories to which a given input belong; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or perform text to speech and/or speech recognition.An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers. Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients ("weights") respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms.Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. 
Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the "correct" labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered "trained" when the errors for each of the outputs generated from the instances of the training data set are minimized.The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices.Figure 8 is a generalized diagram of a machine learning software stack 800. A machine learning application 802 can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application 802 can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application 802 can implement any type of machine intelligence including but not limited to image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation.Hardware acceleration for the machine learning application 802 can be enabled via a machine learning framework 804. The machine learning framework 804 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 804, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 804. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). 
The machine learning framework 804 can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations.The machine learning framework 804 can process input data received from the machine learning application 802 and generate the appropriate input to a compute framework 806. The compute framework 806 can abstract the underlying instructions provided to the GPGPU driver 808 to enable the machine learning framework 804 to take advantage of hardware acceleration via the GPGPU hardware 810 without requiring the machine learning framework 804 to have intimate knowledge of the architecture of the GPGPU hardware 810. Additionally, the compute framework 806 can enable hardware acceleration for the machine learning framework 804 across a variety of types and generations of the GPGPU hardware 810.GPGPU Machine Learning AccelerationFigure 9 illustrates a highly-parallel general-purpose graphics processing unit 900, according to an embodiment. In one embodiment, the general-purpose processing unit (GPGPU) 900 can be configured to be particularly efficient in processing the type of computational workloads associated with training deep neural networks. Additionally, the GPGPU 900 can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks.The GPGPU 900 includes a host interface 902 to enable a connection with a host processor. In one embodiment, the host interface 902 is a PCI Express interface. However, the host interface can also be a vendor specific communications interface or communications fabric. The GPGPU 900 receives commands from the host processor and uses a global scheduler 904 to distribute execution threads associated with those commands to a set of compute clusters 906A-H. The compute clusters 906A-H share a cache memory 908. The cache memory 908 can serve as a higher-level cache for cache memories within the compute clusters 906A-H.The GPGPU 900 includes memory 914A-B coupled with the compute clusters 906A-H via a set of memory controllers 912A-B. In various embodiments, the memory 914A-B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory units 224A-N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM).In one embodiment, each compute cluster GPLAB06A-H includes a set of graphics multiprocessors, such as the graphics multiprocessor 400 of Figure 4A . The graphics multiprocessors of the compute cluster multiple types of integer and floating point logic units that can perform computational operations at a range of precisions including suited for machine learning computations. For example, and in one embodiment at least a subset of the floating point units in each of the compute clusters 906A-H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating point units can be configured to perform 64-bit floating point operations.Multiple instances of the GPGPU 900 can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. 
In one embodiment, the multiple instances of the GPGPU 900 communicate over the host interface 902. In one embodiment, the GPGPU 900 includes an I/O hub 908 that couples the GPGPU 900 with a GPU link 910 that enables a direct connection to other instances of the GPGPU. In one embodiment, the GPU link 910 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 900. In one embodiment, the GPU link 910 couples with a high speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In one embodiment, the multiple instances of the GPGPU 900 are located in separate data processing systems and communicate via a network device that is accessible via the host interface 902. In one embodiment, the GPU link 910 can be configured to enable a connection to a host processor in addition to or as an alternative to the host interface 902.While the illustrated configuration of the GPGPU 900 can be configured to train neural networks, one embodiment provides alternate configuration of the GPGPU 900 that can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration the GPGPU 900 includes fewer of the compute clusters 906A-H relative to the training configuration. Additionally, memory technology associated with the memory 914A-B may differ between inferencing and training configurations. In one embodiment, the inferencing configuration of the GPGPU 900 can support inferencing specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.Figure 10 illustrates a multi-GPU computing system 1000, according to an embodiment. The multi-GPU computing system 1000 can include a processor 1002 coupled to multiple GPGPUs 1006A-D via a host interface switch 1004. The host interface switch 1004, in one embodiment, is a PCI express switch device that couples the processor 1002 to a PCI express bus over which the processor 1002 can communicate with the set of GPGPUs 1006A-D. Each of the multiple GPGPUs 1006A-D can be an instance of the GPGPU 900 of Figure 9 . The GPGPUs 1006A-D can interconnect via a set of high-speed point to point GPU to GPU links 1016. The high-speed GPU to GPU links can connect to each of the GPGPUs 1006A-D via a dedicated GPU link, such as the GPU link 910 as in Figure 9 . The P2P GPU links 1016 enable direct communication between each of the GPGPUs 1006A-D without requiring communication over the host interface bus to which the processor 1002 is connected. With GPU-to-GPU traffic directed to the P2P GPU links, the host interface bus remains available for system memory access or to communicate with other instances of the multi-GPU computing system 1000, for example, via one or more network devices. While in the illustrated embodiment the GPGPUs 1006A-D connect to the processor 1002 via the host interface switch 1004, in one embodiment the processor 1002 includes direct support for the P2P GPU links 1016 and can connect directly to the GPGPUs 1006A-D.Machine Learning Neural Network ImplementationsThe computing architecture provided by embodiments described herein can be configured to perform the types of parallel processing that is particularly suited for training and deploying neural networks for machine learning. 
A neural network can be generalized as a network of functions having a graph relationship. As is well-known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described.A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for compute vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing. The nodes in the CNN input layer are organized into a set of "filters" (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function to the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.Recurrent neural networks (RNNs) are a family of feedforward neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for a RNN includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed.The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks. It will be understood that these descriptions are exemplary and non-limiting as to any specific embodiment described herein and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general.The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. 
However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques.Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output. The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different task.Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the of the neural network.Figures 11A & 11B illustrate an exemplary convolutional neural network. Figure 11A illustrates various layers within a CNN. As shown in Figure 11A, an exemplary CNN used to model image processing can receive input 1102 describing the red, green, and blue (RGB) components of an input image. The input 1102 can be processed by multiple convolutional layers (e.g., convolutional layer 1104, convolutional layer 1106). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers 1108. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers 1108 can be used to generate an output result from the network. The activations within the fully connected layers 908 can be computed using matrix multiplication instead of convolution. Not all CNN implementations are make use of fully connected layers 1108. For example, in some implementations the convolutional layer 1106 can generate output for the CNN.The convolutional layers are sparsely connected, which differs from traditional neural network configuration found in the fully connected layers 1108. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. 
The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images.Figure 11B illustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer 1112 of a CNN can be processed in three stages of a convolutional layer 1114. The three stages can include a convolution stage 1116, a detector stage 1118, and a pooling stage 1120. The convolution layer 1114 can then output data to a successive convolutional layer. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNNIn the convolution stage 1116 performs several convolutions in parallel to produce a set of linear activations. The convolution stage 1116 can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage 1116 defines a set of linear activations that are processed by successive stages of the convolutional layer 1114.The linear activations can be processed by a detector stage 1118. In the detector stage 1118, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as f(x) = max (0, x), such that the activation is thresholded at zero.The pooling stage 1120 uses a pooling function that replaces the output of the convolutional layer 1106 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 1120, including max pooling, average pooling, and 12-norm pooling. Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute and additional convolution stage having an increased stride relative to previous convolution stages.The output from the convolutional layer 1114 can then be processed by the next layer 1122. The next layer 1122 can be an additional convolutional layer or one of the fully connected layers 1108. 
For example, the first convolutional layer 1104 of Figure 11A can output to the second convolutional layer 1106, while the second convolutional layer can output to a first layer of the fully connected layers 1108.Figure 12 illustrates an exemplary recurrent neural network 1200. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words. The illustrated RNN 1200 can be described has having an input layer 1202 that receives an input vector, hidden layers 1204 to implement a recurrent function, a feedback mechanism 1205 to enable a 'memory' of previous states, and an output layer 1206 to output a result. The RNN 1200 operates based on time-steps. The state of the RNN at a given time step is influenced based on the previous time step via the feedback mechanism 1205. For a given time step, the state of the hidden layers 1204 is defined by the previous state and the input at the current time step. An initial input (x1) at a first time step can be processed by the hidden layer 1204. A second input (x2) can be processed by the hidden layer 1204 using state information that is determined during the processing of the initial input (x1). A given state can be computed as st = f(Uxt + Wst-1), where U and W are parameter matrices. The function f is generally a nonlinearity, such as the hyperbolic tangent function (Tanh) or a variant of the rectifier function f(x) = max(0, x). However, the specific mathematical function used in the hidden layers 1004 can vary depending on the specific implementation details of the RNN 1200.In addition to the basic CNN and RNN networks described, variations on those networks may be enabled. One example RNN variant is the long short term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to provide pre-train neural networks by determining an optimal initial set of weights for the neural network.Figure 13 illustrates training and deployment of a deep neural network. Once a given network has been structured for a task the neural network is trained using a training dataset 1302. Various training frameworks 1304 have been developed to enable hardware acceleration of the training process. For example, the machine learning framework 804 of Figure 8 may be configured as a training framework 1304. The training framework 1304 can hook into an untrained neural network 1306 and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural net 1308.To start the training process the initial weights may be chosen randomly or by pre-training using a deep belief network. 
The training cycle then be performed in either a supervised or unsupervised manner.Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset 1302 includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework 1304 can adjust to adjust the weights that control the untrained neural network 1306. The training framework 1304 can provide tools to monitor how well the untrained neural network 1306 is converging towards a model suitable to generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural net 1308. The trained neural network 1308 can then be deployed to implement any number of machine learning operations.Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset 1302 will include input data without any associated output data. The untrained neural network 1306 can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 1307 capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data.Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which in the training dataset 1302 includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network 1308 to adapt to the new data 1312 without forgetting the knowledge instilled within the network during initial training.Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node. Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process.Figure 14 is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network. The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes, such as the highly-parallel general-purpose graphics processing unit 900 as in Figure 9 . 
As illustrated, distributed learning can be performed model parallelism 1402, data parallelism 1404, or a combination of model and data parallelism 1204.In model parallelism 1402, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node. In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks.In data parallelism 1404, the different nodes of the distributed network have a complete instance of the model and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data parallel training approaches all require a technique of combining results and synchronizing the model parameters between each node. Exemplary approaches to combining data include parameter averaging and update based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update based data parallelism is similar to parameter averaging except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes.Combined model and data parallelism 1406 can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model with separate GPUs within each node are used to train different portions of the model.Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.Exemplary Machine Learning ApplicationsMachine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. 
Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training dataset than previously feasible and enables inferencing systems to be deployed using low power parallel processors.Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enables the deployment of low power inferencing processors in a mobile platform suitable for integration into autonomous vehicles.Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR). ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks have enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR.Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages.The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single node training and multi-node, multi-GPU training. Exemplary parallel processors suited for training include the highly-parallel general-purpose graphics processing unit and the multi-GPU computing system. On the contrary, deployed machine learning platforms generally include lower power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles.Figure 15 illustrates an exemplary inferencing system on a chip (SOC) 1500 suitable for performing inferencing using a trained model. The SOC 1500 can integrate processing components including a media processor 1502, a vision processor 1504, a GPGPU 1506 and a multi-core processor 1508. The SOC 1500 can additionally include on-chip memory 1505 that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC 1500 can be used as a portion of the main control system for an autonomous vehicle. Where the SOC 1500 is configured for use in autonomous vehicles the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction.During operation, the media processor 1502 and vision processor 1504 can work in concert to accelerate computer vision operations. The media processor 1502 can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams. 
The decoded video streams can be written to a buffer in the on-chip-memory 1505. The vision processor 1304 can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation of processing the frames using a trained image recognition model. For example, the vision processor 1504 can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back end model computations are performed by the GPGPU 1506.The multi-core processor 1508 can include control logic to assist with sequencing and synchronization of data transfers and shared memory operations performed by the media processor 1502 and the vision processor 1504. The multi-core processor 1308 can also function as an application processor to execute software applications that can make use of the inferencing compute capability of the GPGPU 1506. For example, at least a portion of the navigation and driving logic can be implemented in software executing on the multi-core processor 1508. Such software can directly issue computational workloads to the GPGPU 1506 or the computational workloads can be issued to the multi-core processor 1508, which can offload at least a portion of those operations to the GPGPU 1506.The GPGPU 1506 can include compute clusters such as a low power configuration of the compute clusters 906A-906H within the highly-parallel general-purpose graphics processing unit 900. The compute clusters within the GPGPU 1506 can support instruction that are specifically optimized to perform inferencing computations on a trained neural network. For example, the GPGPU 1506 can support instructions to perform low precision computations such as 8-bit and 4-bit integer vector operations.Additional Exemplary Graphics Processing SystemDetails of the embodiments described above can be incorporated within graphics processing systems and devices described below. The graphics processing system and devices of Figures 16-29 illustrate alternative systems and graphics processing hardware that can implement any and all of the techniques described above.Additional Exemplary Graphics Processing System OverviewFigure 16 is a block diagram of a processing system 1600, according to an embodiment. In various embodiments the system 1600 includes one or more processors 1602 and one or more graphics processors 1608, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1602 or processor cores 1607. In one embodiment, the system 1600 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.An embodiment of system 1600 can include, or be incorporated within a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments system 1600 is a mobile phone, smart phone, tablet computing device or mobile Internet device. Data processing system 1600 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. 
In some embodiments, data processing system 1600 is a television or set top box device having one or more processors 1602 and a graphical interface generated by one or more graphics processors 1608.In some embodiments, the one or more processors 1602 each include one or more processor cores 1607 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 1607 is configured to process a specific instruction set 1609. In some embodiments, instruction set 1609 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores 1607 may each process a different instruction set 1609, which may include instructions to facilitate the emulation of other instruction sets. Processor core 1607 may also include other processing devices, such a Digital Signal Processor (DSP).In some embodiments, the processor 1602 includes cache memory 1604. Depending on the architecture, the processor 1602 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1602. In some embodiments, the processor 1602 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 1607 using known cache coherency techniques. A register file 1606 is additionally included in processor 1602 which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1602.In some embodiments, processor 1602 is coupled with a processor bus 1610 to transmit communication signals such as address, data, or control signals between processor 1602 and other components in system 1600. In one embodiment the system 1600 uses an exemplary 'hub' system architecture, including a memory controller hub 1616 and an Input Output (I/O) controller hub 1630. A memory controller hub 1616 facilitates communication between a memory device and other components of system 1600, while an I/O Controller Hub (ICH) 1630 provides connections to I/O devices via a local I/O bus. In one embodiment, the logic of the memory controller hub 1616 is integrated within the processor.Memory device 1620 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 1620 can operate as system memory for the system 1600, to store data 1622 and instructions 1621 for use when the one or more processors 1602 executes an application or process. Memory controller hub 1616 also couples with an optional external graphics processor 1612, which may communicate with the one or more graphics processors 1608 in processors 1602 to perform graphics and media operations.In some embodiments, ICH 1630 enables peripherals to connect to memory device 1620 and processor 1602 via a high-speed I/O bus. 
The I/O peripherals include, but are not limited to, an audio controller 1646, a firmware interface 1628, a wireless transceiver 1626 (e.g., Wi-Fi, Bluetooth), a data storage device 1624 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 1640 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 1642 connect input devices, such as keyboard and mouse 1644 combinations. A network controller 1634 may also couple with ICH 1630. In some embodiments, a high-performance network controller (not shown) couples with processor bus 1610. It will be appreciated that the system 1600 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub 1630 may be integrated within the one or more processor 1602, or the memory controller hub 1616 and I/O controller hub 1630 may be integrated into a discreet external graphics processor, such as the external graphics processor 1612.Figure 17 is a block diagram of an embodiment of a processor 1700 having one or more processor cores 1702A-1702N, an integrated memory controller 1714, and an integrated graphics processor 1708. Those elements of Figure 17 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 1700 can include additional cores up to and including additional core 1702N represented by the dashed lined boxes. Each of processor cores 1702A-1702N includes one or more internal cache units 1704A-1704N. In some embodiments each processor core also has access to one or more shared cached units 1706.The internal cache units 1704A-1704N and shared cache units 1706 represent a cache memory hierarchy within the processor 1700. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1706 and 1704A-1704N.In some embodiments, processor 1700 may also include a set of one or more bus controller units 1716 and a system agent core 1710. The one or more bus controller units 1716 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core 1710 provides management functionality for the various processor components. In some embodiments, system agent core 1710 includes one or more integrated memory controllers 1714 to manage access to various external memory devices (not shown).In some embodiments, one or more of the processor cores 1702A-1702N include support for simultaneous multi-threading. In such embodiment, the system agent core 1710 includes components for coordinating and operating cores 1702A-1702N during multi-threaded processing. System agent core 1710 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 1702A-1702N and graphics processor 1708.In some embodiments, processor 1700 additionally includes graphics processor 1708 to execute graphics processing operations. 
In some embodiments, the graphics processor 1708 couples with the set of shared cache units 1706, and the system agent core 1710, including the one or more integrated memory controllers 1714. In some embodiments, a display controller 1711 is coupled with the graphics processor 1708 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 1711 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1708 or system agent core 1710.In some embodiments, a ring based interconnect unit 1712 is used to couple the internal components of the processor 1700. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 1708 couples with the ring interconnect 1712 via an I/O link 1713.The exemplary I/O link 1713 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1718, such as an eDRAM module. In some embodiments, each of the processor cores 1702A-1702N and graphics processor 1708 use embedded memory modules 1718 as a shared Last Level Cache.In some embodiments, processor cores 1702A-1702N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores 1702A-1702N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 1702A-1702N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment processor cores 1702A-1702N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor 1700 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.Figure 18 is a block diagram of a graphics processor 1800, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 1800 includes a memory interface 1814 to access memory. Memory interface 1814 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.In some embodiments, graphics processor 1800 also includes a display controller 1802 to drive display output data to a display device 1820. Display controller 1802 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. 
In some embodiments, graphics processor 1800 includes a video codec engine 1806 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.In some embodiments, graphics processor 1800 includes a block image transfer (BLIT) engine 1804 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 1810. In some embodiments, GPE 1810 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.In some embodiments, GPE 1810 includes a 3D pipeline 1812 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 1812 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 1815. While 3D pipeline 1812 can be used to perform media operations, an embodiment of GPE 1810 also includes a media pipeline 1816 that is specifically used to perform media operations, such as video post-processing and image enhancement.In some embodiments, media pipeline 1816 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine 1806. In some embodiments, media pipeline 1816 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 1815. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 1815.In some embodiments, 3D/Media subsystem 1815 includes logic for executing threads spawned by 3D pipeline 1812 and media pipeline 1816. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 1815, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem 1815 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.Graphics Processing EngineFigure 19 is a block diagram of a graphics processing engine 1910 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 1910 is a version of the GPE 1810 shown in Figure 18. Elements of Figure 19 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 1812 and media pipeline 1816 of Figure 18 are illustrated. 
The media pipeline 1816 is optional in some embodiments of the GPE 1910 and may not be explicitly included within the GPE 1910. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 1910.In some embodiments, GPE 1910 couples with or includes a command streamer 1903, which provides a command stream to the 3D pipeline 1812 and/or media pipelines 1816. In some embodiments, command streamer 1903 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 1903 receives commands from the memory and sends the commands to 3D pipeline 1812 and/or media pipeline 1816. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 1812 and media pipeline 1816. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 1812 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 1812 and/or image data and memory objects for the media pipeline 1816. The 3D pipeline 1812 and media pipeline 1816 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 1914.In various embodiments the 3D pipeline 1812 can execute one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 1914. The graphics core array 1914 provides a unified block of execution resources. Multi-purpose execution logic (e.g., execution units) within the graphic core array 1914 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.In some embodiments the graphics core array 1914 also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general purpose logic within the processor core(s) 1607 of Figure 16 or core 1702A-1702N as in Figure 17.Output data generated by threads executing on the graphics core array 1914 can output data to memory in a unified return buffer (URB) 1918. The URB 1918 can store data for multiple threads. In some embodiments the URB 1918 may be used to send data between different threads executing on the graphics core array 1914. In some embodiments the URB 1918 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 1920.In some embodiments, graphics core array 1914 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 1910. 
In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.The graphics core array 1914 couples with shared function logic 1920 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 1920 are hardware logic units that provide specialized supplemental functionality to the graphics core array 1914. In various embodiments, shared function logic 1920 includes but is not limited to sampler 1921, math 1922, and inter-thread communication (ITC) 1923 logic. Additionally, some embodiments implement one or more cache(s) 1925 within the shared function logic 1920. A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within the graphics core array 1914. Instead a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 1920 and shared among the execution resources within the graphics core array 1914. The precise set of functions that are shared between the graphics core array 1914 and included within the graphics core array 1914 varies between embodiments.Figure 20 is a block diagram of another embodiment of a graphics processor 2000. Elements of Figure 20 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.In some embodiments, graphics processor 2000 includes a ring interconnect 2002, a pipeline front-end 2004, a media engine 2037, and graphics cores 2080A-2080N. In some embodiments, ring interconnect 2002 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.In some embodiments, graphics processor 2000 receives batches of commands via ring interconnect 2002. The incoming commands are interpreted by a command streamer 2003 in the pipeline front-end 2004. In some embodiments, graphics processor 2000 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 2080A-2080N. For 3D geometry processing commands, command streamer 2003 supplies commands to geometry pipeline 2036. For at least some media processing commands, command streamer 2003 supplies the commands to a video front end 2034, which couples with a media engine 2037. In some embodiments, media engine 2037 includes a Video Quality Engine (VQE) 2030 for video and image post-processing and a multi-format encode/decode (MFX) 2033 engine to provide hardware-accelerated media data encode and decode. In some embodiments, geometry pipeline 2036 and media engine 2037 each generate execution threads for the thread execution resources provided by at least one graphics core 2080A.In some embodiments, graphics processor 2000 includes scalable thread execution resources featuring modular cores 2080A-2080N (sometimes referred to as core slices), each having multiple sub-cores 2050A-550N, 2060A-2060N (sometimes referred to as core sub-slices). In some embodiments, graphics processor 2000 can have any number of graphics cores 2080A through 2080N. 
In some embodiments, graphics processor 2000 includes a graphics core 2080A having at least a first sub-core 2050A and a second sub-core 2060A. In other embodiments, the graphics processor is a low power processor with a single sub-core (e.g., 2050A). In some embodiments, graphics processor 2000 includes multiple graphics cores 2080A-2080N, each including a set of first sub-cores 2050A-2050N and a set of second sub-cores 2060A-2060N. Each sub-core in the set of first sub-cores 2050A-2050N includes at least a first set of execution units 2052A-2052N and media/texture samplers 2054A-2054N. Each sub-core in the set of second sub-cores 2060A-2060N includes at least a second set of execution units 2062A-2062N and samplers 2064A-2064N. In some embodiments, each sub-core 2050A-2050N, 2060A-2060N shares a set of shared resources 2070A-2070N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.Execution UnitsFigure 21 illustrates thread execution logic 2100 including an array of processing elements employed in some embodiments of a GPE. Elements of Figure 21 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.In some embodiments, thread execution logic 2100 includes a shader processor 2102, a thread dispatcher 2104, instruction cache 2106, a scalable execution unit array including a plurality of execution units 2108A-2108N, a sampler 2110, a data cache 2112, and a data port 2114. In one embodiment the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution unit 2108A, 2108B, 2108C, 2108D, through 2108N-1 and 2108N) based on the computational requirements of a workload. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 2100 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 2106, data port 2114, sampler 2110, and execution units 2108A-2108N. In some embodiments, each execution unit (e.g. 2108A) is a stand-alone programmable general purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 2108A-2108N is scalable to include any number individual execution units.In some embodiments, the execution units 2108A-2108N are primarily used to execute shader programs. A shader processor 2102 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 2104. In one embodiment the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution unit in the execution units 2108A-2108N. For example, the geometry pipeline (e.g., 2036 of Figure 20) can dispatch vertex, tessellation, or geometry shaders to the thread execution logic 2100 (Figure 21) for processing. 
In some embodiments, thread dispatcher 2104 can also process runtime thread spawning requests from the executing shader programs.In some embodiments, the execution units 2108A-2108N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each of the execution units 2108A-2108N is capable of multi-issue single instruction multiple data (SIMD) execution and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 2108A-2108N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.Each execution unit in execution units 2108A-2108N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 2108A-2108N support integer and floating-point data types.The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.One or more internal instruction caches (e.g., 2106) are included in the thread execution logic 2100 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 2112) are included to cache thread data during thread execution. In some embodiments, a sampler 2110 is included to provide texture sampling for 3D operations and media sampling for media operations. 
In some embodiments, sampler 2110 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 2100 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 2102 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor 2102 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 2102 dispatches threads to an execution unit (e.g., 2108A) via thread dispatcher 2104. In some embodiments, pixel shader 2102 uses texture sampling logic in the sampler 2110 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discards one or more pixels from further processing.In some embodiments, the data port 2114 provides a memory access mechanism for the thread execution logic 2100 output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port 2114 includes or couples to one or more cache memories (e.g., data cache 2112) to cache data for memory access via the data port.Figure 22 is a block diagram illustrating a graphics processor instruction formats 2200 according to some embodiments. In one or more embodiment, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, instruction format 2200 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 2210. A 64-bit compacted instruction format 2230 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 2230. The native instructions available in the 64-bit format 2230 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 2213. 
The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 2210.For each format, instruction opcode 2212 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 2214 enables control over certain execution options, such as channels selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 2210 an exec-size field 2216 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 2216 is not available for use in the 64-bit compact instruction format 2230.Some execution unit instructions have up to three operands including two source operands, src0 2220, src1 2222, and one destination 2218. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 2224), where the instruction opcode 2212 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.In some embodiments, the 128-bit instruction format 2210 includes an access/address mode field 2226 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.In some embodiments, the 128-bit instruction format 2210 includes an access/address mode field 2226, which specifies an address mode and/or an access mode for the instruction. In one embodiment the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.In one embodiment, the address mode portion of the access/address mode field 2226 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.In some embodiments instructions are grouped based on opcode 2212 bit-fields to simplify Opcode decode 2240. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. 
In some embodiments, a move and logic opcode group 2242 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 2242 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of OOO1xxxxb. A flow control instruction group 2244 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 2246 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 2248 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 2248 performs the arithmetic operations in parallel across data channels. The vector math group 2250 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands.Graphics PipelineFigure 23 is a block diagram of another embodiment of a graphics processor 2300. Elements of Figure 23 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.In some embodiments, graphics processor 2300 includes a graphics pipeline 2320, a media pipeline 2330, a display engine 2340, thread execution logic 2350, and a render output pipeline 2370. In some embodiments, graphics processor 2300 is a graphics processor within a multi-core processing system that includes one or more general purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 2300 via a ring interconnect 2302. In some embodiments, ring interconnect 2302 couples graphics processor 2300 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 2302 are interpreted by a command streamer 2303, which supplies instructions to individual components of graphics pipeline 2320 or media pipeline 2330.In some embodiments, command streamer 2303 directs the operation of a vertex fetcher 2305 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 2303. In some embodiments, vertex fetcher 2305 provides vertex data to a vertex shader 2307, which performs coordinate space transformation and lighting operations to each vertex. In some embodiments, vertex fetcher 2305 and vertex shader 2307 execute vertex-processing instructions by dispatching execution threads to execution units 2352A-2352B via a thread dispatcher 2331.In some embodiments, execution units 2352A-2352B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 2352A-2352B have an attached L1 cache 2351 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.In some embodiments, graphics pipeline 2320 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. 
In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of tessellation output. A tessellator 2313 operates at the direction of hull shader 2311 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline 2320. In some embodiments, if tessellation is not used, tessellation components (e.g., hull shader 2311, tessellator 2313, and domain shader 2317) can be bypassed.In some embodiments, complete geometric objects can be processed by a geometry shader 2319 via one or more threads dispatched to execution units 2352A-2352B, or can proceed directly to the clipper 2329. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled the geometry shader 2319 receives input from the vertex shader 2307. In some embodiments, geometry shader 2319 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.Before rasterization, a clipper 2329 processes vertex data. The clipper 2329 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 2373 in the render output pipeline 2370 dispatches pixel shaders to convert the geometric objects into their per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 2350. In some embodiments, an application can bypass the rasterizer and depth test component 2373 and access un-rasterized vertex data via a stream out unit 2323.The graphics processor 2300 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 2352A-2352B and associated cache(s) 2351, texture and media sampler 2354, and texture/sampler cache 2358 interconnect via a data port 2356 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 2354, caches 2351, 2358 and execution units 2352A-2352B each have separate memory access paths.In some embodiments, render output pipeline 2370 contains a rasterizer and depth test component 2373 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 2378 and depth cache 2379 are also available in some embodiments. A pixel operations component 2377 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g. bit block image transfers with blending) are performed by the 2D engine 2341, or substituted at display time by the display controller 2343 using overlay display planes. In some embodiments, a shared L3 cache 2375 is available to all graphics components, allowing the sharing of data without the use of main system memory.In some embodiments, graphics processor media pipeline 2330 includes a media engine 2337 and a video front end 2334. In some embodiments, video front end 2334 receives pipeline commands from the command streamer 2303. 
In some embodiments, media pipeline 2330 includes a separate command streamer. In some embodiments, video front-end 2334 processes media commands before sending the command to the media engine 2337. In some embodiments, media engine 2337 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 2350 via thread dispatcher 2331.In some embodiments, graphics processor 2300 includes a display engine 2340. In some embodiments, display engine 2340 is external to processor 2300 and couples with the graphics processor via the ring interconnect 2302, or some other interconnect bus or fabric. In some embodiments, display engine 2340 includes a 2D engine 2341 and a display controller 2343. In some embodiments, display engine 2340 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 2343 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.In some embodiments, graphics pipeline 2320 and media pipeline 2330 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.Graphics Pipeline ProgrammingFigure 24A is a block diagram illustrating a graphics processor command format 2400 according to some embodiments. Figure 24B is a block diagram illustrating a graphics processor command sequence 2410 according to an embodiment. The solid lined boxes in Figure 24A illustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 2400 of Figure 24A includes data fields to identify a target client 2402 of the command, a command operation code (opcode) 2404, and the relevant data 2406 for the command. A sub-opcode 2405 and a command size 2408 are also included in some commands.In some embodiments, client 2402 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. 
Once the command is received by the client unit, the client unit reads the opcode 2404 and, if present, sub-opcode 2405 to determine the operation to perform. The client unit performs the command using information in data field 2406. For some commands an explicit command size 2408 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments commands are aligned via multiples of a double word.The flow diagram in Figure 24B shows an exemplary graphics processor command sequence 2410. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in at least partially concurrence.In some embodiments, the graphics processor command sequence 2410 may begin with a pipeline flush command 2412 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 2422 and the media pipeline 2424 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. In some embodiments, pipeline flush command 2412 can be used for pipeline synchronization or before placing the graphics processor into a low power state.In some embodiments, a pipeline select command 2413 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 2413 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 2412 is required immediately before a pipeline switch via the pipeline select command 2413.In some embodiments, a pipeline control command 2414 configures a graphics pipeline for operation and is used to program the 3D pipeline 2422 and the media pipeline 2424. In some embodiments, pipeline control command 2414 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 2414 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.In some embodiments, return buffer state commands 2416 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. 
In some embodiments, the return buffer state 2416 includes selecting the size and number of return buffers to use for a set of pipeline operations.The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 2420, the command sequence is tailored to the 3D pipeline 2422 beginning with the 3D pipeline state 2430 or the media pipeline 2424 beginning at the media pipeline state 2440.The commands to configure the 3D pipeline state 2430 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 2430 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.In some embodiments, 3D primitive 2432 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 2432 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 2432 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive 2432 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 2422 dispatches shader execution threads to graphics processor execution units.In some embodiments, 3D pipeline 2422 is triggered via an execute 2434 command or event. In some embodiments, a register write triggers command execution. In some embodiments execution is triggered via a 'go' or 'kick' command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.In some embodiments, the graphics processor command sequence 2410 follows the media pipeline 2424 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 2424 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.In some embodiments, media pipeline 2424 is configured in a similar manner as the 3D pipeline 2422. A set of commands to configure the media pipeline state 2440 are dispatched or placed into a command queue before the media object commands 2442. 
In some embodiments, media pipeline state commands 2440 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, media pipeline state commands 2440 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.In some embodiments, media object commands 2442 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 2442. Once the pipeline state is configured and media object commands 2442 are queued, the media pipeline 2424 is triggered via an execute command 2444 or an equivalent execute event (e.g., register write). Output from media pipeline 2424 may then be post processed by operations provided by the 3D pipeline 2422 or the media pipeline 2424. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.Graphics Software ArchitectureFigure 25 illustrates exemplary graphics software architecture for a data processing system 2500 according to some embodiments. In some embodiments, software architecture includes a 3D graphics application 2510, an operating system 2520, and at least one processor 2530. In some embodiments, processor 2530 includes a graphics processor 2532 and one or more general-purpose processor core(s) 2534. The graphics application 2510 and operating system 2520 each execute in the system memory 2550 of the data processing system.In some embodiments, 3D graphics application 2510 contains one or more shader programs including shader instructions 2512. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 2514 in a machine language suitable for execution by the general-purpose processor core 2534. The application also includes graphics objects 2516 defined by vertex data.In some embodiments, operating system 2520 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 2520 can support a graphics API 2522 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 2520 uses a front-end shader compiler 2524 to compile any shader instructions 2512 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 2510. In some embodiments, the shader instructions 2512 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.In some embodiments, user mode graphics driver 2526 contains a back-end shader compiler 2527 to convert the shader instructions 2512 into a hardware specific representation. 
When the OpenGL API is in use, shader instructions 2512 in the GLSL high-level language are passed to a user mode graphics driver 2526 for compilation. In some embodiments, user mode graphics driver 2526 uses operating system kernel mode functions 2528 to communicate with a kernel mode graphics driver 2529. In some embodiments, kernel mode graphics driver 2529 communicates with graphics processor 2532 to dispatch commands and instructions.IP Core ImplementationsOne or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.Figure 26 is a block diagram illustrating an IP core development system 2600 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 2600 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 2630 can generate a software simulation 2610 of an IP core design in a high level programming language (e.g., C/C++). The software simulation 2610 can be used to design, test, and verify the behavior of the IP core using a simulation model 2612. The simulation model 2612 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 2615 can then be created or synthesized from the simulation model 2612. The RTL design 2615 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 2615, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.The RTL design 2615 or equivalent may be further synthesized by the design facility into a hardware model 2620, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility 2665 using non-volatile memory 2640 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 2650 or wireless connection 2660. The fabrication facility 2665 may then fabricate an integrated circuit that is based at least in part on the IP core design. 
The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.Exemplary System on a Chip Integrated CircuitFigures 27-29 illustrated exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.Figure 27 is a block diagram illustrating an exemplary system on a chip integrated circuit 2700 that may be fabricated using one or more IP cores, according to an embodiment. Exemplary integrated circuit 2700 includes one or more application processor(s) 2705 (e.g., CPUs), at least one graphics processor 2710, and may additionally include an image processor 2715 and/or a video processor 2720, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 2700 includes peripheral or bus logic including a USB controller 2725, UART controller 2730, an SPI/SDIO controller 2735, and an I2S/I2C controller 2740. Additionally, the integrated circuit can include a display device 2745 coupled to one or more of a high-definition multimedia interface (HDMI) controller 2750 and a mobile industry processor interface (MIPI) display interface 2755. Storage may be provided by a flash memory subsystem 2760 including flash memory and a flash memory controller. Memory interface may be provided via a memory controller 2765 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 2770.Figure 28 is a block diagram illustrating an exemplary graphics processor 2810 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 2810 can be a variant of the graphics processor 2710 of Figure 27. Graphics processor 2810 includes a vertex processor 2805 and one or more fragment processor(s) 2815A-2815N (e.g., 2815A, 2815B, 2815C, 2815D, through 2815N-1, and 2815N). Graphics processor 2810 can execute different shader programs via separate logic, such that the vertex processor 2805 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 2815A-2815N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 2805 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 2815A-2815N use the primitive and vertex data generated by the vertex processor 2805 to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s) 2815A-2815N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API.Graphics processor 2810 additionally includes one or more memory management units (MMUs) 2820A-2820B, cache(s) 2825A-2825B, and circuit interconnect(s) 2830A-2830B. 
The one or more MMU(s) 2820A-2820B provide for virtual to physical address mapping for integrated circuit 2810, including for the vertex processor 2805 and/or fragment processor(s) 2815A-2815N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 2825A-2825B. In one embodiment the one or more MMU(s) 2825A-2825B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 2705, image processor 2715, and/or video processor 2720 of Figure 27, such that each processor 2705-2720 can participate in a shared or unified virtual memory system. The one or more circuit interconnect(s) 2830A-2830B enable graphics processor 2810 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments.Figure 29 is a block diagram illustrating an additional exemplary graphics processor 2910 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 2910 can be a variant of the graphics processor 2710 of Figure 27. Graphics processor 2910 includes the one or more MMU(s) 2820A-2820B, caches 2825A-2825B, and circuit interconnects 2830A-2830B of the integrated circuit 2800 of Figure 28.Graphics processor 2910 includes one or more shader core(s) 2915A-2915N (e.g., 2915A, 2915B, 2915C, 2915D, 2915E, 2915F, through 2915N-1, and 2915N), which provides for a unified shader core architecture in which a single core or type or core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. 
Additionally, graphics processor 2910 includes an inter-core task manager 2905, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 2915A-2915N and a tiling unit 2918 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches.Some embodiments pertain to Example 1 that includes an apparatus to facilitate compute optimization, comprising a plurality of processing units each comprising a plurality of execution units (EUs), wherein the plurality of EUs comprise a first EU type and a second EU type.Example 2 includes the subject matter of Example 1, wherein the plurality of processing units comprise a first processing unit including a plurality of EUs of the first type and a second processing unit including a plurality of EUs of the second type.Example 3 includes the subject matter of Examples 1 and 2, wherein the plurality of processing units comprise a first processing unit including a first set of EUs of the first type and a second set of EUs of the second type and a second processing unit including a third set of EUs of the first type and a fourth set of EUs of the second type.Example 4 includes the subject matter of Examples 1-3, further comprising compute logic to select the EUs that are to be implemented to execute a workload.Example 5 includes the subject matter of Examples 1-4, wherein the compute logic selects the EUs of the first type to process a first type of application workload and selects the EUs of the second type to process a second type of application workload.Example 6 includes the subject matter of Examples 1-5, further comprising a memory, wherein the plurality of processing units are included in the memory.Example 7 includes the subject matter of Examples 1-6, wherein the memory comprises a high bandwidth memory (HBM).Example 8 includes the subject matter of Examples 1-7, wherein the HBM comprises a first memory channel and a first processing unit included in the first memory channel.Example 9 includes the subject matter of Examples 1-8, further comprising a register file implemented to perform matrix-vector transformations.Example 10 includes the subject matter of Examples 1-9, further comprising a shared local memory (SLM) implemented to perform matrix-vector transformations.Some embodiments pertain to Example 11 that includes a graphics processor comprising a plurality of processing units each comprising a plurality of execution units (EUs), wherein the plurality of EUs comprise a first EU type and a second EU type, a first processing unit including a first set of execution units (EUs) and a second processing unit including a second set of EUs, wherein the first and second sets of EUs are comprised of a first EU type and a second EU type.Example 12 includes the subject matter of Example 11, wherein the first set of EUs comprise a plurality of EUs of the first type and the second set of EUs comprise a plurality of EUs of the second type.Example 13 includes the subject matter of Examples 11 and 12, wherein the first and second sets of EUs each comprise one or more EUs of the first type and one or more EUs of the second type.Example 14 includes the subject matter of Examples 11-13, further comprising compute logic to select the EUs that are to be implemented to execute a workload.Example 15 includes the subject matter of Examples 11-14, wherein the compute logic 
selects the EUs of the first type to process a first type of application workload and selects the EUs of the second type to process a second type of application workload.Example 16 includes the subject matter of Examples 11-15, further comprising a memory, wherein the plurality of processing units are included in the memory.Example 17 includes the subject matter of Examples 11-16, wherein the memory comprises a high bandwidth memory (HBM).Example 18 includes the subject matter of Examples 11-17, wherein the HBM comprises a first memory channel and a first processing unit included in the first memory channel.Example 19 includes the subject matter of Examples 11-18, further comprising a register file implemented to perform matrix-vector transformations.Example 20 includes the subject matter of Examples 11-19, further comprising a shared local memory (SLM) implemented to perform matrix-vector transformations.The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Persons skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the invention as set forth in the appended claims. |
The various aspects include systems and methods for enabling mobile computing devices to recognize when they are at risk of experiencing malicious behavior in the near future given a current configuration. Thus, the various aspects enable mobile computing devices to anticipate malicious behaviors before a malicious behavior begins rather than after the malicious behavior has begun. In the various aspects, a network server may receive behavior vector information from multiple mobile computing devices and apply pattern recognition techniques to the received behavior vector information to identify malicious configurations and pathway configurations that may lead to identified malicious configurations. The network server may inform mobile computing devices of identified malicious configurations and the corresponding pathway configurations, thereby enabling mobile computing devices to anticipate and prevent malicious behavior from beginning by recognizing when they have entered a pathway configuration leading to malicious behavior. |
1.A method of identifying a mobile computing device configuration that results in malicious behavior, including:Receiving configuration information and configuration history from a plurality of mobile computing devices;Analyzing the configuration information to identify a malicious configuration;Identifying a channel configuration based on the identified malicious configuration and the configuration history;Generating a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration;The malicious and channel configuration database is sent to a plurality of mobile computing devices.2.The method of claim 1 further comprising:Calculate the probability of transitioning to a malicious configuration for each channel configuration in the identified channel configuration;The calculated probability is included in the malicious and channel configuration database.3.The method of claim 1 further comprising:Identifying a malicious channel instruction that, when executed in the identified channel configuration, results in a malicious configuration of the identity;A list of instructions that, when executed, results in an identification of the identified malicious configuration is included in the malicious and channel configuration database.4.A method of predicting possible malicious behavior on a mobile computing device before it occurs, including:Receive malicious and channel configuration databases;Determine the current configuration;Determining whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database;Preventive measures are implemented in response to determining that the current configuration is causing a malicious configuration to avoid the malicious configuration.5.The method of claim 4 wherein implementing preventive measures to avoid said malicious configuration comprises:Identifying processes associated with the current configuration;Slow down the execution of the process.6.The method of claim 5 further comprising:Checking other behaviors that occur on the mobile computing device;Determining whether there is a significant likelihood that the current configuration is causing a malicious configuration based on the checking of the other behaviors;Preventive measures are implemented for the process in response to determining that there is a significant likelihood that the current configuration is causing a malicious configuration.7.The method of claim 5 further comprising:Determining the category of the current configuration;Identify the categories of potential future configurations;Determining, based on the category of the current configuration and the category of the potential future configuration, the likelihood that the current configuration results in a malicious configuration;Determining whether the likelihood is considerable;Preventive measures are implemented for the process in response to determining that the likelihood is substantial.8.The method of claim 5 further comprising:Determining, based on the current configuration and configuration transition probability, a probability that the current configuration results in a malicious configuration, wherein the configuration transition probability is included in the malicious and channel configuration database;Determining whether the probability that the current configuration results in a malicious configuration exceeds a risk threshold;A preventive measure is implemented for the process in response to 
determining that the current configuration results in a probability that the malicious configuration exceeds the risk threshold.9.A web server that includes:A server processor configured with server executable instructions to perform operations including:Receiving configuration information and configuration history from a plurality of mobile computing devices;Analyzing the configuration information to identify a malicious configuration;Identifying a channel configuration based on the identified malicious configuration and the configuration history;Generating a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration;The malicious and channel configuration database is sent to a plurality of mobile computing devices.10.The network server of claim 9, wherein the server processor is configured with server executable instructions to perform operations further comprising:Calculate the probability of transitioning to a malicious configuration for each channel configuration in the identified channel configuration;The calculated probability is included in the malicious and channel configuration database.11.The network server of claim 9, wherein the server processor is configured with server executable instructions to perform operations further comprising:Identifying a malicious channel instruction that, when executed in the identified channel configuration, results in a malicious configuration of the identity;A list of instructions that, when executed, results in an identification of the identified malicious configuration is included in the malicious and channel configuration database.12.A mobile computing device comprising:MemoryTransceiver; andA processor coupled to the memory and the transceiver, wherein the processor is configured with processor executable instructions to perform operations including:Receive malicious and channel configuration databases;Determine the current configuration;Determining whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database;Preventive measures are implemented in response to determining that the current configuration is causing a malicious configuration to avoid the malicious configuration.13.The mobile computing device of claim 12 wherein the processor is configured with processor executable instructions to perform operations comprising: enabling precautionary measures to avoid the malicious configuration:Identifying processes associated with the current configuration;Slow down the execution of the process.14.The mobile computing device of claim 13 wherein the processor is configured with processor executable instructions to perform operations further comprising:Checking other behaviors that occur on the mobile computing device;Determining whether there is a significant likelihood that the current configuration is causing a malicious configuration based on the checking of the other behaviors;Preventive measures are implemented for the process in response to determining that there is a significant likelihood that the current configuration is causing a malicious configuration.15.The mobile computing device of claim 13 wherein the processor is configured with processor executable instructions to perform operations further comprising:Determining the category of the current configuration;Identify the categories of potential future configurations;Determining, based on the category of the current configuration and the 
category of the potential future configuration, the likelihood that the current configuration results in a malicious configuration;Determining whether the likelihood is considerable;Preventive measures are implemented for the process in response to determining that the likelihood is substantial.16.The mobile computing device of claim 13 wherein the processor is configured with processor executable instructions to perform operations further comprising:Determining, based on the current configuration and configuration transition probability, a probability that the current configuration results in a malicious configuration, wherein the configuration transition probability is included in the malicious and channel configuration database;Determining whether the probability that the current configuration results in a malicious configuration exceeds a risk threshold;A preventive measure is implemented for the process in response to determining that the current configuration results in a probability that the malicious configuration exceeds the risk threshold.17.A web server that includes:Means for receiving configuration information and configuration history from a plurality of mobile computing devices;A unit for analyzing received configuration information to identify a malicious configuration;Means for identifying a channel configuration based on the identified malicious configuration and the configuration history;Means for generating a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration;A unit for transmitting the malicious and channel configuration database to a plurality of mobile computing devices.18.The network server according to claim 17, further comprising:Means for calculating the probability of transitioning to a malicious configuration for each of the identified channel configurations;A unit for including the calculated probability in the malicious and channel configuration database.19.The network server according to claim 17, further comprising:Means for identifying a malicious channel instruction that, when executed in the identified channel configuration, results in a malicious configuration of the identity;A list of instructions for causing an identification of the identified malicious configuration when executed is included in the malicious and channel configuration database.20.A mobile computing device comprising:a unit for receiving malicious and channel configuration databases;The unit used to determine the current configuration;Means for determining whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database;A means for implementing preventive measures to avoid the malicious configuration in response to determining that the current configuration is causing a malicious configuration.21.The mobile computing device of claim 20 wherein the means for implementing preventive measures to avoid said malicious configuration comprises:a unit for identifying a process associated with the current configuration;A unit for slowing down the execution of the process.22.The mobile computing device of claim 21, further comprising:a unit for checking other behaviors occurring on the mobile computing device;Means for determining whether there is a substantial likelihood that the current configuration is causing a malicious configuration based on a check of other behaviors;A means for implementing preventive measures against the process in response 
to determining that there is a likelihood that the current configuration is causing a malicious configuration.23.The mobile computing device of claim 21, further comprising:a unit for determining a category of the current configuration;a unit for determining the category of potential future configurations;Means for determining a likelihood that the current configuration results in a malicious configuration based on the category of the current configuration and the category of the potential future configuration;a unit for determining whether the likelihood is quite large;Means for implementing preventive measures against the process in response to determining that the likelihood is substantial.24.The mobile computing device of claim 21, further comprising:Means for determining a probability that the current configuration results in a malicious configuration based on the current configuration and configuration transition probability, wherein the configuration transition probability is included in the malicious and channel configuration database;Means for determining whether the probability that the current configuration causes a malicious configuration exceeds a risk threshold;Means for implementing preventive measures for the process in response to determining that the current configuration results in a malicious configuration that exceeds the risk threshold.25.A non-transitory server readable storage medium having server executable instructions stored thereon, the server executable instructions being configured to cause a server processor to perform operations comprising:Receiving configuration information and configuration history from a plurality of mobile computing devices;Analyzing the configuration information to identify a malicious configuration;Identifying a channel configuration based on the identified malicious configuration and the configuration history;Generating a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration;The malicious and channel configuration database is sent to a plurality of mobile computing devices.26.The non-transitory server readable storage medium of claim 25, wherein the stored server executable instructions are configured to cause the server processor to perform operations further comprising:Calculate the probability of transitioning to a malicious configuration for each channel configuration in the identified channel configuration;The calculated probability is included in the malicious and channel configuration database.27.The non-transitory server readable storage medium of claim 25, wherein the stored server executable instructions are configured to cause the server processor to perform operations further comprising:Identifying a malicious channel instruction that, when executed in the identified channel configuration, results in a malicious configuration of the identity;A list of instructions that, when executed, results in an identification of the identified malicious configuration is included in the malicious and channel configuration database.28.A non-transitory processor readable storage medium having processor executable instructions stored thereon, the processor executable instructions being configured to cause a mobile computing device processor to perform operations comprising:Receive malicious and channel configuration databases;Determine the current configuration;Determining whether the current configuration is causing a malicious configuration based on the 
malicious and channel configuration database;Preventive measures are implemented in response to determining that the current configuration is causing a malicious configuration to avoid the malicious configuration.29.The non-transitory processor readable storage medium of claim 28, wherein the stored processor-executable instructions are configured to cause the mobile computing device processor to perform operations comprising: enabling a precautionary measure to avoid The malicious configuration:Identifying processes associated with the current configuration;Slow down the execution of the process.30.The non-transitory processor readable storage medium of claim 29, wherein the stored processor executable instructions are configured to cause the mobile computing device processor to perform operations further comprising:Checking other behaviors that occur on the mobile computing device;Determining whether there is a significant likelihood that the current configuration is causing a malicious configuration based on a check of the other behavior;Preventive measures are implemented for the process in response to determining that there is a significant likelihood that the current configuration is causing a malicious configuration.31.The non-transitory processor readable storage medium of claim 29, wherein the stored processor executable instructions are configured to cause the mobile computing device processor to perform operations further comprising:Determining the category of the current configuration;Identify the categories of potential future configurations;Determining, based on the category of the current configuration and the category of the potential future configuration, the likelihood that the current configuration results in a malicious configuration;Determining whether the likelihood is considerable;Preventive measures are implemented for the process in response to determining that the likelihood is substantial.32.The non-transitory processor readable storage medium of claim 29, wherein the stored processor executable instructions are configured to cause the mobile computing device processor to perform operations further comprising:Determining, based on the current configuration and configuration transition probability, a probability that the current configuration results in a malicious configuration, wherein the configuration transition probability is included in the malicious and channel configuration database;Determining whether the probability that the current configuration results in a malicious configuration exceeds a risk threshold;A preventive measure is implemented for the process in response to determining that the current configuration results in a probability that the malicious configuration exceeds the risk threshold. |
Pre-identify possible malicious behavior based on configuration channelsRelated applicationThe present application is related to U.S. Patent Application Serial No. 14/044,956, filed on Jan. 3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,Background techniqueIn general, the performance and power efficiency of mobile computing devices degrade over time. Antivirus companies (for example, McAfee, Symantec, etc.) are now selling to mitigate this degraded mobile anti-virus product, firewall product, and encryption product. However, many of these solutions rely on periodic execution of computationally intensive scan engines on mobile computing devices, which can consume many of the processing and battery resources of mobile computing devices, slowing down mobile computing devices or causing Mobile computing devices cannot be used for a long period of time, and/or otherwise reduce the user experience. Moreover, these solutions are typically limited to detecting known viruses and malware, and do not deal with a variety of complex factors and/or interactions that often combine to facilitate mobile computing devices to Degradation of time (for example, when performance degradation is not caused by viruses or malware). For these and other reasons, existing anti-virus products, firewall products, and encryption products do not provide a variety of factors for identifying degradations that may contribute to the degradation of mobile computing devices over time, or for preventing the degradation of mobile computing devices. solution.Summary of the inventionVarious aspects propose a system for predicting malicious behavior on a mobile computing device before a malicious act begins, rather than after a malicious act has occurred or begins. In various aspects, the network server can receive behavior vector information from a plurality of mobile computing devices and can implement various pattern recognition techniques on the received behavior vector information to identify malicious configurations and channel configurations that result in those malicious configurations. The web server can notify the mobile computing device of the malicious configuration and corresponding channel configuration, thereby enabling the mobile computing device to predict and prevent malicious behavior in real time by identifying when it has entered or is about to enter a channel configuration that results in malicious behavior.In one aspect, the network server can receive configuration information from a plurality of mobile computing devices after the mobile computing device has detected an ongoing malicious activity. The configuration information may indicate the configuration or status of the mobile computing device at the time the malicious behavior was detected, as well as the history of the configuration and status of the mobile computing device that caused the malicious behavior. The network server can analyze the configuration information of the combined mobile computing device to determine the configuration indicating the malicious behavior and configuration mode, as well as the channel between the configuration (i.e., channel configuration) that results in the malicious configuration. The server can combine the identified channel configuration into a database or other suitable data structure, and can send the malicious and channel configuration database to the mobile computing device, which provides that the mobile computing device can be used in analyzing its own behavior and configuration. 
A database or data structure for the identified malicious configuration and channel configuration.In one aspect, after receiving the malicious and channel configuration database, the mobile computing device can determine its current configuration and compare its current configuration to the configuration included in the malicious and channel configuration data to determine if its current configuration is Is causing malicious behavior (ie, channel configuration). When the current configuration of the mobile computing device is a channel configuration, the mobile computing device can implement various preventive measures to prevent or prevent the initiation of malicious behavior.In another aspect, the web server can also calculate the probability that the channel configuration would result in malicious behavior. In such an aspect, the network server may utilize a configuration database or data structure to transmit a probability that a particular channel configuration results in a malicious configuration, and the mobile computing device may refer to the received probability other than the configuration of the channel in the configuration database or data structure. To determine if its current configuration is likely to result in a malicious configuration.In another aspect, the network server can identify particular instructions that, if executed, will turn the channel configuration into a malicious configuration. The network server can include such identified instructions in a configuration database or data structure, and the mobile computing device can reference the configuration database or data structure to closely monitor and prevent execution of the identified instructions when the current configuration of the device is a channel configuration. .Aspects include a method implemented by a web server for identifying a configuration of a mobile computing device that causes malicious behavior by receiving configuration information and configuration history from a plurality of mobile computing devices, analyzing the configuration information to identify a malicious configuration Identifying the channel configuration based on the identified malicious configuration and configuration history, generating a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration, and transmitting the malicious and channel configuration database to the plurality of mobile computing devices. In one aspect, the method can also include calculating a probability of transitioning to a malicious configuration for each of the identified channel configurations, and including the calculated probability in the malicious and channel configuration database. 
In another aspect, the method can also include identifying a malicious channel instruction that, when executed in the identified channel configuration, results in a malicious configuration of the identification and, when executed, results in the identified malicious configuration The list of identified instructions is included in the malicious and channel configuration database.Further aspects include a method implemented by a mobile computing device for predicting possible malicious behavior on a mobile computing device by: performing a malicious and channel configuration before the possible malicious behavior on the mobile computing device occurs A database that determines the current configuration, determines whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database, and implements preventive measures to avoid malicious configuration in response to determining that the current configuration is causing a malicious configuration. In one aspect, implementing a precautionary measure to avoid malicious configuration can include identifying a process associated with the current configuration and slowing down execution of the process.In another aspect, the method can also include checking other behaviors occurring on the mobile computing device, determining whether there is a significant current configuration that is causing a malicious configuration based on a check of other behaviors, and in response to determining the presence A fairly large current configuration is leading to the possibility of malicious configuration to implement preventive measures against the process. In another aspect, the method can also include determining a category of the current configuration, determining a category of potential future configurations, determining a likelihood that the current configuration results in a malicious configuration based on the currently configured category and the potential future configured category, determining the Whether the likelihood is considerable, and in response to determining that the likelihood is considerable, implement preventive measures for the process. In another aspect, the method can also include determining a probability that the current configuration results in a malicious configuration based on the current configuration and configuration transition probabilities (where the configuration transition probabilities are included in the malicious and channel configuration database), determining that the current configuration results in a malicious configuration Whether the probability exceeds the risk threshold, and preventive measures are implemented for the process in response to determining that the current configuration causes the probability of malicious configuration to exceed the risk threshold.Further aspects include a network server that can include a server processor configured with server-executable instructions to perform operations including: receiving configuration information and configuration history from a plurality of mobile computing devices, performing the configuration information Analysis to identify malicious configurations, identify channel configurations based on identified malicious configurations and configuration histories, generate malicious and channel configuration databases including identified malicious configurations and identified channel configurations, and send malicious and channel configuration databases to multiple Mobile computing devices. 
In one aspect, the server processor can be configured with server executable instructions to perform operations including: calculating a probability of transitioning to a malicious configuration for each of the identified channel configurations, and The calculated probabilities are included in the malicious and channel configuration database. In another aspect, the server processor can be configured with server executable instructions to perform operations including: identifying malicious channel instructions that, when executed in the identified channel configuration, result in The malicious configuration of the identity, as well as the list of instructions that will result in the identity of the identified malicious configuration when executed, are included in the malicious and channel configuration database.Further aspects include a mobile computing device that can include a memory, a transceiver, and a processor coupled to the memory and the transceiver, wherein the processor can be configured with processor-executable instructions to perform the Operation: Receive the malicious and channel configuration database, determine the current configuration, determine if the current configuration is causing malicious configuration based on the malicious and channel configuration database, and implement preventive measures to avoid malicious configuration in response to determining that the current configuration is causing malicious configuration. In another aspect, the processor can be configured with processor-executable instructions to perform the following operations such that preventive measures are implemented to avoid malicious configuration: identifying a process associated with the current configuration and slowing execution of the process.In one aspect, the processor can be configured with processor-executable instructions to perform operations further comprising: checking other behaviors occurring on the mobile computing device, based on checking other behaviors to determine if there is a substantial The current configuration is causing the possibility of malicious configuration, and implementing preventive measures against the process in response to determining that there is a considerable likelihood that the current configuration is causing a malicious configuration. In another aspect, the processor can be configured with processor-executable instructions to perform operations further comprising: determining a category of the current configuration, determining a category of potential future configurations, based on the currently configured category and potential The categories of future configurations determine the likelihood that the current configuration will result in a malicious configuration, determine if the likelihood is significant, and implement preventive measures for the process in response to determining that the likelihood is significant. 
In another aspect, the processor can be configured with processor-executable instructions to perform operations further comprising: determining a probability that the current configuration results in a malicious configuration based on the current configuration and configuring the transition probability (wherein the transition probability is configured) Included in the malicious and channel configuration database), determine if the current configuration causes the probability of malicious configuration to exceed the risk threshold, and implement preventive measures for the process in response to determining that the current configuration causes the probability of malicious configuration to exceed the risk threshold.Further aspects include a server including means for receiving configuration information and configuration history from a plurality of mobile computing devices for analyzing the configuration information to identify maliciously configured units for based on the identified malicious configuration and configuration A unit that historically identifies a channel configuration, a unit for generating a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration, and a unit for transmitting the malicious and channel configuration database to the plurality of mobile computing devices . In one aspect, the server can also include means for calculating a probability of transitioning to a malicious configuration for each of the identified channel configurations, and for including the calculated probability in the malicious and channel configuration database The unit in . In another embodiment, the server may further comprise means for identifying malicious channel instructions when the identified channel configuration results in the identified malicious configuration, and for malicious configuration that will result in identification when executed The list of identified instructions is included in the unit in the malicious and channel configuration database.Further aspects include a mobile computing device including means for receiving a malicious and channel configuration database for determining a currently configured unit for determining whether a current configuration is causing a malicious configuration based on a malicious and channel configuration database And means for implementing preventive measures to avoid malicious configuration in response to determining that the current configuration is causing malicious configuration. In one aspect, the means for implementing preventive measures to avoid malicious configuration may include means for identifying a process associated with the current configuration, and means for mitigating execution of the process.In one aspect, the mobile computing device can include means for checking other behaviors occurring on the mobile computing device for determining whether there is a significant current configuration that is causing a malicious configuration based on a check of other behaviors. The unit, and the means for implementing preventive measures against the process in response to determining that a substantial current configuration is causing a malicious configuration. 
In another aspect, the mobile computing device can also include means for determining a category of the current configuration, a unit for determining a category of potential future configurations for based on the currently configured category and the potential future configured category A unit that determines the likelihood that the current configuration is causing a malicious configuration, is used to determine if the likelihood is a fairly large unit, and a means for implementing preventive measures for the process in response to determining that the likelihood is significant. In another aspect, the mobile computing device can further include means for determining a probability that the current configuration results in a malicious configuration based on the current configuration and configuration transition probabilities (where the configuration transition probabilities are included in the malicious and channel configuration database), A means for determining whether the current configuration results in a malicious configuration exceeding a risk threshold, and means for implementing preventive measures for the process in response to determining that the current configuration causes the probability of malicious configuration to exceed the risk threshold.In a further aspect, a non-transitory server readable storage medium can have server executable instructions stored thereon, the server executable instructions being configured to cause a server processor to perform operations comprising: Receiving configuration information and configuration history from a plurality of mobile computing devices, analyzing the configuration information to identify a malicious configuration, identifying a channel configuration based on the identified malicious configuration and configuration history, generating the identified malicious configuration and the identified channel configuration Malicious and channel configuration databases, and sending malicious and channel configuration databases to multiple mobile computing devices. In one aspect, the stored server executable instructions can be configured to cause the server processor to perform operations including calculating a probability of transitioning to a malicious configuration for each of the identified channel configurations, and The calculated probabilities are included in the malicious and channel configuration database. In another aspect, the stored server executable instructions can be configured to cause the server processor to perform an operation comprising: identifying a malicious channel instruction, when executed in the identified channel configuration, A malicious configuration that results in an identity, and a list of instructions that will cause the identified malicious configuration to be identified when executed, are included in the malicious and channel configuration database.In a further aspect, a non-transitory processor readable storage medium can have processor-executable instructions stored thereon, the processor-executable instructions being configured to cause a mobile computing device processor to perform the following Operation of each item: Receive malicious and channel configuration database, determine current configuration, confirm the current configuration based on malicious and channel configuration database to cause malicious configuration, and implement preventive measures to prevent maliciousness in response to determining that the current configuration is causing malicious configuration Configuration. 
In one aspect, the stored processor-executable instructions can be configured to cause the mobile computing device processor to perform operations including, such that a preventive measure is implemented to avoid malicious configuration: identifying a process associated with the current configuration, And slow down the execution of the process.In one aspect, the stored processor-executable instructions can be configured to cause the mobile computing device processor to perform operations including: checking other behaviors occurring on the mobile computing device, determining based on inspections of other behaviors Whether there is a considerable likelihood that the current configuration is causing a malicious configuration, and implementing preventive measures for the process in response to determining that there is a considerable likelihood that the current configuration is causing a malicious configuration. In another aspect, the stored processor-executable instructions can be configured to cause the mobile computing device processor to perform operations comprising: determining a current configured category, determining a category of potential future configurations, based on the current configuration The categories and categories of potential future configurations determine the likelihood that the current configuration will result in a malicious configuration, determine if the likelihood is significant, and implement preventive measures for the process in response to determining that the likelihood is significant. In another aspect, the stored processor-executable instructions can be configured to cause the mobile computing device processor to perform operations including: determining a probability that the current configuration results in a malicious configuration based on the current configuration and the configuration transition probability (where The configuration transition probability is included in the malicious and channel configuration database, determining whether the current configuration causes the probability of malicious configuration to exceed the risk threshold, and implementing preventive measures for the process in response to determining that the current configuration causes the probability of malicious configuration to exceed the risk threshold.DRAWINGSThe accompanying drawings, which are incorporated in and constitute a part of this specification feature.1 is a block diagram of a communication system showing network components of an exemplary communication system suitable for use in various aspects.2 is a block diagram showing exemplary logical components and information flows in a mobile computing device of one aspect, the mobile computing device of the one aspect being configured to determine whether a behavior, software application, or process of a particular mobile computing device is Is causing malicious behavior.3 is a block diagram showing exemplary components and information flows in a system having an aspect of a network server configured to identify malicious configurations and configurations that result in malicious behavior in a cloud service/network, and These configurations are sent to the mobile computing device for use in avoiding malicious behavior on the mobile computing device.4 is a process flow diagram showing an aspect method for transmitting a malware and channel configuration database including information regarding malicious configurations and channel configurations to a mobile computing device.5 is a process flow diagram showing a method 
for predicting and implementing preventative measures to avoid malicious configuration on a mobile computing device.6 is a process diagram showing an aspect method for implementing preventive measures in response to determining, based in part on a check of other behaviors occurring on a mobile computing device, that there is a likelihood of substantial malicious behavior in the near future. flow chart.Figure 7A is a finite state machine diagram showing finite state machine analysis for predicting the likelihood of malicious behavior in the near future.Figure 7B is an embodiment lookup table used when predicting the likelihood of malicious behavior in the near future.8 is a process flow diagram showing an aspect method for determining the likelihood of entering a malicious configuration based on a current configuration and a potential future configuration.Figure 9 is a Markov chain diagram showing a Markov chain analysis for predicting the probability of a malicious behavior in the near future based on the transition probability between configurations.10 is a process flow diagram showing an aspect method for determining a probability of entering a malicious configuration based on a probability that a current configuration will result in a malicious configuration.11 is a process flow diagram showing an aspect method for implementing preventive measures in response to determining the likelihood of substantial malicious behavior in the near future based on instructions to be executed.12 is a component block diagram of a mobile computing device suitable for use in one aspect.13 is a component block diagram of another mobile computing device suitable for use in one aspect.14 is a component block diagram of a network server device suitable for use in one aspect.Detailed waysVarious aspects will be described in detail with reference to the accompanying drawings. Whenever possible, the same reference numerals will be used throughout the drawings. References to specific examples and implementations are for illustrative purposes and are not intended to limit the scope of the invention or the claims.Many different cellular and mobile communication services and standards are available or contemplated in the future, all of which can achieve various aspects and benefit from various aspects. Such services and standards include, for example, Third Generation Partnership Project (3GPP), Long Term Evolution (LTE) systems, Third Generation Wireless Mobile Telecommunications (3G), Fourth Generation Wireless Mobile Telecommunications (4G), Global Mobile Communication System (GSM), Universal Mobile Telecommunications System (UMTS), 3GSM, General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA) systems (eg, cdmaOne), Enhanced Data Rate GSM Evolution (EDGE), Advanced Mobile Phone System (AMPS), Digital AMPS (IS-136/TDMA), Evolution Data Optimized (EV-DO), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Wireless Local Area Network (WLAN) Wi-Fi Protected Access I&II (WPA, WPA2), and Integrated Digital Enhanced Network (iden). Each of these techniques involves, for example, the transmission and reception of voice, data, signaling, and/or content messages. It will be understood that any reference to terms and/or technical details relating to individual telecommunication standards or technologies, unless specifically stated in the language of the claims, is for illustrative purposes only. 
It is not intended to limit the scope of the claims to the particular communication system or technology.The term "mobile computing device" as used herein refers to any or all of the following: cellular telephone, smart phone, personal or mobile multimedia player, personal digital assistant (PDA), laptop Computers, tablets, smartbooks, ultra-notebooks, palmtop computers, wireless email receivers, cellular phones with multimedia Internet capabilities, wireless game controllers, and programmable processors including memory, performance-critical, and Operating under battery power makes the power saving method useful for similar personal electronic devices. While various aspects are particularly useful for mobile computing devices, such as smart phones, having limited resources, these aspects are generally useful in any electronic device that includes a processor and executes an application.The term "malicious behavior" is used herein to refer to a variety of undesirable mobile computing device operations and characteristics, such as longer processing times, lower battery life, loss of dedicated data, malicious economic activity (eg, sending Unauthorized extra paid SMS messages), operations related to the use of mobile computing devices or the use of phones for spy or botnet activities.The term "malicious configuration" is used herein to refer to a configuration of a mobile computing device, application, process, etc. that presents or performs a malicious act. The term "suspicious configuration" is used herein to refer to evidence of some malicious behavior in it, but requires more information to be configured before reaching a definitive conclusion about malicious behavior. This article uses the term "benign configuration" to refer to a configuration that is neither a malicious configuration nor a suspicious configuration.The term "channel configuration" is used herein to refer to a vector or channel that the network server has identified as an intermediate configuration that results in a malicious configuration. In various aspects, the channel configuration can be any configuration that results in a malicious configuration (eg, a benign configuration, a suspicious configuration, or a malicious configuration).There are a variety of factors that can contribute to the degradation of performance and power utilization of mobile computing devices over time, including: poorly designed software applications, malware, viruses, fragmented memory, background processes, and other malicious behaviors. However, due to the complexity of modern mobile computing devices, it is increasingly difficult for users, operating systems, and/or applications (eg, anti-virus software, etc.) to accurately and efficiently identify the source of such problems and/or Or provide an appropriate remedy for the identified problem.There are currently various solutions for detecting malicious behavior on computing devices. Many solutions have traditionally relied on a feature database of malicious code/malware built on the server. These solutions require reference to the feature database to detect the code based identification (ie, features) (eg, the name of the file, the name of the function call, the structure of the particular code segment, and even the characteristics of each byte of the code). Whether the code is malicious. 
However, these solutions are not sufficient to detect malicious behavior that may be undetectable until the code is executed and become increasingly ineffective due to new techniques for falsifying features. In contrast, the various aspects described below enable a mobile computing device to detect malicious behavior during normal operation (i.e., in real time) and prevent such malicious behavior from occurring in the future, regardless of any particular identification or feature.Other solutions use behavioral models to distinguish between malicious and benign processes/programs on computing devices. However, these solutions are currently limited to evaluating the current/in-progressive behavior of individual applications or processes. Therefore, these solutions are limited to solving problems after they have started. In contrast, the various aspects described below enable mobile computing devices to predict and prevent such malicious behavior in real time before future malicious behavior occurs.In addition, some solutions look for signs of malicious behavior in code, files, scripts, etc., by initiating a preemptive scan before code, files, scripts, etc. are executed. For example, a solution may require that a file downloaded from a location on the Internet be scanned for viruses before it is executed locally. Other solutions attempt to discover malicious behavior by executing a program or process in a secure environment (e. g., a virtual machine) and attempting to discover whether the program or process behaves maliciously when it is run. However, because each suspicious program, file, process, etc. must be determined to be benign before being allowed to execute as part of normal operation, these solutions require considerable investment in computing resources.In contrast to conventional approaches, the various aspects described below enable mobile computing devices to detect and prevent malicious behavior in real time, thereby avoiding considerable startup costs for concurrent methods and allowing applications and processes to perform normally until mobile computing The device detects a credible risk of future malicious behavior. In the overview, given the current state or operational status of the mobile computing device, as well as the operations scheduled for execution, various aspects address the limitations of the solution at the same time by providing the mobile device with a database of channel configurations - for example, Those described herein, the channel configuration enables the mobile computing device to determine if it is in danger of experiencing malicious behavior in the near future. Accordingly, various aspects propose a system for predicting malicious behavior on a mobile computing device prior to the onset of malicious behavior, rather than after malicious behavior has occurred or begun. In various aspects, a network server can receive behavior vector information from a plurality of mobile computing devices, and can implement various pattern recognition techniques (including finite state machine analysis) on the received behavior vector information to identify malicious configurations and cause those malicious configurations. Channel configuration. The network server can notify the mobile computing device of the malicious configuration and the corresponding channel configuration (ie, exit the short configuration before the identified malicious configuration), thereby enabling the mobile computing device to identify when it has entered or is about to enter. 
The channel configuration of malicious behavior to predict and prevent malicious behavior in real time.In one aspect, after the mobile computing device has detected an ongoing malicious behavior, the network server can receive configuration information (e.g., a state in a finite state machine or a vector value in a behavior vector) from a plurality of mobile computing devices. The configuration information may indicate the configuration or status of the mobile computing device at the time the malicious behavior was detected, as well as the history of the configuration and status of the mobile computing device that caused the malicious behavior. The network server can analyze the configuration information of the combined mobile computing device (e. g., by utilizing pattern recognition or finite state machine analysis) to determine a configuration indicative of malicious behavior. The network server can utilize the configuration history of the mobile computing device to "return" from the malicious configuration to identify configuration patterns and channels between configurations (ie, channel configurations) that result in malicious configuration. The server may combine the identified channel configurations into a database or other suitable data structure and may send a malicious and channel configuration database to the mobile computing device providing the identified malicious configuration and channel configuration database or data structure, the mobile computing device You can use the database or data structure of the identified malicious configuration and channel configuration when analyzing its own behavior and configuration.In another aspect, after receiving the malicious and channel configuration data, the mobile computing device can determine its current configuration and compare the current configuration to the configuration included in the malicious and channel configuration database to determine if its current configuration is Lead to malicious behavior. In other words, the mobile computing device can utilize the configuration database or data structure received from the network server to determine if its current configuration is a channel configuration. When the current configuration of the mobile computing device is a channel configuration, the mobile computing device can implement various preventive measures to prevent or prevent the initiation of malicious behavior.In another aspect, the web server can also calculate the probability that the channel configuration would result in malicious behavior. In such an aspect, the network server may utilize a configuration database or data structure to transmit a probability that a particular channel configuration results in a malicious configuration, and the mobile computing device may refer to a probability in the received configuration database or data structure other than the channel configuration. To determine if its current configuration is likely to result in a malicious configuration.In another aspect, the web server can identify a particular instruction that, if executed, changes the channel configuration to a malicious configuration. 
The network server can include such identified instructions in a configuration database or data structure, and when the current configuration of the device is a channel configuration, the mobile computing device can refer to the configuration database or data structure to closely monitor and prevent execution of the identified instructions.Various aspects may be implemented in a wide variety of communication systems, such as the exemplary communication system 100 shown in FIG. A typical cellular telephone network 104 includes a plurality of cell base stations 106 coupled to a network operations center 108 that operate, for example, via a telephone landline (e.g., a POTS network, not shown) and the Internet 110. Voice calls and data are connected between the mobile computing device 102 (e.g., cell phone, laptop, tablet, etc.) and other network destinations. Communication between the mobile computing device 102 and the telephony network 104 can be accomplished via a two-way wireless communication link 112, such as 4G, 3G, CDMA, TDMA, LTE, and/or other mobile communication technologies. Telephone network 104 may also include one or more servers 114 coupled to network operations center 108 or within network operations center 108, which provides a connection to Internet 110.Communication system 100 may also include a network server 118 that is coupled to telephone network 104 and that is connected to Internet 110. The connection between the web server 116 and the telephone network 104 may be through the Internet 110 or through a dedicated network (as indicated by the dashed arrows). Network server 116 may also be implemented as a server within the network infrastructure of cloud service provider network 118. Communication between the web server 116 and the mobile computing device 102 can be accomplished over the telephone network 104, the Internet 110, a private network (not shown), or any combination thereof.The mobile computing device 102 can collect behavior, status, classification, modeling, success rate, and/or statistical information in the mobile computing device 102 and send the collected information to the network server 116 (eg, via the telephone network 104) For analysis. In one aspect, mobile computing device 102 can transmit its current configuration information (e.g., its behavior vector describing its current state) after experiencing malicious behavior. Mobile computing device 102 can also send its configuration history to network server 116. The configuration history can include a history of configuration changes that occur that result in the discovery of malicious behavior, and, optionally, instructions that cause those configurations to change. As further described below with respect to FIG. 4, web server 116 can use the information received from mobile computing device 102 to determine a list of malicious configurations and configurations (i.e., channel configurations) that result in malicious configurations.In another aspect, the web server 116 can send a malicious and channel configuration database to the mobile computing device 102, which can receive and use a malicious and channel configuration database to predict future malicious behavior before it occurs. . 
The web server 116 can send subsequent malicious and channel configuration databases to the mobile computing device 102 to replace, update, create, and/or maintain the data/behavior model of the mobile computing device.2 illustrates exemplary logical components and information flows in a mobile computing device 102 of one aspect, the mobile computing device 102 of the one aspect being configured to determine that a particular mobile computing device behavior, software application, or process is malicious, Suspicious or benign. In the example shown in FIG. 2, mobile computing device 102 can include behavior observer unit 202, behavior analyzer unit 204, external context information unit 206, classifier unit 208, and executor unit 210. In one aspect, classifier unit 208 can be implemented as part of behavior analyzer unit 204. In one aspect, the behavior analyzer unit 204 can be configured to generate one or more classifier units 208, each of the one or more classifier units 208 can include one or more classifiers .Each of the units 202-210 can be implemented in software, hardware, or any combination thereof. In various aspects, units 202-210 can be within components of an operating system (eg, within a kernel, in kernel space, in user space, in a separate program or application, in a dedicated hardware buffer or processor, or It is implemented in any combination. In one aspect, one or more of the units 202-210 can be implemented as software instructions executing on one or more processors of the mobile computing device 102.The behavior observer unit 202 can be configured to equip or coordinate an application programming interface (API) at various layers/modules of the mobile computing device with an instrument and monitor/observe the mobile computing device at each layer/module via an API of the instrumentation Operations and events (eg, system events, state changes, etc.), collecting information about observed operations/events, intelligently filtering the collected information, generating one or more observations based on the filtered information, and The generated observation data is stored in memory (eg, a log file, etc.) and/or the generated observation data is sent (eg, via memory writes, function calls, etc.) to the behavior analyzer unit 204.The behavior observer unit 202 can collect information about library API calls, system call APIs, file system and networked subsystem operations, device (including sensor device) state changes, and other similar events in an application framework or runtime library, To monitor/observe mobile computing device operations and events. The behavior observer unit 202 can also monitor file system activity, which can include searching for file names, classification of file access (personal information or normal data files), creation or deletion of files (eg, type exe, zip, etc.) , file read / write / find operations, change file permissions, and so on.The behavior observer unit 202 can also monitor data network activity, which can include the type of connection, protocol, port number, server/client to which the device is connected, number of connections, capacity or frequency of communication, and the like. The behavioral observer unit 202 can monitor telephone network activity, which can include the type and number of outgoing, received, or intercepted calls or messages (eg, SMS, etc.) 
(eg, an extra paid call) The number) is monitored.The behavior observer unit 202 can also monitor the usage of system resources, which can include monitoring the number of forks, memory access operations, the number of open files, and the like. The behavior observer unit 202 can monitor the status of the mobile computing device, which can include monitoring various factors, such as whether the display is on or off, whether the device is locked or unlocked, and the amount of remaining battery , the state of the camera, etc. The behavior observer unit 202 can also monitor interprocess communication (IPC) by, for example, monitoring the intent of key services (browser, contract provider), the degree of inter-process communication, pop-ups, and the like.The behavior observer unit 202 can also monitor/observe driver statistics and/or status of one or more hardware components, which can include cameras, sensors, electronic displays, WiFi communication components, data controllers Memory controllers, system controllers, access ports, timers, peripherals, wireless communication components, external memory chips, voltage regulators, oscillators, phase-locked loops, peripheral bridges, and are used to support the processor and Other similar components of the client running on the mobile computing device.The behavior observer unit 202 can also monitor/observe one or more hardware counters representing the state or condition of the mobile computing device and/or the mobile computing device subsystem. The hardware counter may include a processor/kernel special register configured to store a count or state of hardware related activities or events occurring in the mobile computing device.The behavior observer unit 202 can also monitor/observe the actions or operations of the software application, software downloads from the application download server (eg,application store server), mobile computing device information used by the software application, call information, text messaging information (eg, sending SMS, blocking SMS, reading SMS, etc.), media messaging information (eg, receiving MMS), user account information, location information, camera information, accelerometer information, browser information, browser-based communication content Content based on voice communication, short-range wireless communication (eg, Bluetooth, WiFi, etc.), content based on text communication, content of recorded audio files, phone book or contact information, contact list, and the like.The behavior observer unit 202 can monitor/observe transmissions or communications of the mobile computing device, including communications, including voicemail (voicemail communication), device identifier (device identifier communication), user account information (user account communication) ), calendar information (calendar communication), location information (location communication), recorded audio information (recorded audio communication), accelerometer information (accelerometer communication), and the like.The behavior observer unit 202 can monitor/observe the use of compass information, mobile computing device settings, battery life, gyroscope information, pressure sensors, magnetic sensors, screen activity, and the like, as well as updates/changes thereto. The behavior observer unit 202 can monitor/observe notifications transmitted to and from software applications (application notifications), application updates, etc. transmitted therefrom. 
The behavior observer unit 202 can monitor/observe conditions or events regarding the download and/or installation of the second software application by the first software application. The behavior observer unit 202 can monitor/observe conditions or events regarding user authentication (e.g., input of a password, etc.).The behavior observer unit 202 can also monitor/observe conditions or events at multiple layers of the mobile computing device, including the application layer, the wireless layer, and the sensor layer. Application layer observations may include observing users via facial recognition software, observing social flows, observing notes entered by the user, observing events regarding the use of the bank passbook (PassBook)/Google Wallet/PayPal. Application layer observations may also include events related to the use of virtual private networks (VPNs), as well as on synchronization, voice search, voice control (eg, by speaking a word to lock/unlock a phone), language translators, unloading data It is used for calculation, video streaming, use of a camera without the activity of the user, observation of the use of the microphone without the active activity of the user, and the like.Wireless layer observations may include determining the presence, presence, or number of any one or more of the following: user interaction with the mobile computing device, dual/multi SIM card, internet prior to establishing a wireless communication link or transmitting information Broadcast, mobile tethering, unloading data for computing, device status communication, use as a game controller or home controller, in-vehicle communication, mobile computing device synchronization, and the like. Wireless layer observations may also include monitoring the use of wireless (WiFi, WiMax, Bluetooth, etc.) for positioning, point-to-point (p2p) communication, synchronization, vehicle-to-vehicle communication, and/or machine-to-machine (m2m). Wireless layer observations can also include monitoring network traffic usage, statistics, or profiles.Sensor layer observations may include monitoring magnetic sensors or other sensors to determine the use of the mobile computing device and/or the external environment. For example, the mobile computing device processor can be configured to determine whether the phone is in a sleeve (eg, via a magnetic sensor configured to sense magnetic within the sleeve) or in a user's pocket (eg, via a camera or light sensor) The amount of light reached). Detecting that the mobile computing device can be associated with identifying malicious behavior within the suite, for example, because of activities and functions related to activity usage that occur when the user is crammed by the mobile computing device (eg, taking a picture or taking a picture, sending a message, making a voice) Calling, recording sounds, etc.) may be a sign that an illegal process is executing on the device (for example, tracking or investigating a user).Other examples of sensor layer observations related to usage or external environment may include detecting near field communication (NFC), collecting information from a credit card scanner, a barcode scanner, or a mobile tag reader, detecting the presence of a USB charging power source, and detecting a keyboard. 
Or the auxiliary device has been coupled to the mobile computing device, detecting that the mobile computing device has been coupled to the computing device (eg, via USB, etc.), determining whether the LED, flash, flashlight, or light source has been modified or has been disabled (eg, malicious The emergency signaling application is disabled, the speaker or microphone has been turned on or powered on, the charging or powering event is detected, and the mobile computing device is being used as a game controller. Sensor layer observations may also include collecting information from medical or health care sensors or scanning from the user's body, collecting information from external sensors inserted into the USB/audio jack, and collecting information from tactile or tactile sensors (eg, Information about the thermal state of the mobile computing device, etc., is collected via a vibrator interface or the like.In order to reduce the number of factors that can be monitored by the management layer, in one aspect, the behavior observer unit 202 can perform a rough observation by monitoring/observing an initial set of behaviors or factors, the initial set of behaviors or factors being possible A small subset of all factors contributing to the degradation of mobile computing devices. In one aspect, the behavior watcher unit 202 can receive an initial set of behaviors and/or factors from the network server 116 and/or components in the cloud service or network 118. In one aspect, an initial set of behaviors/factors can be specified in a data/behavior model received from web server 116 or cloud service/network 118. In one aspect, the initial set of behaviors/factors can be specified in a reduced feature model (RFM).Behavior analyzer unit 204 and/or classifier unit 208 can receive observation data from behavior observer unit 202, compare the received information (ie, observation data) with context information received from external context information unit 206, and Subsystems, processes, and/or applications associated with the received observation data that contribute to (or may contribute to) degradation of the device over time, or that may otherwise cause problems on the device (eg, malicious behavior) Identify.In one aspect, the behavior analyzer unit 204 and/or the classifier unit 208 can include, for utilizing a limited set of information (ie, coarse observation data), to facilitate (or possibly contribute to) degradation of the device over time, Or intelligence that can be identified in other ways by the behavior, process, or program of the problem on the device. For example, behavior analyzer unit 204 can be configured to analyze information collected from various units (eg, behavior observer unit 202, external context information unit 206, etc.) (eg, in the form of observed data) to learn mobile computing devices The normal operational behavior, and the generation of one or more behavior vectors based on the results of the comparison. 
Behavior analyzer unit 204 can send the generated behavior vector to classifier unit 208 for further analysis.The classifier unit 208 can receive the behavior vector and compare it to one or more behavioral modules to determine whether a particular mobile computing device behavior, software application, or process is malicious, benign, or suspicious.When the classifier unit 208 determines that the behavior, software application, or process is malicious, the classifier unit 208 can notify the executor unit 210, which can perform various actions or operations to correct the determination to be malicious or Performance degraded mobile computing device behavior, and/or operations to trim, eliminate, isolate, or otherwise repair the identified problem.In a further aspect, the behavior analyzer unit 204 and/or the classifier unit 208 can determine whether the current configuration of the mobile computing device 102 is a channel configuration with reference to a malicious and channel configuration database received from a network server (eg, the web server 116). . In one aspect, classifier unit 208 (or behavior analyzer unit 204) can compare the current configuration of the mobile computing device to one or more channel configurations included in the malicious and channel configuration database received from the network server. To determine if the current configuration of mobile computing device 102 matches the channel configuration included in the malicious and channel configuration database. For example, behavior analyzer unit 204 can generate a behavior vector for a particular application currently running on the mobile computing device, and classifier unit 208 can compare the application's behavior vector to the channel configuration included in the malicious and channel configuration database. To determine if the current configuration of the application is causing malicious behavior on the mobile computing device.When the classifier unit 208 determines that the current configuration of the mobile computing device 102 is included in the malicious and channel configuration database received from the network server (ie, the current configuration of the mobile computing device 102 is causing malicious behavior), the classifier unit 208 can notify The executor unit 210, which can perform various actions or operations to prevent malicious behavior or other performance degradation activities on the mobile computing device before such malicious behavior occurs.3 illustrates exemplary components and information flows in a system 300 of an aspect that is configured to operate in conjunction with a cloud service/network 118 to intelligently and efficiently configure maliciously and cause mobile computing The configuration of the malicious behavior on device 102 is identified by network server 116. In the example shown in FIG. 3, network server 116 includes cloud unit 302, malicious and channel configuration database generator unit 304, and training data unit 306. The mobile computing device 102 includes a behavior observer unit 202, a classifier unit 208, and an executor unit 210. In one aspect, classifier unit 208 can be included in or as part of behavior analyzer unit 204 (shown in Figure 2). 
In one aspect, the model generator 304 unit can be a real-time online classifier.Cloud unit 302 can be configured to receive a large amount of information from cloud service/network 118 and generate a complete or robust data/behavior model that includes all or most of the features, data points, and/or factors that result in malicious behavior. . In one aspect, the information from the cloud service/network 118 can include configuration information and configuration history reported by a plurality of mobile computing devices that detect some form of malicious behavior. For example, multiple mobile computing devices may have reported malicious behavior for a particular configuration and may also report their configuration/status/instructions that result in detected malicious behavior.The malicious and channel configuration database generator 304 can generate a malicious and channel configuration database including a behavioral model based on the complete behavioral model generated in the cloud unit 302. In one aspect, generating a behavioral model can include generating one or more reduced feature models (RFMs) including subsets of features and data points, the subset of features and data points being included in the cloud unit 302 In the complete model. In one aspect, the malicious and channel configuration database generator 304 can generate a malicious and channel configuration database including an initial set of features (ie, an initially reduced feature model), the initial set of features including being determined to have the highest likelihood of causing the classifier Unit 208 can decisively determine whether a particular mobile computing device behaving is causing malicious behavior. The malicious and channel configuration database 304 can send the generated malicious and channel configuration database to the classifier unit 208.The behavior observer unit 202 can monitor/observe the mobile computing device behavior on the mobile computing device 102, generate observation data, and send the observation data to the classifier unit 208. The classifier unit 208 can perform real-time analytics operations, which can include comparing the behavioral model in the malicious and channel configuration database with the configuration information collected by the behavior observer unit 202 to determine if the current state of the mobile computing device 102 is causing malicious behavior. When classifier unit 208 determines that the current configuration of mobile computing device 102 matches the channel configuration included in the malicious and channel configuration database, classifier unit 208 can determine that mobile computing device behavior is causing malicious behavior. As discussed above with respect to FIG. 2, when the classifier unit 208 finds a match, the classifier unit 208 can alert the executor unit 210 to begin taking measures to avoid future malicious behavior.In another aspect, mobile computing device 102 can transmit to web server 116 the results of its operation and/or success rate associated with the application of the model. For example, classifier unit 208 may not find a match in the malicious and channel configuration database, but malicious behavior may still occur, indicating that mobile computing device 102 may report previously undetected malicious behavior to network server 116 (ie, in protection) The gap) is included in the next distribution of the malicious and channel configuration database. 
The web server 116 can generate training data (e.g., via the training data unit 306) based on the result/computation for use by the model generator 304. The model generator can generate an updated malicious and channel configuration database based on the training data and periodically send the updated malicious and channel configuration database to the mobile computing device 102 and other mobile computing devices.4 illustrates an aspect method 400 that can be implemented on a network server to send a malicious and channel configuration database identifying a malicious configuration and a channel configuration to a mobile computing device. Upon execution of method 400, the network server can function as a centralized hub that receives, compiles, and analyzes information from a plurality of mobile computing devices to identify configurations for indicating malicious behavior and to cause those malicious configurations Channel configuration. The server can also provide reports to a plurality of mobile computing devices that enable the mobile computing device to detect whether its current behavior (or the behavior of an application or process running on the mobile computing device) is trending toward malicious behavior.In block 402, the network server can receive configuration information and configuration history from a plurality of mobile computing devices. In one aspect, when the mobile computing device detects a malicious behavior (eg, being hacked, malware, or a virus, etc.), the mobile computing device can transmit to the network server a mobile computing device indicating that the mobile computing device found a malicious behavior The configured behavior vector or similar information. In addition, the mobile computing device can also send a step-by-step configuration history for describing the configuration of the occurrence until the malicious behavior is detected.In one aspect, the mobile computing device can maintain a list of configuration changes starting from an initial configuration (e.g., a launch configuration). For example, when the behavior vector of the mobile computing device is [0, 2, 1, 0, ..., 4], the mobile computing device can detect malware activity. The mobile computing device can send a behavior vector [0, 2, 1, 0, ..., 4] to the network server and send it to return the configuration of the mobile computing device from [0, 2, 1, 0, ..., 4] Information from earlier configurations (for example, initial configuration (eg, [0,0,0,0,...,0])). In another aspect, the mobile computing device can conserve resources by merely maintaining a shortened configuration history (i.e., the mobile computing device can only record a certain number of previous configurations that result in malicious configuration).In block 404, the network server can analyze the configuration information to identify the malicious configuration. In one aspect, the network server can identify the malicious configuration by matching identical or similar behaviors reported by several mobile computing devices for representing malicious behavior. In other aspects, the network server may identify the configuration as malicious only when a certain number or percentage of mobile computing devices identify the configuration as malicious. 
In other words, the network server may use the confidence threshold to flag behavior as malicious only if there is some consensus between the reporting mobile computing devices.In another aspect, the network server can receive configuration information from mobile computing devices that may not share the same type or configuration of the same capacity or configuration, and thus, the mobile computing device can have dissimilar configuration information/behavior vectors. In such an aspect, the network server can identify a malicious configuration by implementing various pattern matching algorithms or policies to detect a malicious configuration, or a specific feature that is commonly reported by a plurality of mobile computing devices for representation The characteristics of malicious behavior. In other words, the web server can compile thousands of reports from mobile computing devices of different models and determine the configuration characteristics that are always present when the mobile computing device detects malicious behavior. For example, the web server may determine that various types of mobile computing devices almost always report malicious behavior when the configuration of various types of mobile computing devices includes "screen off," "access contact information," and "transfer data." .In block 406, the network server can identify the channel configuration based on the identified malicious configuration. In one aspect, the channel configuration can be a "leading" configuration that results in a malicious configuration. In other words, the channel configuration may be in danger of developing a malicious configuration in some cases. For example, the channel configuration may be one or two configuration changes away from malicious configuration.In one aspect, after receiving a plurality of configuration histories, the network server can implement pattern recognition or state machine analysis (if the configuration history is presented as a transition between states) to discover one or more of the resulting malicious configurations Mode or configuration. In other words, the web server can use the configuration history from various mobile computing devices to "return" from the malicious configuration (i.e., along the "configuration path") to identify an earlier configuration or a configuration that has resulted in a malicious configuration. When the analysis determines that there is a significant probability that a subsequent configuration will be malicious, these older configurations can be identified as channel configurations as defined above. As discussed below with respect to Figure 7A, any given configuration or state may evolve or be translated into any number of subsequent configurations or states, depending on the instructions or operations performed next. Thus, if other instructions or operations are performed, the configuration prior to malicious configuration may not necessarily result in a malicious configuration. In order to deal with this problem, the server analysis can determine from the reported information that the given configuration directly leads to the frequency of malicious configuration, and only those configuration identifiers that cause malicious configuration frequently (ie, the frequency exceeds the threshold or probability) For "channel configuration". For example, a network server can classify a configuration as a channel configuration only if there is a chance that more than 10% of the configuration will result in malicious behavior. 
Server analysis can also identify instructions/operations that turn the channel configuration into a malicious configuration when executed.In one aspect, the web server may first identify the malicious configuration/status, one or more intermediate configurations, and the starting configuration as discussed above with reference to block 404. For example, the web server may first identify "transfer address book information when the screen is off" as a malicious configuration, and may "return" to find "access to address book information when the display is off" is frequently caused to "transfer when the screen is off" Channel configuration of address book information.In one aspect, in order to increase the effectiveness of using the channel configuration as an early warning sign for future malicious behavior, the network server may only classify the configuration as "when configured to be within a threshold "step" from the malicious configuration. Channel configuration." Server analysis can also identify subsequent channel configurations that directly lead to malicious behavior, as well as instructions/operations that, when executed, cause the mobile computing device to go through a series of steps from the identified channel configuration to malicious configuration.In block 408, the network server can generate a malicious and channel configuration database including the identified malicious configuration and the identified channel configuration. In one aspect, as discussed below with respect to Figures 5, 6, 8, 10, and 11, the malicious and channel configuration database can include information that can enable the mobile computing device to assess whether the mobile computing device is at risk of entering a malicious configuration.As described above, in optional block 410, the network server can calculate the probability of transitioning to a malicious configuration for each identified channel configuration. To calculate the probability, the web server can analyze the configuration history of thousands of mobile computing devices to determine how often a transition from channel configuration to malicious configuration occurs. For example, after analyzing reports from 100,000 mobile computing devices, the network server can determine that 70,000 mobile computing devices are transitioning from a certain channel configuration to a benign configuration (ie, 70% or 0.7), and 30,000 mobile computing devices are A certain channel configuration is turned into a malicious configuration (ie, 30% or 0.3). In one aspect, the web server can represent the information using a Markov chain analysis that includes each configuration (ie, state) and the probability of transitioning from one configuration to another, such as Shown in Figure 9 below. In optional block 412, the network server may also include the calculated probabilities in the malware and channel configuration database.Additionally, as described above, in optional block 414, the network server can identify malicious channel instructions or operations that, when executed in the identified channel configuration, result in malicious configuration. In this operation, the web server can analyze the behavior vector information and configuration history to identify code, parameters, or other instructions that cause the channel configuration to become maliciously configured. The web server can identify such instructions in the context of a particular channel configuration. 
Thus, the network server can determine instructions that when configured to make the channel malicious, thereby enabling the mobile computing device to better determine if it is at risk of developing a malicious configuration. In other words, the network server can determine that a mobile computing device in a particular channel configuration will become malicious after performing certain instructions referred to herein as "malicious channel instructions." It should be noted that when the mobile computing device is in a channel configuration, the malicious channel instructions may only generate malicious behavior or malicious configuration when they are executed. In this way, because these aspects enable the identification and reaction of instructions/operations that are safe in most cases and not related to malicious behavior, each aspect is different from the traditional malware detection system.In optional block 416, the network server may include the identified list of instructions that, when executed, result in the identified malicious configuration, in the malicious and channel configuration database. In a further aspect, the network server may also include a channel configuration and a malicious channel instruction or an association between the instructions that will cause the channel configuration to become malicious. As further described below with respect to FIG. 11, the mobile computing device can avoid such malicious behavior with a malicious and channel configuration database that includes a list of instructions that result in malicious behavior.In block 418, the web server can send the malicious and channel configuration database to a plurality of mobile computing devices. In various aspects, a mobile computing device can use a malicious and channel configuration database to use when preemptively identifying channel configurations that may result in malicious behavior. In one aspect, the malicious and channel configuration database can present the malicious configuration and channel configuration as a state, path, or behavior analyzer unit 204 and/or classifier that can be run on the mobile computing device. The behavior vector value used by unit 208.In an optional aspect, as the network server continuously receives behavior vector information and configuration history from the mobile computing device in block 402, the network server can perform the process in the loop. In such an aspect, the web server can scroll through the behavior vector information and configuration history. In other words, the web server can continuously receive information about malicious behavior from the mobile computing device as it occurs, and the web server can continuously analyze and identify malicious configurations as more behavior vector information and configuration history are received. And channel configuration. As such, the web server can repeat the process to continuously issue updated malicious and channel configuration databases to the mobile computing device based on the received new information.FIG. 5 illustrates an aspect method 500 for preemptive identification of a malicious configuration that can be implemented by a mobile computing device. 
In one aspect, the mobile computing device can utilize a malicious and channel configuration database that identifies the malicious configuration and the channel configuration to determine when the state of the mobile computing device (or the current configuration of the mobile computing device's application, process, or component) is causing malicious behavior. Based on this determination, the mobile computing device can implement various measures to avoid or prevent such malicious activities.In block 502, the mobile computing device can receive a malicious and channel configuration database. As discussed above in block 418 of method 400 described above with respect to FIG. 4, the web server may use crowd-sourced configuration information and/or configuration history to have some form of maliciousness that results from reporting by other mobile computing devices. Certain configurations of the risk of behavior are identified. The web server can compile information about the malicious configuration and channel configuration into a malicious and channel configuration database, and can send one or more malicious and channel configuration databases to the mobile computing device. In a further aspect, the mobile computing device can routinely receive the malicious and channel configuration database as part of a periodic service managed by the network service (eg, the mobile computing device can register with the network server to receive the malicious and channel configuration database) .In block 504, the mobile computing device can determine its current configuration. As described above with respect to FIG. 2, in one aspect, the behavior observer unit 202 can collect various types of information (ie, "behavior observation data") regarding the current operation/condition/state of the mobile computing device, as well as mobile computing. The configuration or state change that the device has experienced.In one aspect, the mobile computing device can reference the behavior vector to determine the current configuration of the mobile computing device. In another aspect, the behavior analyzer unit 204 can receive behavioral observation data from the behavior observer unit 202, and the behavior analyzer unit 204 can use the behavioral observation data to generate a behavior vector or another indication of the current configuration of the mobile computing device. For example, behavior analyzer unit 204 can determine that the current configuration of the mobile computing device indicates that data is being transmitted and that the screen is off. Behavioral analyzer unit 204 can perform behavioral observation data for finite state machine analysis, and by behavioral observation data, behavior analyzer unit 204 can determine the mobile computing device by tracking a series of state transitions to the current state (ie, current configuration). Current configuration. As discussed with reference to Figure 7A, the use of finite state machine analysis to determine the current configuration is described in further detail below.In determination block 506, the mobile computing device can determine whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database. In other words, the mobile computing device can determine if its current configuration is a channel configuration. 
In one aspect, the behavior analyzer unit 204 and/or the classifier unit 208 can initialize the current configuration of the mobile computing device (eg, the behavior vector representing the current configuration of the mobile computing device) with the malicious sum received from the network server. The channel configuration in the channel configuration database is compared to the malicious configuration to determine if the current configuration matches the channel configuration included in the malicious and channel configuration database.When the mobile computing device determines based on the malicious and channel configuration database that the current configuration does not result in a malicious configuration (ie, determination block 506 = "No"), the mobile computing device can continue to perform normally in block 510. As the mobile computing device can continue by determining the current configuration of the mobile computing device in block 504, the process can continue into the loop. Thus, in one aspect, the mobile computing device can continually check its current configuration to ensure that it is not at risk of future malicious acts.When the mobile computing device determines based on the malicious and channel configuration database that the current configuration is causing a malicious configuration (ie, determination block 506 = "No"), the mobile computing device can determine in an optional determination block 507 whether a precautionary measure is implemented to avoid Malicious configuration.In some cases, whenever a device or a current configuration of components on a device is determined to result in a malicious configuration, the mobile computing device may experience poor or unacceptable performance due to implementing preventative measures. Because there is no certainty that the current configuration will actually evolve into a malicious configuration, the mobile computing device can selectively implement preventive measures only if there is a certain risk of malicious behavior that exceeds a predefined threshold in the near future. In order to achieve an effective balance of security and performance. For example, as further described below with respect to Figure 8, the mobile computing device can implement preventive measures only if the current configuration has a significant likelihood of causing a malicious configuration. Similarly, in another example, as further described below with respect to Figure 10, the mobile communication device can only implement preventive measures when the calculated risk of entering a malicious configuration from the current configuration exceeds a predefined probability/risk threshold.In a further aspect, the predefined threshold may be set based on user input received from a user interface component on the mobile computing device, and the predefined threshold may reflect the user's security preference for performance. For example, users at the intelligence office may require higher security (ie, mobile computing devices determine more preventive measures to avoid future malicious configurations) to ensure that all malicious applications are captured, and such users can Configure the mobile computing device to use a low threshold so that preventive action is taken most or all of the time. 
In another example, another user may decide to stop each malicious behavior from being unworthy of performance impact, and the mobile computing device can be configured to implement preventive measures only when the risk of entering a malicious configuration exceeds a high threshold.When the mobile computing device determines that preventive measures are not implemented to avoid malicious configuration (ie, optional determination block 507 = "No"), the mobile computing device may continue to perform normally in block 510. As the mobile computing device can continue by determining the current configuration of the mobile computing device in block 504, the process can continue into the loop. Thus, in one aspect, a mobile computing device can continually check its current configuration to ensure that it is not at risk of future malicious behavior.When the mobile computing device determines to perform preventative measures to avoid malicious configuration (ie, optional determination block 507 = "Yes"), the device may implement preventive measures in block 508 to avoid malicious configuration. In one aspect, the mobile computing device can perform various operations to avoid future malicious configurations, such as determining applications/processes associated with future malicious configurations, and terminating, isolating, and/or eliminating those applications/processes. The implementation of preventive measures is described in further detail below with reference to FIG.After the preventive measures are implemented, the mobile computing device can continue to perform normally in block 510. As the analysis engine can continue by determining the current configuration of the mobile computing device in block 504, the process can continue to enter the loop.Although the above description relates to determining whether the current configuration of the mobile computing device is causing a malicious configuration, in a further aspect, the mobile computing device or a component running on the mobile computing device can instead determine the separate hardware running on the mobile computing device. Or whether the current configuration of the software component is causing a malicious configuration. For example, the mobile computing device can determine that the current configuration of the application is a channel configuration that results in a malicious configuration. In these alternative aspects, the malicious and channel configuration database received from the network server can include a determination as to whether the individual application or hardware component is in a malicious behavior in the near future with respect to the mobile computing device (or components running on the mobile computing device). Information about the malicious configuration and channel configuration required in the risk.6 illustrates an aspect method 600 that can be implemented on a mobile computing device for implementing preventive measures to avoid malicious configuration in the near future when the mobile computing device is currently in a channel configuration. 
In one aspect, one or more processes associated with the current channel configuration may be slowed or suspended while the mobile computing device determines that it is currently in the channel configuration to provide the mobile computing device with sufficient time to check for occurrences on the mobile Calculate other behaviors on the device to assess whether it is actually at risk of malicious behavior in the near future.The operations of method 600 implement aspects of the operations of block 508 of method 500 described above with respect to FIG. Thus, after determining that the mobile computing device is currently in a channel configuration that causes malicious behavior (ie, determination block 506 = "Yes"), the mobile computing device can begin performing method 600.In block 602, the mobile computing device can identify one or more processes associated with the current configuration. In one aspect, one or more processes can be applications, programs, scripts, hardware components, or other components that run on a mobile computing device. For example, the current configuration may include: "camera on" and "screen off", and the mobile computing device can associate these characteristics with a camera application currently running on the mobile computing device.In another aspect, the mobile computing device can receive information that is included in the malicious and channel configuration database identifying one or more processes associated with the current channel configuration of the mobile computing device, and the mobile computing device can This information is used to identify one or more processes related to the current configuration. For example, after the mobile computing device determines that it is in the channel configuration, the malicious and channel configuration database can be referenced to discover that the social media application running on the mobile computing device is associated with the channel configuration, and thus may be directing the mobile computing device to The cause of malicious behavior. In a further aspect, the malicious and channel configuration database can include an identification of a process or component that is linked to more than one of the channel configurations.In block 604, the mobile computing device can slow down execution of one or more processes. In one aspect, mitigating one or more processes may be a preliminary preventive measure to prevent the development of ongoing malicious behavior, which may initially find that the current configuration is a channel configuration (as determined by the method 500 described above with respect to FIG. 5) Shortly after the description of 506). In another aspect, the mobile computing device can instead stop the execution of one or more processes altogether.While slowing or suspending execution of one or more processes can temporarily degrade the functionality and performance of the mobile computing device, the potential benefits of avoiding malicious behavior in the near future may exceed such a cost. Because the mobile computing device can reasonably interfere with the operation of the process only after it is determined that the mobile computing device is currently in the channel configuration, various aspects also reduce these costs. 
Thus, by taking action only when there is a risk of detected future malicious behavior (ie, when the mobile computing device is currently in the channel configuration), the mobile computing device can protect itself while running on the mobile computing device One or more processes or components have minimal impact.Returning to Figure 6, in block 606, the mobile computing device can check for other behaviors currently occurring on the mobile computing device. In one aspect, when one or more processes are slowed down or stopped, the mobile computing device can investigate other ongoing activities in an attempt to better predict whether the current configuration is trending toward malicious behavior. For example, a mobile computing device can scan for applications or hardware components associated with one or more processes or used by one or more processes in an attempt to discover other unusual or suspicious behavior. In another aspect, checking for other behaviors can include comprehensive or thorough scanning and analysis, and slowing down one or more processes can enable the mobile computing device to complete these comprehensive scans before the current configuration evolves into a malicious configuration.In optional determination block 608, the mobile computing device can determine if more checks are needed before making a decisive determination that the current configuration is becoming malicious. When the mobile computing device determines that more checks are needed (i.e., optional determination block 608 = "Yes"), the mobile computing device can continue to check other behaviors currently occurring on the mobile computing device in optional block 610. The mobile computing device can continue to check other behaviors until it is reasonable to make a decision as to whether the current configuration is becoming malicious, and/or what may cause the mobile computing device to trend toward malicious behavior.Thus, when the mobile computing device determines that more checks are not required (ie, optional determination block 608 = "No"), the mobile computing device can determine in the determination block 612 whether there is a significant amount based on a check of other behaviors. The current configuration is causing the possibility of malicious configuration. In an example where the one or more processes are taking a picture while the screen is off (ie, the current configuration is "camera on" and "screen off"), the mobile computing device can conclude that the current configuration has not entered a malicious state (eg, "Screen Off", "Camera On", and "Transfer Camera Data On") because other activity indications on the mobile computing device are required to turn on the screen in response to user input. In this example, it is expected that the current configuration transitions to "camera on" and "screen on", which may not be malicious configurations.When the mobile computing device determines that there is no likelihood that a significant current configuration is causing a malicious configuration (ie, determination block 612 = "No"), the mobile computing device may return one or more processes to normal operation in block 618. . In one aspect, one or more processes can continue to run at a normal rate.When the mobile computing device determines that the current configuration is becoming malicious (i.e., determination block 612 = "Yes"), the mobile computing device can interrupt one or more processes in optional block 614. 
In one aspect, interrupting one or more processes can include completely stopping execution of one or more processes or terminating one or more processes.In block 616, the mobile computing device can implement preventive measures for one or more processes. In one aspect, a mobile computing device can implement various techniques to avoid malicious behavior, including isolating one or more processes from interacting with other applications or components on the mobile computing device, and lifting from an initial, benign configuration. Set/restart one or more processes. Other techniques may include restoring one or more processes to an earlier processing point that is known to be benign (e.g., restoring the application to an earlier version). The mobile computing device can also return one or more processes to normal operation in block 618.The mobile computing device can continue by determining the current configuration in block 504 of method 500 described above with respect to FIG.Figure 7A shows a state diagram of a transition between configuration and configuration on a mobile computing device of one aspect, represented as a finite state machine ("FSM") 700.In one aspect, FSM 700 can include a state (ie, configuration) for each of the possible configurations ("screen on", "screen off", "send data", etc.) for the mobile computing device. The FSM 700 can also utilize a transition from one state/configuration to another to indicate how the configuration of the mobile computing device changes over time. For example, the FSM 700 can indicate that a particular configuration A 702 (eg, "screen on" and "sound off") can transition to another configuration B 704 (eg, "screen on" and "sound on").In one aspect, the network server can generate the FSM 700 based on configuration information and/or configuration history obtained from a plurality of mobile devices. For example, the network server can receive configuration information from a plurality of mobile computing devices and can compile the information to generate an FSM representing the configuration/status of the mobile computing device and the transitions between those configurations. In an example, the web server can receive configuration histories from thousands of mobile computing devices and can generate FSMs representing configuration transitions from an initial configuration (e.g., "power on" state) to various intermediate configurations and end configurations. In another aspect, because different mobile computing devices can have different features or functions, the network server can generate a dedicated FSM for mobile computing devices that share similar characteristics (e.g., the same model, manufacturer, etc.).In another aspect, the network server can classify each configuration/state in the FSM 700 into a benign configuration, a suspicious configuration, or a malicious configuration. In one aspect, the network server can perform behavior analysis based on configuration information received from a plurality of mobile computing devices. For example, the web server can identify configurations that are always linked to reports of malicious behavior (ie, malicious configurations), configurations that require more checks to determine if they are malicious (ie, suspicious configurations), and configurations that do not indicate malicious behavior ( That is, benign configuration). 
Thus, in a further aspect, the web server can generate a FSM that describes the configuration and transitions between those configurations, as well as the categories of each configuration.Moreover, in another aspect, as described above with respect to FIG. 4, after categorizing the configuration in the FSM 700, the network server can "return" from the malicious configuration to identify configurations having substantial risks that result in those malicious configurations. (ie, channel configuration). For example, the network server can use configuration information and configuration history received from a plurality of mobile devices to determine a particular configuration that always indicates the beginning of a trend toward malicious behavior.In another aspect, as described above with respect to FIG. 4, after the FSM 700 is generated, the configuration in the FSM 700 is classified, and the channel configuration in the FSM 700 that causes malicious configuration is identified, the network server can configure data, such as with malicious and channel. To send this information about the FSM 700 to the mobile computing device.In one aspect, the mobile computing device can utilize the FSM 700 to track its configuration in real time. For example, during normal operation, the mobile computing device can track configuration transitions in the FSM 700 to maintain contact with its current configuration. In another aspect, the mobile computing device can determine if its current configuration is a channel configuration (e.g., as indicated in the malicious and channel configuration database received from the network server). When the mobile computing device determines that it is currently in the channel configuration, the mobile computing device can analyze the FSM 700 to determine the potential configuration to which the mobile computing device may transition in the near future. In other words, the mobile computing device can perform finite state machine analysis and "return" from the current configuration of the mobile computing device to determine the configuration (i.e., potential future configuration) that may occur next.In one aspect, the potential future configuration may be a configuration that the mobile computing device may reach after a certain number of transitions. For example, the current configuration of the mobile computing device can be "screen off and sound off." Thus, a mobile computing device can have two potential future configurations, such as "screen on and sound off" and "screen off and sound on", which may be possible in one transition/configuration change. Up. In a further example, the mobile computing device can transition to another potential future configuration (eg, "screen on and sound on") that is reachable in two transitions.In one aspect, after identifying potential future configurations, the mobile computing device can determine its category and can take the necessary actions based on those potential future configured categories to prevent or avoid entering malicious configurations in the future. In an example, the mobile computing device can currently be in configuration A 702, which is benign and not a channel configuration (ie, given the current configuration, there is no risk of identifying future malicious behavior). In this event, the mobile computing device can continue to operate normally. 
In other words, without having to spend considerable computing resources, the mobile computing device can quickly check in real time (i.e., during actual, normal operation) whether its current state is causing malicious behavior.In a continuation of the above example, the mobile computing device may experience one or more configuration changes over time that may cause the mobile computing device to transition from configuration A 702 to configuration B 704. Upon entering configuration B704, the mobile computing device can determine that its current configuration is a channel configuration. Thus, the mobile computing device can determine that there is a risk of transitioning to a malicious configuration in the near future. At this point, the mobile computing device can slow down or stop one or more processes associated with configuration B 704 to allow additional time to assess the likelihood that malicious behavior will occur in the near future. Thus, by awaiting the initiation of preventive measures until the risk of identified future malicious behavior exists, the mobile computing device can avoid unnecessary calculations.In one aspect, as shown in the table 725 described with reference to FIG. 7B, after entering the channel configuration, the mobile computing device can determine whether in the near future based on the category of its current configuration and the category of potential future configurations. There is a possibility of considerable malicious behavior. In another aspect, further described below with respect to FIG. 9, the mobile computing device can utilize information received from the network server to determine the probability of malicious behavior occurring in the near future (eg, a value from 0.0 to 1.0), and when malicious When the probability of a behavior exceeds a certain threshold probability (eg, 25%), the device can determine the likelihood of a significant malicious behavior.In an example, after the mobile computing device determines that configuration B 704 is a channel configuration, the mobile computing device can determine that configuration B 704 is benign and the potential future configuration achievable in one step (ie, configuration C 706 and configuration D 708 ) are Benign and malicious. In one aspect, the mobile computing device can refer to table 725 and infer the likelihood of a substantial future malicious behavior based on the table lookup, as configuration B 704 directly results in a malicious configuration.In another example, when the current configuration of the mobile computing device is configuration E710, the mobile computing device can determine that there is a very small likelihood of ultimately causing malicious behavior because the only potential future configuration (ie, configuration F 712) is Benign. However, if the mobile computing device transitions to configuration F712, the mobile computing device can determine that there is a significant risk of future malicious behavior because the potential future configuration is malicious.In another example where the current configuration of the mobile computing device is configuration G714, the mobile computing device can determine that there is a significant likelihood that the current configuration is causing malicious behavior because configuration G714 is suspicious. 
In one aspect, when more information is needed, the mobile computing device (or network server) can classify the configuration as suspicious to determine if the configuration is benign or malicious.In one aspect, after determining the likelihood of future malicious behavior, the mobile computing device can determine whether to implement various preventive measures to avoid future malicious behavior based on the determined likelihood. The process of determining the appropriate measure to take based on the determined likelihood is described below with reference to FIG.8 illustrates an aspect method 800 that can be implemented on a mobile computing device for implementing preventive measures when there is a potential for a relatively large current configuration to cause malicious behavior. In one aspect, upon determining that the mobile computing device is currently in a channel configuration (ie, there is a risk of malicious behavior in the near future), the mobile computing device can be based on the current configuration of the mobile computing device and the mobile computing device may be from its current The configuration to which the configuration transitions determines whether there is a considerable likelihood of experiencing malicious behavior in the near future.The operations of method 800 implement aspects of the operations of block 508 of method 500 described above with respect to FIG. Thus, as described above with respect to FIG. 5, after the mobile computing device determines that the current configuration is a channel configuration (decision block 506 = "Yes"), it may begin performing method 800.As described above with respect to FIG. 6, in block 602, the mobile computing device can identify one or more processes associated with the current configuration. The mobile computing device can also slow down execution of one or more processes in block 604. In one aspect, by slowing down the execution of one or more processes, the mobile computing device can have additional time to determine if preventive measures are necessary to avoid malicious behavior in the near future.In block 802, the mobile computing device can determine the currently configured category. In one aspect, the mobile computing device can determine whether the current configuration is benign, malicious, or suspicious. In one aspect, as described with reference to Figures 7A-7B (eg, as part of an FSM describing the configuration of a mobile computing device and transitions between those configurations), the web server may have determined various configurations of the mobile computing device The category, and may have been sent as part of a malicious and channel configuration database received by the mobile computing device.In another aspect, as discussed above with respect to FIG. 2, the behavior analyzer unit 204 and/or the classifier unit 208 running on the mobile computing device can instead locally determine the current configured category of the mobile computing device, rather than The category of the current configuration of the mobile computing device that is part of the malicious and channel configuration database is received from the web server. For example, behavior analyzer unit 204 can receive current behavioral observation data from a behavior observer unit and can generate a behavior vector representative of the current configuration of the mobile computing device. 
The classifier unit 208 can then determine if the generated behavior vector indicates that the current configuration of the mobile computing device is benign, suspicious, or malicious.In optional determination block 804, the mobile computing device can determine if the current configuration is malicious. When the mobile computing device determines that the current configuration is malicious (ie, optional determination block 804 = "Yes"), the mobile computing device can implement remedial action on one or more processes associated with the current configuration to The current malicious configuration is eliminated in box 808. In one aspect, when the current configuration of the mobile computing device is malicious, it may not be possible to prevent malicious behavior, and thus the mobile computing device may need to implement remedial measures to stop the ongoing malicious behavior. For example, a mobile computing device can scan for and remove malware, viruses, corrupted files, and the like. The mobile computing device can also return one or more processes to normal operation in block 618. The mobile computing device can also continue execution by determining the current configuration in block 504 of method 500 described above with respect to FIG.When the mobile computing device determines that the current configuration is not malicious (ie, optional determination block 804 = "No"), the mobile computing device can determine the category of potential future configurations in block 806. In one aspect, the mobile computing device can determine the category of potential future configurations as discussed above with reference to determining the currently configured category in block 802. For example, a mobile computing device may have received a category of potential future configurations that are part of a malicious and channel configuration database sent from a network server. In another example, a mobile computing device (or one or more components running on a mobile computing device) can generate behavior vectors based on potential future configurations and classify those behavior vectors.In block 810, the mobile computing device can determine the likelihood that the current configuration will result in a malicious configuration based on the current configuration and the category of potential future configurations. In one aspect, the mobile computing device can refer to the lookup table of the image table 725 described above with respect to Figure 7B to determine the likelihood that the current configuration would result in malicious behavior.In determination block 812, the mobile computing device can determine if the current configuration has a substantial likelihood of causing a malicious configuration. For example, when the current configuration and all potential future configurations are benign, the mobile computing device can determine that there is a very small risk of malicious behavior in the near future. In another example, even if the current configuration is benign, the mobile computing device can determine that there is a significant risk of malicious behavior when one or more of the potential future configurations are malicious.When the mobile computing device determines that there is a very small likelihood of causing a malicious configuration (ie, determination block 812 = "No"), the mobile computing device may return one or more processes to normal operation in block 618. 
The mobile computing device can continue execution by determining the current configuration in block 504 of method 500 described above with respect to FIG.When the mobile computing device determines that there is a likelihood that a substantial current configuration will result in malicious behavior (ie, determination block 812 = "Yes"), the mobile computing device may optionally discontinue one or more in optional block 614 Execution of the process. In one aspect, the mobile computing device can stop execution of one or more processes to provide the mobile computing device with sufficient time to avoid any future malicious behavior.In block 616, the mobile computing device can implement preventive measures for one or more processes as described in block 616 of method 600 illustrated in Figure 6 above. In one aspect, implementing preventative measures can include adjusting the performance or configuration of one or more processes to avoid predicted malicious behavior. For example, one or more processes can be restored to an earlier, known, benign configuration.In block 618, the computing device can return one or more processes to normal operations. In one aspect (not shown), the mobile computing device can resume normal execution of one or more processes when the mobile computing device determines the likelihood that there will be no significant malicious behavior in the near future. The mobile computing device can continue execution by determining the current configuration in block 504 of method 500 described above with respect to FIG.Figure 9 illustrates a Markov chain analysis for use with predicting malicious behavior on a mobile computing device. In one aspect, like the FSM analysis described above with respect to Figure 7A, Markov chain analysis can describe various configurations of mobile computing devices and transitions between those configurations. In addition, Markov chain analysis can also include the probability of transitioning from one configuration to the next.In one aspect, the network server can generate the FSM 900 based on configuration information/configuration history received from a plurality of mobile computing devices as described above with respect to the FSM 700 illustrated in Figure 7A. Thus, the FSM 900 can include various configurations/states and transitions between those configurations. The network server may also determine the category (i.e., benign, malicious, or suspicious) for each configuration in the FSM 900 and determine one or more configurations (i.e., channel configurations) that result in malicious behavior.Moreover, in another aspect, the web server can calculate the probability that the channel configuration will transition directly to a particular potential future configuration (e.g., from "screen off" to "screen on"). For example, the network server can receive reports from a plurality of mobile computing devices and can determine the number of times each channel configuration among the total number of reported transitions transitions to a potential future configuration. 
For example, as shown in Figure 9, the web server may have calculated that the mobile computing device in configuration B904 has 10% of the time transitioning to configuration C906 and 90% of the time transitioning to configuration D908.In a further aspect, after determining that the current configuration is a channel configuration, the mobile computing device can determine the category of potential future configurations and the probability that the current configuration will transition to each of the potential future configurations. If the probability of a direct transition to a malicious potential future configuration exceeds a certain threshold, the mobile computing device can implement preventive measures to avoid predicted malicious behavior. For example, a mobile computing device in the configuration E910 may have a 0% chance of transitioning directly to a malicious configuration. In this case, the mobile computing device may not implement preventative measures because the probability of a direct transition to a malicious configuration is below a threshold probability. However, a mobile computing device in configuration F912 may have a 100% (0.7 + 0.3 = 1.0 = 100%) chance to transition directly to a malicious state, and thus, the mobile computing device can implement preventive measures to avoid very high current configurations. The probability that it will lead to malicious behavior.In another aspect, as described above with respect to optional block 410 in Figure 4, the network server can calculate the probability of a final transition to a malicious configuration for each channel configuration. In an example, the mobile computing device in configuration H916 can eventually transition to two malicious configurations (i.e., configuration D908 and configuration I918). The probability of transitioning from configuration H916 to configuration D is 9% (10% probability to transition to configuration B904, and then 90% probability to transition from configuration B904 to configuration D908). The probability of transitioning from configuration H916 to configuration I918 may be 2.5% (a probability of 5% transitioning from configuration H916 to configuration G914, and a 50% probability of transitioning from configuration G914 to configuration I918). Therefore, a mobile computing device in configuration H can have a total 11.5% (9% + 2.5% = 11.5%) chance of eventually causing malicious configuration.The network server can transmit probability information to the mobile computing device that enables the mobile computing device to determine the probability that the channel configuration will ultimately result in malicious behavior. In another aspect, the mobile computing device can receive a probability of transitioning from a channel configuration to a potential future configuration (ie, a probability of transitioning directly from the current configuration to the next configuration), and can calculate locally that the current channel configuration will eventually The probability of causing malicious behavior.10 illustrates an aspect method 1000 that can be implemented on a mobile computing device to implement preventive measures to avoid malicious behavior based on the probability that a current configuration would result in a malicious configuration. The operation of method 1000 implements aspects of the operation of method 500 described above with respect to FIG. 
In one aspect, after determining that the mobile computing device is currently in the channel configuration, the mobile computing device can implement preventive measures to avoid malicious behavior when the probability of transitioning from the current configuration of the mobile computing device to the malicious configuration exceeds a certain threshold. .In block 1002, the mobile computing device can receive a malicious and channel configuration database including configuration transition probabilities from the network server. As discussed with respect to Figure 9, the configuration transition probability may describe the likelihood that the mobile computing device will transition from one configuration to another, including the probability that the channel configuration will transition to a malicious configuration. In one aspect, the configuration transition probability can describe the probability of transitioning directly from a channel configuration to a malicious configuration (i.e., the probability of entering a malicious configuration in a single transition). In another aspect, the configuration transition probability may indicate a total probability of transitioning from a channel configuration to a malicious configuration (i.e., the probability of entering a malicious configuration in one or more transitions).As discussed with respect to block 504 of method 500 described above with respect to FIG. 5, in block 504, the mobile computing device can determine its current configuration. For example, the behavior analyzer unit can generate a behavior vector for describing the current configuration of the mobile computing device based on the behavioral observation data.In determination block 506, the mobile computing device can determine whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database. In other words, the mobile computing device can determine whether the mobile computing device is a channel configuration by comparing the current configuration of the mobile computing device to a list of channel configurations included in the malicious and channel configuration database received from the network server. When the mobile computing device determines that the current configuration is not a channel configuration (ie, determination block 506 = "No"), the process can continue into the loop until the mobile computing device determines that its current configuration is a channel configuration (ie, there is future malicious behavior) risks of).When the mobile computing device determines that the current configuration is a channel configuration (i.e., determination block 506 = "Yes"), the mobile computing device can identify one or more processes associated with the current configuration in block 602. The mobile computing device can also slow down execution of one or more processes in block 604. In one aspect, the mobile computing device can identify and mitigate execution of one or more processes in a manner similar to that described above with respect to FIG.In an optional aspect, the mobile computing device can determine in an optional determination block 804 whether the current configuration is malicious. For example, when the mobile computing device receives a malicious and channel configuration database from the network server, the current configuration of the mobile computing device may already be maliciously configured. 
When the mobile computing device determines that the current configuration is malicious (ie, optional determination block 804 = "Yes"), the mobile computing device can implement remedial action on one or more processes in optional block 808 to eliminate the current Malicious configuration. For example, mobile computing devices can employ traditional methods of scanning and deleting malware. The mobile computing device can return one or more processes to normal operation in block 618. As the mobile computing device can continue to recognize, the process can continue into the loop as the mobile computing device enters the channel configuration beginning in block 504.When the mobile computing device determines that the current configuration is not malicious (ie, optional determination block 804 = "No"), the mobile computing device can determine, in block 1004, the probability that the current configuration is causing the malicious configuration based on the current configuration and the configuration transition probability. . In one aspect, the mobile computing device can reference the configuration transition probability received from the network server and determine the probability that the mobile computing device will transition directly from its current configuration to a malicious configuration.In another aspect, the mobile computing device can track changes from the current configuration to one or more malicious configurations. The mobile computing device can utilize the configuration transition probability received from the network server to calculate the probability that the mobile computing device will transition from a current configuration to a malicious configuration after one or more transitions. For example, a mobile computing device can have 75% chance of transitioning from its current configuration to an intermediate configuration, and 50% of the opportunity to transition from an intermediate configuration to a malicious configuration. Therefore, there can be an opportunity for 37.5% (75%*50%=37.5%) of the current configuration that will eventually lead to malicious configuration.In determination block 1006, the mobile computing device can determine if the probability that the current configuration is causing the malicious configuration exceeds a risk threshold. In one aspect, the risk threshold may indicate that the cost of implementing preventive measures there may exceed the point of avoiding the benefit of malicious behavior. For example, when there is only a 5% chance that the current configuration will actually evolve into a malicious configuration, the cost of restoring one or more processes associated with the current configuration to the previous state or version may not be cost effective. The cost of implementing preventative measures can greatly benefit the overall performance of mobile computing devices when there is an opportunity for 95% of the current configuration to result in malicious behavior. As mentioned above with reference to Figure 5, the risk threshold can be set based on user input received from the user interface device, thereby enabling the user to specify a desired level of security.When the mobile computing device determines that the current configuration results in a malicious configuration that does not exceed the risk threshold (ie, determination block 1006 = "No"), the mobile computing device can place one or more processes in block 618 as described above with respect to FIG. Return to normal operation. 
As the mobile device can continue by determining the current configuration in block 504, the process can continue to enter the loop.When the mobile computing device determines that the current configuration results in a malicious configuration that exceeds the risk threshold (ie, determination block 1006 = "Yes"), the mobile computing device can optionally interrupt execution of one or more processes in optional block 614 . The mobile computing device can also implement preventative measures for one or more processes in block 616 as described above with reference to FIG. The mobile computing device can also return one or more processes to normal operation in block 618.As the mobile computing device can continue to recognize, the process can continue into the loop as the mobile computing device enters the channel configuration that begins in block 504.11 illustrates an aspect method 1100 that can be implemented on a mobile computing device to prevent execution of an instruction determined to result in malicious behavior. The operation of method 1100 implements aspects of the operation of method 500 described above with respect to FIG.In block 1102, the mobile computing device can receive a malicious and channel configuration database including a list of malicious channel instructions from the network server. As described above with respect to optional block 414 in FIG. 4, the network server can compile a list of instructions associated with causing malicious behavior based on configuration information and configuration history received from the plurality of mobile devices. For example, a mobile computing device can report its configuration at the moment it discovers a malicious configuration, as well as a list of instructions that the mobile computing device performs that result in malicious behavior. Thus, in one aspect, as described below, the web server can generate a malicious and channel configuration database that includes these potential malicious channel instructions, thereby enabling the mobile computing device to monitor and prevent execution of such instructions.In block 504, the mobile computing device can determine the current configuration as discussed with reference to block 504 of method 500 described above with respect to FIG. For example, the behavior analyzer unit can generate a behavior vector for describing the current configuration of the mobile computing device based on the behavioral observation data.In determination block 506, the mobile computing device can determine whether the current configuration is causing a malicious configuration based on the malicious and channel configuration database. In other words, the mobile computing device can determine whether the mobile computing device is a channel configuration by comparing the current configuration of the mobile computing device to a list of channel configurations included in the malicious and channel configuration database received from the network server. 
When the mobile computing device determines that the current configuration is not a channel configuration (ie, determination block 506 = "No"), the process can continue into the loop until the mobile computing device determines that its current configuration is a channel configuration (ie, there is a future malicious configuration) risks of).When the mobile computing device determines that the current configuration is a channel configuration (i.e., determination block 506 = "Yes"), the mobile computing device can identify one or more processes associated with the current configuration in block 602. The mobile computing device can also slow down execution of one or more processes in block 604. In one aspect, the mobile computing device can identify and mitigate execution of one or more processes in a manner similar to that described above with respect to FIG.In block 1104, the mobile computing device can determine one or more instructions to be executed by one or more processes. In one aspect, the mobile computing device can preview the instructions that one or more processes are about to execute and compare those instructions to a list of malicious channel instructions included in the malicious and channel configuration database received from the network server.In determination block 1106, the mobile computing device can determine if one or more instructions to be executed are in the list of malicious channel instructions. For example, the mobile computing device can discover the name of a function call that one or more processes are about to invoke, and the mobile computing device checks the malicious and channel configuration database to determine if those function call names are included in the list of malicious channel instructions.When the mobile computing device determines that one or more instructions to be executed are not in the malicious channel instruction list (ie, determination block 1106 = "No"), the mobile computing device can place one in block 618 as described above with reference to FIG. Or multiple processes return to normal operation. As the mobile computing device can continue by determining the current configuration in block 504, the process can continue into the loop.When the mobile computing device determines that one or more instructions to be executed are in the malicious channel instruction list (ie, determination block 1106 = "No"), the mobile computing device can optionally interrupt one or both in optional block 614. Execution of multiple processes.In block 1108, the mobile computing device can prevent execution of one or more instructions to be executed. In one aspect, the mobile computing device can reset/restart one or more processes or revert one or more processes to an earlier, benign configuration. In another aspect, the mobile computing device can only prevent one or more processes from executing one or more instructions that are determined to be malicious, and the mobile computing device can additionally allow one or more processes to be normal in block 618. Operation.The mobile computing device can perform an aspect process that returns to the continuous loop of block 504 such that the mobile computing device continuously monitors whether it has entered the channel configuration.Various aspects may be implemented with any of a wide variety of mobile computing devices, examples of which are shown in FIG. Mobile computing device 1200 can include a processor 1202 coupled to internal memory 1206. 
Processor 1202 may be one or more multi-core integrated circuits designated for general or specific processing tasks. Internal memory 1206 can be either volatile memory or non-volatile memory, and can also be secure and/or encrypted memory, or non-secure and/or unencrypted memory, or any combination thereof. Processor 1202 can also be coupled to touch screen panel 1212, such as a resistive touch screen, a capacitive sensing touch screen, an infrared sensing touch screen, and the like. In addition, the display of mobile computing device 1200 need not have touch screen capabilities.Mobile computing device 1200 can have one or more wireless signal transceivers 1208 (e.g.,Wi-Fi, RF radio) and antenna 1210 coupled to one another and/or coupled to processor 1202 for transmitting and receiving communications. Transceiver 1208 and antenna 1210 can be used with the circuits described above to implement various wireless transmission protocol stacks and interfaces. Mobile computing device 1200 can include a cellular network wireless modem chip 1216 that communicates via a cellular network and is coupled to a processor.Mobile computing device 1200 can include a peripheral device connection interface 1218 that is coupled to processor 1202. Peripheral device connection interface 1218 may be specifically configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or dedicated, such as USB, Firewire, Thunderbolt, or PCIe. . Peripheral device connection interface 1218 may also be coupled to a similarly configured peripheral device connection port (not shown).Mobile computing device 1200 can also include a speaker 1214 for providing audio output. The mobile computing device 1200 can also include a housing 1220 constructed of a plastic article, metal, or combination of materials for housing all or some of the components discussed herein. Mobile computing device 1200 can include a power source 1222 coupled to processor 1202, such as a disposable or rechargeable battery. A rechargeable battery can also be coupled to the peripheral device connection port to receive charging current from a power source external to the mobile computing device 1200. Mobile computing device 1200 can also include a physical button 1224 for receiving user input. Mobile computing device 1200 can also include a power button 1226 for turning mobile computing device 1200 on or off.The various aspects described above can also be implemented in a wide variety of mobile computing devices (e.g., the laptop 1300 shown in Figure 13). Many laptop computers include a touchpad touch surface 1317 that acts as a pointing device for a computer, and thus, can be similar to those implemented on a mobile computing device equipped with a touch screen display and described above, receiving drag, scroll, and Tap gestures. Laptop 1300 will typically include a processor 1311 coupled to volatile memory 1312 and a large capacity non-volatile memory (e.g., disk drive 1313 of flash memory). In addition, computer 1300 can have one or more antennas 1308 for transmitting and receiving electromagnetic radiation that can be coupled to a wireless data link and/or to cellular telephone transceiver 1316 that is coupled to processor 1311. Computer 1300 can also include a floppy disk drive 1314 and a compact disk (CD) drive 1315 that are coupled to processor 1311. 
In a notebook configuration, the computer housing includes a touchpad 1317, a keyboard 1318, and a display 1319, each coupled to a processor 1311. As is well known, other configurations of computing devices can include a computer mouse or trackball coupled to a processor (e.g., via a USB input), which can also be used in conjunction with various aspects.Portions of the aspect method can be implemented in a client-server architecture in which some processing in the process occurs in the server, for example, a database that maintains normal operational behavior, which can be accessed by the mobile computing device processor when performing the aspect method. Such aspects can be implemented on any of a variety of commercially available server devices (e.g., server 1400 shown in Figure 14). Such a server 1400 typically includes a processor 1401 coupled to a volatile memory 1402 and a large capacity non-volatile memory (e.g., disk drive 1403). Server 1400 can also include a floppy disk drive, compact disk (CD) or DVD disk drive 1404 coupled to processor 1401. Server 1400 can also include a network access port 1405 coupled to processor 1401 for establishing a data connection with network 1406, such as a local area network coupled to other broadcast system computers and servers. The processor 1401 can be any programmable microprocessor, microcomputer, or multiple processor chips or chips that can be configured by software instructions (applications) to perform a variety of functions, including the various Aspect of the function. Generally, software applications can be stored in internal memory 1402, 1403 before being accessed and loaded into processor 1401. Processor 1401 can include internal memory sufficient to store application software instructions.The foregoing method descriptions and process flow diagrams are only provided as illustrative examples and are not intended to be required or imply that the steps of the various aspects must be performed in the order presented. As will be appreciated by those skilled in the art, the order of the steps in the foregoing aspects can be performed in any order. Words such as "after", "then", "next" are not intended to limit the order of the steps; these words are only used to guide the reader through the description of the method. In addition, any reference to the claim elements in the singular (e.g., "a", "an" or "the"As used in this application, the terms "component," "module," "system," "engine," "generator," "manager," etc. are intended to include a computer-related entity such as, but not limited to, configured Hardware, firmware, a combination of hardware and software, software, or software in execution to perform a specific operation or function. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and a computing device may be referred to as a component. One or more components can reside within a process and/or executed thread, and the components can be centralized on one processor or core and/or distributed between two or more processors or cores. Moreover, these components can execute in various non-transitory computer readable media having various instructions and/or data structures stored thereon. 
Components may be through local and/or remote processes, functions or procedure calls, electronic signals, data packets, memory read/write, and other known methods of network, computer, processor, and/or process related communication methodology. To communicate.The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally. Whether such functionality is implemented as hardware or software depends on the particular application and design constraints imposed on the overall system. Skilled artisans are capable of <Desc/Clms Page number> number> The hardware used to implement the various illustrative logic, logic blocks, modules, and circuits described in connection with the aspects disclosed herein may utilize a general purpose processor, digital signal processor (DSP), designed to perform the functions described herein, An application specific integrated circuit (ASIC), field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, is implemented or executed. A general purpose processor may be a multi-processor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a multi-processor, a plurality of multi-processors, one or more multi-processors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry specific to a given function.In one or more exemplary aspects, the functions described can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer readable medium or non-transitory processor readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may be embodied in a non-transitory computer readable or processor readable storage medium. The non-transitory computer readable or processor readable storage medium can be any storage medium that can be accessed by a computer or processor. Such non-transitory computer readable or processor readable medium may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, disk storage, or other magnetic storage by way of example and not limitation. A device, or any other medium that can be used to store a desired program code in the form of an instruction or data structure and that can be accessed by a computer. As used herein, magnetic disks and optical disks include compact disks (CDs), laser disks, optical disks, digital versatile disks (DVDs), floppy disks, and Blu-ray disks, in which disks typically magnetically replicate data, while optical disks utilize lasers to optically Copy the data. Combinations of the above are also included in non-transitory computer readable and processor readable media. 
Furthermore, the operations of a method or algorithm may be present on a non-transitory processor readable medium and/or computer readable medium as one or any combination or collection of code and/or instructions, which may be incorporated into a computer program product in.The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the invention. Various modifications to these aspects will be obvious to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the spirit or scope of the invention. The present invention is therefore not intended to be limited to the details shown herein, but the scope of the inventions |
A method for implementing clock dividers includes providing, in response to detecting a voltage drop [408] at a processor core [1 14], an input clock signal to a transmission gate multiplexer [210] for selecting between one of two stretch-enable signals. In some embodiments, selecting between the one of two stretch-enable signals includes inputting a set of core clock enable signals into a clock divider circuit [202], and modifying the set of core clock enable signals to generate the stretch- enable signals. An output clock signal is generated based on the selected stretch- enable signal. |
WHAT IS CLAIMED IS:1 . A method comprising:in response to detecting a voltage drop [408] at a processor core [1 14],providing an input clock signal to a transmission gate multiplexer [210] for selecting between one of two stretch-enable signals; and generating an output clock signal based on the selected stretch-enable signal[412].2. The method of claim 1 , wherein selecting between the one of two stretch-enable signals comprises:inputting a set of core clock enable signals into a clock divider circuit [202]; andmodifying the set of core clock enable signals to generate the stretch-enable signals.3. The method of claim 2, wherein modifying the set of core clock enable signals comprises:logically combining the set of core clock enable signals in the clock divider circuit to generate the stretch-enable signals.4. The method of claim 2, further comprising:in response to detecting the voltage drop at the processor core, asserting a stretch assertion signal [406] to generate the output clock signal based on the selected stretch-enable signal.5. The method of claim 4, further comprising:after detecting the voltage drop at the processor core, in response to detecting a voltage increase [416] at the processor core, deasserting the stretch assertion signal to generate the output clock signal based on the set of core clock enable signals [410].6. The method of claim 1 , wherein generating the output clock signal comprises: changing a frequency of the output clock signal from a first frequency to a second frequency, wherein the second frequency is less than the first frequency.7. The method of claim 6, further comprising:after detecting the voltage drop at the processor core, in response to detecting a voltage increase at the processor core, modifying the output clock signal from the second frequency to a third frequency, wherein the third frequency is greater than the second frequency.A method, comprising:generating a set of core clock enable signals [404];providing the set of core clock enable signals to a processor core [1 14];generating a first output clock signal at a first frequency based on the set of core clock enable signals [406]; andin response to detecting a voltage drop at the processor core [408], providing an input clock signal to a transmission gate multiplexer [210] for selecting between one of two stretch-enable signals; and generating a second output clock signal [412] based on the selected stretch- enable signal.9. The method of claim 8, further comprising:in response to detecting the voltage drop at the processor core, inputting the set of core clock enable signals into a clock divider circuit [202]; and modifying the set of core clock enable signals to generate stretch-enable signals.A processor [102], comprising:a processor core [1 14];a droop detector circuit to detect a voltage drop at the processor core;a clock divider [202] circuit to receive a set of core clock enable signals, the clock divider circuit to generate an output clock signal based on the set of core clock enable signals.1 1. 
The processor of claim 10, wherein the clock divider circuit further comprises: a transmission gate multiplexer [210] for selecting between one of two stretch- enable signals.The processor of claim 1 1 , wherein the clock divider circuit is further to:receive, in response to the droop detector circuit detecting a voltage drop at the processor core, a stretch assertion signal to logically combine the set of core clock enable signals in the clock divider circuit to generate the two stretch-enable signals.The processor of claim 10, wherein the clock divider circuit is further to:in response to the droop detector circuit detecting a voltage drop at theprocessor core, change a frequency of the output clock signal from a first frequency to a second frequency, wherein the second frequency is less than the first frequency.The processor of claim 10, wherein the clock divider circuit is further to:subsequent to the droop detector circuit detecting a voltage drop, in response to detecting a voltage increase at the processor core, modify the output clock signal from the second frequency to a third frequency, wherein the third frequency is greater than the second frequency.15. The processor of claim 10, the clock divider circuit further comprising:a duty cycle adjuster [204] configured to change at least one of a rising edge rate or a falling edge rate of the output clock signal. |
CLOCK DIVIDER DEVICE AND METHODS THEREOFBACKGROUNDDescription of the Related ArtA data processing device, such as an integrated circuit (IC) microprocessor device, can include a large number of data subsystems fabricated at a single semiconductor die. For example, an IC microprocessor device can include a memory interface subsystem and a graphics acceleration subsystem in addition to a central processing unit. Each data subsystem can operate as a data processor and can include disparate operating frequency limitations. Therefore, the computational performance of the microprocessor device is typically improved if each data subsystem is configured to operate at a respective frequency that can be different from that of another data subsystem. Furthermore, it can be advantageous if the operating frequency of a particular data subsystem can be changed efficiently while the data subsystem continues to operate. For example, the microprocessor can transition a data subsystem between an active or nominal power operating mode and a low-power operating mode by altering the frequency of a clock signal provided to that data subsystem.BRIEF DESCRIPTION OF THE DRAWINGSThe present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.FIG. 1 illustrates a block diagram of a processing system utilizing clock dividers in accordance with at least some embodiments. FIG. 2 illustrates a block diagram of a portion of the processor core of FIG. 1 in accordance with some embodiments.FIG. 3 illustrates a waveform diagram of various clock signals in accordance with some embodiments. [oooi] FIG. 4 is a flow diagram of a method of adjusting a clock signal at a processor in response to a voltage droop by adjusting enable signals used to generate the clock signal in accordance with some embodiments.DETAILED DESCRIPTION[0002] FIGs. 1 -4 disclose techniques for implementing clock dividers for supporting clock ramp ups and downs associated with, for example, changes in a power mode at a processor. Clock dividers can be built using cascaded flip flops with a multiplexer to control divided clocks for elock ramp up/down. However, the addition of flops in the clock path may increase jitter, which will have an impact on the maximum clock frequency (Fmax) that can be applied to at least one module of the processor.Accordingly, in some embodiments, a clock divider circuit includes a transmission gate multiplexer (mux) in which the clock signal (Clkln) acts as a select signal that picks between two enable (CKGEN_EnableA and CKGEN_EnableB) inputs. The clock divisor can be adjusted in, for example, 0.5 divider increments (e.g. , 1 .0, 1 .5, 2.0, 2.5, etc.) by modulating the CKGEN_EnableA and CKGEN_EnableB bits applied to the enable inputs. By performing clock divides with a threshold granularity (e.g., 0.5 divider increments), the clock divider supports slower clock ramp up/down during CC6 entry/exit and scan shift reset entry/exit. 
The slow ramp up/down of clock frequency enabled by the clock divider in turn provides mitigation of problems associated with rapid changes of supply current sometimes referred to herein as di/dt.[0003] The clock divider also enables clock stretching with reduced latency by modifying an existing enable stream of core clock enable signals upon receiving a stretch assertion signal (i.e., StretchEn) that is triggered by detecting a power supply droop. In operation, the CKGEN_EnableA and CKGEN_EnableB bits may be overridden by StrEn assertion (triggered by the power supply droop) to force a stretch in clock frequencies. Further, in some embodiments, the clock divider includes a duty cycle adjuster within the mux to enable duty cycle adjusting. Providing the duty cycle adjuster within the clock divider avoids adding additional stages to support duty cycle adjusting, thereby reducing jitter. FIG. 1 illustrates a block diagram of a processing system 100 utilizing clock dividers in accordance with at least some embodiments. In the depicted example, the processing system 100 includes a compute complex 102 (also known as a "core complex"), a cache hierarchy 104, a memory controller 106, and a southbridge 108. The compute complex 102 includes a plurality of processor cores, such as the four processor cores 1 1 1 , 1 12, 1 13, 1 14 depicted in the example of FIG. 1 . The processor cores may include central processing unit (CPU) cores, graphics processing unit (GPU) cores, digital signal processor (DSP) cores, or a combination thereof. It will be appreciated that the number of processor cores of the compute complex 102 may be fewer or more than four.The memory controller 106 operates as the interface between the cache hierarchy 104 and a system memory 1 10. Thus, data to be cached in the cache hierarchy 104 typically is manipulated as blocks of data referred to as "cache lines", and which are addressed or otherwise located in a memory hierarchy using a physical address of system memory 1 10. Cache lines are accessed from the system memory 1 10 by the memory controller 106 in response to memory requests from the cache hierarchy 104. Likewise, when a cache line containing modified data is evicted from the cache hierarchy 104 and thus needs to be updated in the system memory 1 10, the memory controller 106 manages this write-back process. The southbridge 108 operates as the interface between the cache hierarchy 104, the memory controller 106, and one or more peripherals (not shown) of the processing system 100 (e.g., network interfaces, keyboards, mice, displays, and other input/output devices).The cache hierarchy 104 includes two or more levels of caches. In the illustrated example, the cache hierarchy 104 includes three cache levels: level 1 (L1 ), level 2 (L2), and level 3 (L3). For L1 , the core complex 102 implements small private caches for each processing core, which are depicted as L1 caches 121 , 122, 123, 124, each associated with a corresponding one of processor cores 1 1 1 -1 14 as depicted in FIG. 1 . For L2, the core complex 102 implements larger private caches for each processor core, which are depicted as L2 caches 131 , 132, 133, 134 corresponding to processor cores 1 1 1 -1 14, respectively, as also illustrated in FIG. 1 . Each of the L2 caches 131 -134 is private to its corresponding processor core, but the cache hierarchy 104 operates to maintain coherency between the L2 caches 131 - 134. The L2 caches 131 -134 can be direct mapped or an n-way set associative cache in some embodiments. 
For the L3 caching level, the cache hierarchy 104 implements an L3 cache 140 that is shared by the processor cores of the core complex 102, and thus shared by at least the L2 caches 131 -134. Components of the L3 cache 140 include, but is not limited to, at least one level shifter 142. In some embodiments, such as illustrated in FIG. 3, the L3 cache 140 includes one level shifter 142 per processing core, such as when the processor cores 1 1 1 -1 14 have different frequencies and/or voltages. As illustrated in FIG. 1 , each the four processor cores 1 1 1 , 1 12, 1 13, 1 14 (e.g., processor core 1 14) includes a clock mesh 154 (also known as a "mesh clock" or a "clock tree"), a digital frequency synthesis logic (DFS) 164, a CKGEN logic 174, and a discrete Fourier transform (DFT) logic 184. The processor core 1 14 is generally configured to execute sets of instructions (e.g., computer programs) to carry out operations on behalf of an electronic device. To execute the sets of instructions, the processor core includes one or more modules, such as fetch states, dispatch stages, execution units, memory controllers, input/output interfaces, caches, and the like that are each composed of synchronous logic elements, logic gates, and othercomponents. The processor core 1 14 employs one or more clock signals to synchronize operation of these components. In some embodiments, the processor core 1 14 receives a synchronized version of a clock signal from the L3 cache, and the clock mesh 154 distributes various versions of the clock signal to the various components of the processor core 1 14.The level shifter 142 of the L3 cache 140 provides a P-state clock to the CKGEN logic 174. The CKGEN logic 174 manages problems associated with rapid changes of supply current (i.e., di/dt events) resulting from clock speed and power mode changes (e.g., C-state changes) of the processor core 1 14. In someembodiments, the DFS 164 is a 2-phase DFS for managing C-state and scan-shift reset behaviors. The DFS 164 performs clock dividing for various modules of the processor core 1 14, including operations such as clock ramp up or down for C-state entry and exit, clock divides for scan shift reset and two-phase stretch for droop. As further discussed with regards to FIG. 2, each DFS 164 further includes a clock divider circuit and duty cycle adjuster that provides each processor core with independent control of clock ramps, divides, and stretches.In at least one embodiment, the processor cores 1 1 1 , 1 12, 1 13, 1 14 ramp the clock frequencies gently to prevent di/dt issues during scan shift reset and when entering and exiting C-states. Switching to high frequency directly will cause a large change in power drawn and associated di/dt issues. In particular, when powering up the processor core 1 14, the scan shift frequency power is such that the power attach should be gentle (e.g., 100 ns or more from off to full power). For example, during CC6 exit (that is, exit from a given low-power mode), core clocks switch from an OFF state to full frequency. A clock divider circuit in the DFS 164 slowly ramps up the clock frequency by starting with a large divisor and incrementally reducing the divisor. Accordingly, the frequency of the output clock signal changes with the divisor.Similarly, during CC6 entry, the DFS 164 ramps the core clocks in a similar manner, by starting with a low divisor and incrementally ramping up the divisor. 
In other embodiments, switching to scan shift reset also ramps core clocks down/up in a manner similar to CC6 entry and exit.In some embodiments, power supply droops created by changes in power draw from power supply result in degradation of the maximum clock frequency (Fmax) or increase in voltage needed to operate the processors 1 1 1 , 1 12, 1 13, 1 14 (e.g., voltage identification, Vid) required for a particular frequency. The impact of power supply droop can be reduced by stretching the clock upon detection of power supply droop. Accordingly, in response to detecting a supply voltage at one or more locations in the processor core 1 14 has fallen by a specified threshold amount, a stretch control module (not shown) generates a stretch signal is generated to signal that clock signals should be "stretched", or have their frequencies reduced in response to the voltage droop. For example, upon receiving a stretch assertion signal (i.e., StretchEn) from a droop detector circuit, the DFS 164 stretches clock signals, thereby changing the frequency of clock signals in response to detected voltage droops. The clock stretching performed reduces the power draw, thereby reducing the droop, and allows the logic in the processor more time to stabilize before the next clock edge. Duty cycle compression introduced by process variation on the clock path impacts Fmax. Accordingly, phase timing paths are sensitive to the duty cycle of the clock. In some embodiments, the DFS 164 further includes a fuse- controlled duty cycle adjuster which modulates duty cycles in silicon.FIG. 2 illustrates a block diagram of a portion 200 of the processor core 1 14 of FIG. 1 in accordance with some embodiments. The portion 200 includes a clock divider circuit 202 which further includes a duty cycle adjuster 204 and a transmission gate multiplexer (mux) 210 in which a clock signal (Clkln) acts as a select signal that picks between two enable inputs. The clock divider circuit 202 includes latches 212, 222, 232, OR gates 242, 252, AND gates 262, 272, the duty cycle adjuster 204, and the transmission gate multiplexer (mux) 210. As discussed above with regard to FIG. 1 , the processor core 1 14 receives P-state clock frequencies (e.g., Clkln and ClkX) from the level shifter 142 of the L3 cache 140. Core clock (CCLK) enable signals (CKGEN_EnableA and CKGEN_EnableB) are driven from rising edge flops in the CKGEN (e.g., CKGEN 174 of FIG. 1) to meet setup time to rising edge flops in the DFS 164. Latches 206 and 208 of the DFS 164 receive and act on the CKGEN_EnableA and CKGEN_EnableB signals, respectively. The latch 206 includes a data input to receive the enable signal CKGEN_EnableA, a clock input to receive the clock signal ClkX, and an output. The latch 208 includes a data input to receive the enable signal CKGEN_EnableB, a clock input to receive the clock signal ClkX, and an output.In operation, the stretch assertion signal (i.e., StretchEn) is asserted upon detection of a power supply droop to enable clock stretching that picks between two stretch-enable EN signals (i.e., Str_ENA, Str_ENB). Any clock divide in 0.5increments (e.g., 1 .0, 1 .5, 2.0, 2.5, etc.) can be achieved by modulating the EN bits. Upon receiving the StretchEn signal, StrEn assertion overrides the CKGEN_EnableA and CKGEN_EnableB bits to force a 100% stretch for a single cycle. The system is designed such that when StretchEn may go high, CKGEN_EnableA=1 andCKGEN_EnableB=0. 
The clock divider circuit 202 supports clock ramp up/ramp down during CC6 Entry/Exit and scan shift reset entry/exit by performing clock divides with 0.5 granularity (1 .0, 1 .5, 2.0, 2.5, etc.). Accordingly, clock divider circuit 202 configures a transmission gate mux 210 in which the clock (i.e., ClkX) acts as a select which picks between two EN inputs (and associated logic controlling the two EN inputs). The slow ramp up/down of clock frequency enabled by the clock divider circuit 202 provides di/dt mitigation. The clock divider circuit 202 also includes a duty cycle adjuster 204 which provides the final EN inputs to the mux 210. Rise and fall edge rate at the output (i.e., ClkOutX) can be adjusted during operations by independently varying the p- channel field-effect transistor (pFET) and n-channel field-effect transistor (nFET) strength of inverters driving the transmission gates. Independent control of pFET and nFET strength using Fuse/JTAG bits (i.e., ENN[6:0], ENP[6:0]) enables duty cycle modulating for improving silicon frequency or testing phase path margin in silicon. Positioning the duty cycle adjuster 204 within the clock divider circuit 202 avoids adding stages to support duty cycle adjusting, thereby reducing jitter. FIG. 3 illustrates a waveform diagram 300 of various clock signals in accordance with some embodiments. In particular, the waveform diagram 300 shows waveforms for clock divide by 1 , followed by stretch. In the illustrated example, between a time 302 and a subsequent time 304, the StrEn signal is in a negated state, indicating that no voltage droop has been detected at the processor core 1 14. Accordingly, between time 302 and time 304, the frequency of the clock signal output ClkOutX is determined only by the clock divider circuit 202, wherein it generates the ClkOutX to have a frequency equal to the frequency of the input clock signal (i.e., CLK) divided by 1 .At time 304, the StrEn signal is asserted, indicating a voltage droop at the processor core 1 14. In response, the frequency of ClkOutX is controlled by the two enable (EN) inputs (i.e., Str_ENA, Str_ENB). The clock divider circuit 202 reduces the frequency of ClkOutX relative to its frequency prior to time 304 by 100%, thereby adjusting for the voltage droop. After the single reduced clock period for ClkOutX illustrated in FIG. 3, ClkOutX returns to the same frequency as CLK even though StrEn may remain high. In some embodiments, the CLK input is stretched by other means not included in this disclosure if StrEn remains high. In this way, ClkOutX is stretched faster than may be provided for in systems that stretches CLK.FIG. 4 illustrates a flow diagram of a method 400 of adjusting a frequency of a clock signal in response to detecting a voltage droop at a processor core in accordance with at least one embodiment. For purposes of description, the method 400 is described with respect to an example implementation at the processor core 1 14 of FIG. 1 and clock divider circuit 202 of FIG. 2. At block 402, the level shifter 142 at the L3 cache 140 provides a nominal frequency setting for the clock signal ClkX. At block 404, the CKGEN 174 drives CCLK enable signals (CKGEN_EnableA and CKGEN_EnableB) to meet setup time to rising edge flops in the DFS 164. 
At block 406, a droop detector circuit generates the stretch assertion signal StretchEn for setting the ClkX clock signal to a lower frequency relative to its nominal frequency.At block 408, the droop detector circuit monitors the voltage at one or more points of the processor core 1 14 to identify whether a voltage droop is present. If not, the droop detector circuit maintains the StretchEn signal in a negated state. In response, the method flow moves to block 410, and the DFS 164 generates an output clock signal based on the CCLK enable signals (i.e., CKGEN_EnableA and CKGEN_EnableB). The method flow then returns to block 408 as the droop detector circuit continues to monitor the voltage at processor core 1 14.Returning to block 408, in response to detecting a voltage droop the droop detector circuit asserts the StretchEn signal. In response, the method flow moves to block 412 and the clock divide circuit 202 generates an output clock signal based on the two stretch-enable EN signals (i.e., Str_ENA, Str_ENB), thus generating an output clock signal at a slower frequency. The input clock signal can be divided with a granularity of 0.5 increments (e.g., 1 .0, 1 .5, 2.0, 2.5, etc.) by modulating the EN bits. In one example, StretchEn assertion overrides the EN bits to force a 100% stretch for a single cycle. Accordingly, clock divider circuit 202 operates as a transmission gate mux in which the clock (i.e., ClkX) acts as a select which picks between two EN inputs (and associated logic controlling the two EN inputs). The slow ramp up/down of clock frequency enabled by the clock divider circuit 202 provides dl/dT mitigation.The method flow proceeds to blocks 414 and 416 and the droop detector circuit monitors whether the voltage at the processor core 1 14 has returned to its nominal level or range. If not, the method returns to block 414 as the clock divider circuit 202 maintains the output clock signal ClkOut on the Clkln frequency (Clkln may itself be stretched by a mechanism outside of clock divider circuit 202 after the initial clock stretch from block 412 has had effect). If, at block 416, the droop detector circuit identifies that the monitored voltage has returned to its nominal level or range, and the method flow proceeds to block 410, where the droop detector circuit negates the StretchEn signal, causing the DFS 164 to return to generating the output clock signal at its nominal input frequency.In some embodiments, certain aspects of the techniques described above may implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. 
The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed are not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below. |
A dynamic performance profiler is operable to receive, in substantially real-time, raw performance data from a testing platform. A software-based image is executing on a target hardware platform (e.g., either simulated or actual) on the testing platform, and the testing platform monitors such execution to generate corresponding raw performance data, which is communicated, in substantially real-time, as it is generated during execution of the software-based image to a dynamic profiler. The dynamic profiler may be configured to archive select portions of the received raw performance data to data storage. As the raw performance data is received, the dynamic profiler analyzes the data to determine whether the performance of the software-based image on the target hardware platform violates a predefined performance constraint. When the performance constraint is violated, the dynamic profiler archives a portion of the received raw performance. |
CLAIMS What is claimed is: 1. A method for performing system profiling of an entity executing on a testing platform, the method comprising: receiving, by a profiler, performance constraint data, the performance constraint data defining boundary conditions for an event; receiving, in substantially real-time at the profiler, raw performance data from a testing platform about the execution entity to be profiled; analyzing, by the profiler, the received raw performance data to determine when the execution entity violates a performance constraint defined by the performance constraint data; and storing only a portion of the received raw performance data, wherein the portion corresponds to a time period of execution of the execution entity that overlaps when a determined performance constraint violation occurred. 2. The method of claim 1 wherein the execution entity comprises a software-based image executing on a target hardware platform. 3. The method of claim 2 wherein the target hardware platform comprises a digital signal processor. 4. The method of claim 2 wherein the target hardware platform comprises a simulation of a target hardware platform. 5. The method of claim 1 wherein the receiving, in substantially real-time, comprises: receiving the raw performance data from the testing platform as the raw performance data is generated by the testing platform during execution of the execution entity on the testing platform. 6. The method of claim 1 wherein a length of the time period is user- defined. 7. The method of claim 1 further comprising: generating, by the profiler, a graphical output of at least the portion of the received raw performance data. 8. The method of claim 1 further comprising: debugging said execution entity by the profiler based at least in part on the received raw performance data. 9. The method of claim 1 further comprising: determining, by the profiler, based on the received raw performance data, at least one of cache use by function and variable use by cache block; and presenting, by the profiler, a user interface displaying at least one of the determined cache use by function and the determined variable use by cache block. 10. A system for profiling performance of a software-based image on a target hardware platform, the system comprising: a testing platform for generating raw performance data for the software -based image executing on the target hardware platform; a dynamic profiler communicatively coupled to the testing platform for receiving the raw performance data in substantially real-time as it is generated by the testing platform, the dynamic profiler operable to determine, based at least in part on analysis of the received raw performance data, a portion of the received raw performance data to archive, thereby resulting in a determined portion of the received raw performance data; and data storage for archiving the determined portion of the received raw performance data. 11. The system of claim 10 wherein the dynamic profiler is operable to determine whether the received raw performance data indicates violation of a predefined performance constraint by the software-based image executing on the target hardware platform. 12. 
The system of claim 11 wherein, responsive to determining that the received raw performance data indicates violation of the pre-defined performance constraint, the dynamic profiler is operable to archive a corresponding portion of the received raw performance data, the portion encompassing the received raw performance data that indicated violation of the pre-defined performance constraint. 13. The system of claim 11 wherein the dynamic profiler comprises: a user interface for receiving input specifying the pre-defined performance constraint. 14. The system of claim 13 wherein the dynamic profiler comprises: a user interface for receiving input specifying an amount of raw performance data to archive responsive to detection of the pre-defined performance constraint. 15. The system of claim 10 wherein the target hardware platform comprises a digital signal processor. 16. The system of claim 15 wherein the software-based image comprises firmware for the digital signal processor. 17. The system of claim 10 wherein the target hardware platform comprises a simulation of a target hardware platform. 18. The system of claim 10 wherein the dynamic profiler comprises computer-executable software code stored to a computer-readable medium that when executed by a processor causes the processor to perform at least the receiving the raw performance data in substantially real-time. 19. A computer program product, comprising: a computer-readable medium comprising: code for causing a computer to receive raw performance data in substantially real-time when generated by a testing platform on which a software-based image is executing on a target hardware platform; code for causing the computer to determine whether the received raw performance data indicates violation of a pre-defined performance constraint; and code for causing the computer to, responsive to determining that the received raw performance data indicates violation of a pre-defined performance constraint, archive a corresponding portion of the received raw performance data, wherein the corresponding portion encompasses the received raw performance data that indicated violation of the performance constraint. 20. The computer program product of claim 19 wherein the target hardware platform comprises a digital signal processor. 21. The computer program product of claim 19 wherein the target hardware platform comprises a simulation of a target hardware platform. 22. The computer program product of claim 19 wherein the computer- readable medium further comprises: code for causing the computer to receive, via a user interface, input specifying the pre-defined performance constraint. |
DYNAMIC PERFORMANCE PROFILING TECHNICAL FIELD [0001] The following description relates generally to performance profiling of a software image on a target hardware platform, and more particularly to performance profiling systems and methods in which a profiler receives performance data from a testing platform in substantially real-time (i.e., as the performance data is generated by the testing platform). BACKGROUND [0002] Testing and analysis are important for evaluating the performance of individual components of computer systems, such as software, firmware, and/or hardware. For instance, during development of a software, hardware, or firmware component, some level of testing and debugging is conventionally performed on that individual component in an effort to evaluate whether the component is functioning properly. As an example, software applications under development are commonly debugged to identify errors in the source code and/or to otherwise evaluate whether the software application performs its operations properly, i.e. without the software application producing an incorrect result, locking up (e.g., getting into an undesired infinite loop), producing an undesired output (e.g., failing to produce an appropriate graphical or other information output arranged as desired for the software application), etc. As another example, hardware components, such as processors (e.g., digital signaling processors) and/or other functional hardware devices, are often tested to evaluate whether the hardware performs its operations properly, such as by evaluating whether the hardware produces a correct output for a given input, etc. [0003] Beyond testing of individual components of a system, such as individual software programs and individual hardware components, in isolation, in some instances the performance of certain software or firmware on a target hardware platform may be evaluated. The "target hardware platform" refers to a hardware platform on which the software or firmware is intended to be implemented (e.g., for a given product deployment). Such target hardware platform may be a given integrated circuit (IC), such as a processor, memory, etc., multiple ICs (e.g., coupled on a system board), or a larger computer system, such as a personal computer (PC), laptop, personal digital assistant (PDA), cellular telephone, etc. It may be desirable, for instance, to evaluatehow well certain software programs perform on a target hardware system, not only to ensure that both the software program and the target hardware system function properly but also to evaluate the efficiency of their operations. Such factors as memory (e.g., cache) utilization, central processing unit (CPU) utilization, input/output (I/O) utilization, and/or other utilization factors may be evaluated to determine the efficiency of the software programs on the target hardware platform. From this evaluation, a developer may modify the software programs in an effort to optimize their performance (e.g., to improve memory, CPU, and/or I/O utilization) on the target hardware platform. For instance, even though the software program and target hardware platform may each function properly (e.g., produce correct results), the software program may be modified in some instances in an effort to improve its efficiency of operations on the target hardware platform. [0004] Commonly, a program known as a "profiler" is used for evaluating the performance of a software program on a target hardware platform or in a simulation environment. 
Various profilers are known in the art, such as those commercially known as Qprof, Gprof, Sprof, Cprof, Oprofile, and Prospect, as examples. Profilers may evaluate the performance of a software program executing on a target hardware platform or executing on a simulation of the target hardware platform. Profilers are conventionally used to evaluate the performance efficiency of operations of a software program executing on a target hardware platform in an effort to identify areas in which the software program may be modified in order to improve its efficiency of operation on the target hardware platform. In other words, rather than evaluating the software program and/or target hardware platform for operational accuracy (e.g., to detect bugs), the profiler is conventionally used for evaluating performance of a software program on a target hardware platform. In certain situations, performance issues may cause the system to behave incorrectly. For example, if one application does not get enough execution time due to another (potentially higher priority) application taking longer than it is supposed to, then this may cause incorrect output to get generated. Optimization of the latter application would be a "bug fix" from the system point of view. [0005] Detecting "bugs" caused by performance issues is not an easy task because of at least two reasons. First, all performance issues may not cause bugs. For example, some applications may be sub-optimal, but their increased execution timemay not interfere with the meeting of real-time deadlines of other tasks (i.e., the increased execution time is at a time when the other tasks' work is not time critical). And, in some instances a performance issue may not cause "bugs" at all times during the program. For instance, the increased execution time due to sub-optimal implementation, for example, should occur at a time when other tasks are doing time critical work [0006] The performance is evaluated in an effort to optimize the efficiency of operations of the software program on the target hardware platform in order to improve the overall performance of the resulting deployed system. For instance, such profiling may permit a user of the profiler to evaluate where the software program spent its time and which functions called which other functions while it was executing. [0007] In addition, information regarding how the target hardware handled the various functions, including its cache utilization efficiency (e.g., cache hit/miss ratio, etc.) and CPU utilization efficiency (e.g., number of "wait" cycles, etc.), as examples, may be evaluated by the profiler. The evaluation provides the user with information about the efficiency of the performance of the software program's functions on the target hardware platform. Such operational parameters as cache utilization efficiency and CPU utilization efficiency vary depending on the specific target hardware platform's architecture (e.g., its cache size and/or cache management techniques, etc.). Thus, the profiler evaluation is informative as to how well the software program will perform on the particular target hardware platform. The user may use the profiler information to modify the software program in certain ways to improve its cache utilization efficiency, CPU utilization efficiency, and/or other operational efficiencies on the target hardware platform. [0008] FIGURE 1 is an exemplary block diagram of a system 100 that illustrates a conventional manner in which a profiler is typically employed. 
As shown, a testing platform 110 is provided on which a target hardware platform 101 resides. The testing platform 110 may be any suitable testing platform that is operable to evaluate operation of a software-based image 102 on a target hardware platform 101 and produce performance data about such execution as discussed further herein. The testing platform 110 may be a computer-based system having sufficient communication connections toportions of the target hardware 101 and/or the image 102 to observe the operations for determining the corresponding performance data. [0009] A software-based "image" 102 executes on the target hardware 101, and the testing platform 110 monitors its execution to generate performance data that is archived to a data storage 103 (e.g., hard disk, optical disk, magnetic disk, or other suitable data storage to which digital data can be written and read). The software- based image 102 may be any software application, firmware, operating system, and/or other product that is software based. The performance data generated and archived in the data storage 103 may include detailed information pertaining to the operational efficiency of the software image 102 on the target hardware platform 101. The information may detail the functions being executed at various times and the corresponding number of wait cycles of the target hardware platform's CPU, the hit/miss ratio in the target hardware platform's cache, and other operational efficiency details. [0010] The performance data generated by the testing platform and archived to the data storage 103 may be referred to as raw performance data. The raw performance data conventionally details information about function(s) performed over clock cycles of a reference clock of the target hardware platform 101, as well as corresponding information about utilization of CPU, cache, and/or other resources of the target hardware platform 101 over the clock cycles. The raw data is conventionally in some compressed format. As an example, the compression is commonly one of two types: 1) reduced information that can extrapolated to reconstruct the entire information, or 2) compression like zipping, etc. [0011] As an illustrative simple example, a portion of the raw performance data generated by the testing platform 110 may be similar to that provided in Table 1 below: Table 1[0012] In the above example, the raw performance data generated by the testing platform 110 notes that a memory data management operation (MMDM) started on the target hardware platform 101 in clock cycle 5, and such MMDM operation ended in clock cycle 12. Also, the raw performance data generated by the testing platform 110 notes that the target hardware platform's CPU entered a wait state in clock cycle 10, and then began processing a process "Pl" (of image 102) in clock cycle 12. It should be recognized by those of ordinary skill in the art that Table 1 provides a simplistic representation of the raw performance data for ease of discussion, and conventionally much more information may be contained in the raw performance data generated by the testing platform 110. [0013] A profiler 120 may then be employed to analyze (104) the raw performance data that is archived to the data storage 103 in order to evaluate the operational performance of the software image 102 on the target hardware platform 101. 
As discussed above, the profiler 120 may permit a user to evaluate execution of the software image 102 (e.g., where the software image spent its time and which functions called which other functions, etc.), as well as how the target hardware platform 101 handled the various functions of the software image 102, including its cache utilization efficiency (e.g., cache hit/miss ratio, etc.) and CPU utilization efficiency (e.g., number of "wait" cycles, etc.), as examples. That is, the profiler 120 analyzes the raw performance data generated by the testing platform 110 and may present that raw performance data in a user-friendly manner and/or may derive other information from the raw performance data to aid the user in evaluating the operational efficiency of the image 102 on the target hardware platform 101. The profiler 120 may present the information in a graphical and/or textual manner on a display to enable the user to easily evaluate the operational efficiency of the execution of the image 102 on the target hardware platform 101 over the course of the testing performed. The user may choose to use the performance information presented by the profiler 120 to modify the software image 102 in certain ways to improve the cache utilization efficiency, CPU utilization efficiency, and/or other operational efficiencies on the target hardware platform 101. [0014] Conventionally, profiling a software image 102 on a target hardware platform 101 in the manner illustrated in FIGURE 1 results in a large amount of raw performance data being generated and archived in the data storage 103 for lateruse in the profiler 120's analysis 104. For example, to profile execution of a 30-second video clip (of a software image 102) on the target hardware 101, the testing platform 110 may run for multiple days and generate a massive amount of raw performance data (e.g., approximately 10 terabytes of data). Thus, a large-capacity data storage 103 is needed for archiving the raw performance data for later use by the profiler 120 in performing the analysis 104. Also, loading and analyzing such large amounts of data is a non-trivial task. [0015] In some instances, certain steps may be taken in the testing platform 110 in an effort to reduce the amount of raw performance data generated by the testing platform, such as by focusing the testing on only a particular part of the software image 102 or configuring the testing platform 110 to only capture performance data pertaining to execution of a particular portion of the software image 102. The profiler 120 is then employed to analyze 104 performance of the particular portion of the software image 102 by evaluating the corresponding raw performance data archived to the data storage 103 by the testing platform 110 during the testing. Of course, by restricting the testing at the testing platform 110 in this manner requires the user to identify the portion of the execution of the image 102 on which the testing should be focused, and one risks potentially overlooking performance problems with other portions of the software image 102. For instance, when configuring the testing platform 110 the user may not possess sufficient information to make an intelligent decision regarding how best to restrict testing of the image 102 because it is conventionally during the later profiling process in which the user discovers areas of operational inefficiencies of the image 102 on the target hardware platform 101. 
Accordingly, there exists a need in the art for an improved profiler, particularly a profiler that does not require storage of all raw performance data generated but that enables full evaluation of performance for operational efficiency and/or debugging analysis. SUMMARY [0016] Embodiments of the present invention are directed generally to systems and methods for dynamic performance profiling. According to one embodiment, a method for performing system profiling is disclosed, wherein a profiler receives performance constraint data from a user. The performance constraint data defines boundary conditions for an event. The profiler receives, in substantially real-time, raw performance data from a testing platform on which an execution entity to be profiled is executing. The profiler analyzes the received raw performance data to determine when the execution entity violates a performance constraint defined by the performance constraint data, and only a portion of the received raw performance data is stored, wherein the portion corresponds to a time period of execution of the execution entity that overlaps when a determined performance constraint violation occurred. [0017] According to another embodiment, a system for profiling performance of a software-based image on a target hardware platform is provided. As used herein (except where expressly indicated otherwise), "target hardware platform" may refer to either an actual implementation of the target hardware platform or a simulation thereof. The system has a testing platform for generating raw performance data for a software-based image executing on a target hardware platform. A dynamic profiler is communicatively coupled to the testing platform for receiving the raw performance data in substantially real-time as it is generated by the testing platform. The dynamic profiler is operable to determine, based at least in part on analysis of the received raw performance data, a portion of the received raw performance data to archive. The system further includes data storage for archiving the determined portion of the received raw performance data. [0018] According to another embodiment, a computer program product includes a computer-readable medium to which computer-executable software code is stored. The code includes code for causing a computer to receive raw performance data in substantially real-time when generated by a testing platform on which a software- based image is executing on a target hardware platform. The code further includes code for causing the computer to determine whether the received raw performance data indicates violation of a pre-defined performance constraint. And, the code further includes code for causing the computer to, responsive to determining that the received raw performance data indicates violation of a pre-defined performance constraint, archive a corresponding portion of the received raw performance data, wherein the corresponding portion encompasses the received raw performance data that indicated violation of the performance constraint. [0019] The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description that follows may be better understood. Additional features and advantages will be describedhereinafter which form the subject of the claims of the invention. 
It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention. BRIEF DESCRIPTION OF THE DRAWINGS [0020] For a more complete understanding of the present invention, reference is now made to the following description taken in conjunction with the accompanying drawings. [0021] FIGURE 1 is an exemplary block diagram of a system that illustrates a conventional manner in which a performance profiler is employed. [0022] FIGURE 2 is an exemplary block diagram of a system that illustrates application of a dynamic performance profiler. [0023] FIGURE 3 is an exemplary block diagram that illustrates application of a dynamic performance profiler in which a defined performance constraint is employed for determining raw performance data to be archived to data storage. [0024] FIGURE 4 is a block diagram of an exemplary implementation of a dynamic profiler for profiling performance of firmware on a digital signal processor (DSP).; [0025] FIGURE 5 is a screen shot showing a portion of an exemplary user interface presented to a user by a dynamic profiler, which enables a user to choose to configure the dynamic profiler to operate in either post-mortem mode, real-time mode, or constraint violation mode.[0026] FIGURE 6 is a screen shot showing a dialog box presented to a display by the dynamic profiler in response to a user selecting to configure the dynamic profiler to operate in real-time mode. [0027] FIGURE 7 is a screen shot showing an exemplary constraint window that may be presented by the dynamic profiler to allow a user to specify time limits between arbitrary system events and list all violations of the limits. [0028] FIGURE 8 is a screen shot showing an exemplary interface that may be presented by the dynamic profiler to allow a user to view the constraint violations for a given constraint. [0029] FIGURE 9 is a screen shot showing an exemplary constraint violations window that may be presented by the dynamic profiler to allow a user to view the original performance constraints and constraint violations generated by the constraint violation mode. [0030] FIGURE 10 is a screen shot showing an exemplary execution profile window that may be presented by the dynamic profiler. [0031] FIGURES 1 IA-11C are screen shots showing an exemplary cache profile window that may be presented by the dynamic profiler. [0032] FIGURE 12 is a screen shot showing an exemplary cache address history window that may be presented by the dynamic profiler. [0033] FIGURE 13 is a screen shot showing an exemplary cache line history window that may be presented by the dynamic profiler. 
[0034] FIGURE 14 is a screen shot showing an exemplary cache histogram window that may be presented by the dynamic profiler. [0035] FIGURES 15A-15B are screen shots showing an exemplary cache use by function window that may be presented by the dynamic profiler. [0036] FIGURES 16A-16B are screen shots showing an exemplary cache summary window that may be presented by the dynamic profiler. [0037] FIGURE 17A is a screen shot showing an exemplary menu for selecting variable use by cache block that may be presented by the dynamic profiler. [0038] FIGURE 17B is a screen shot showing an exemplary variable usage per cache block window that may be presented by the dynamic profiler. [0039] FIGURE 18 is an operational flow diagram.[0040] FIGURE 19 is a block diagram showing an exemplary computer system on which embodiments of a dynamic profiler may be implemented. DETAILED DESCRIPTION [0041] Embodiments of the present invention are directed generally to systems and methods for dynamic performance profiling. As discussed further below, a dynamic performance profiler is disclosed that is operable to receive, in substantially real-time, raw performance data from a testing platform. Thus, as a testing platform on which a software-based image is executing on a target hardware platform (e.g., either simulated or actual), the testing platform generates raw performance data that is communicated, in substantially real-time, as it is generated during execution of the software-based image to a dynamic profiler. The "testing platform", as used herein, refers generally to any logic for observing performance of the target hardware platform and generating performance data about the execution of the software-based image on the target hardware platform. The testing platform may be implemented in any desired manner (e.g., either as separate logic with which the target hardware platform is coupled, or in whole or in part as logic that is integrated within the target hardware platform). [0042] The dynamic profiler may be configured to archive select portions of the received raw performance data to data storage. For instance, in certain embodiments, the dynamic profiler may archive a moving window of the last "X" amount of raw performance data received. In certain embodiments, the amount "X" may be user-configurable, such as by a user specifying to archive raw performance data generated for the last "X" number of clock cycles of a reference clock signal of the target hardware platform under testing. [0043] In certain embodiments, the dynamic profiler supports a constraint-violation mode, wherein a user may define one or more performance constraints. As the raw performance data is received, the dynamic profiler analyzes the data to determine whether it indicates that the performance of the software-based image on the target hardware platform violates a defined performance constraint, and upon a performance constraint being determined as being violated, the dynamic profiler may archive a portion of the received raw performance data (which encompasses the raw performance data indicating the violation of the performance constraint) to data storage.[0044] Thus, embodiments of the dynamic profiler enable a user to configure the dynamic profiler to manage an amount of raw performance data that is archived. 
Accordingly, unrestricted testing on the testing platform may be performed, and the dynamic profiler may analyze the generated raw performance data, received in substantially real-time, to determine, based on performance of the software-based image on the target hardware platform under testing, appropriate portions of the generated raw performance data to archive to data storage. [0045] Further, in certain embodiments, because the dynamic profiler receives the generated raw performance data in substantially real-time, it may also be used for performing certain debugging operations. Thus, in addition to its ability to provide performance analysis (e.g., for performance optimization evaluation), in certain embodiments the dynamic profiler may further be employed for debugging the software-based image. As an example, in certain situations, performance issues may cause the system to behave incorrectly. For instance, if one application does not get enough execution time due to another (potentially higher priority) application taking longer than it is supposed to, then this may cause incorrect output to be generated. Optimization of the latter application would be a "bug fix" from the system point of view. Thus, the dynamic profiler may be utilized to perform this, as well as other types of debugging based on the performance data that it receives in substantially real-time. [0046] In some embodiments, a certain level of debugging may be performed by the dynamic profiler, for instance, to identify whether specific user- defined constraints are violated. The dynamic profiler may be configured to archive performance data pertaining to any such constraint violation that is detected, thereby enabling the user to evaluate data relating specifically to such a constraint violation (or "bug"). [0047] Certain embodiments provide superior debugging to that afforded by conventional profilers. As an example, in certain embodiments various information pertaining to CPU utilization, cache utilization (e.g., cache utilization by process, by variable, etc.) during the testing may be presented to the user, used as predefined constraint conditions, and/or otherwise used for debugging, as discussed further herein. The debugging capabilities of certain embodiments of the dynamic performance profiler are advantageous because embodiments of the dynamic performance profiler provides a constraint violation mode of operation (as discussed further herein). As mentionedabove, detecting "bugs" caused by performance issues is not an easy task. Use of constraint violation mode provided by embodiments of the dynamic performance profiler eases such detection of bugs caused by performance issues. That is, the constraint violation mode provides improved debugging capabilities because it enables detection of violation of certain predefined constraints on the performance of the image under test, as discussed further herein, which may aid in discovery of performance- related bugs. [0048] FIGURE 2 is an exemplary block diagram of a system 200 that illustrates application of a dynamic performance profiler 220 in accordance with one embodiment. As in the conventional system 100 of FIGURE 1, a testing platform 210 is provided on which a target hardware platform 201 resides. The testing platform 210 may be any computer-based logic (or "platform") for observing performance of the target hardware platform 201 and generating data about such performance. 
The testing platform 210 may, in some instances, be separate from the target hardware platform 201 (e.g., and communicatively coupled to the target hardware platform 201 for observing its operations), or in other instances, all or a portion of the testing platform 210 may be integrated within the target hardware platform 201 (e.g., such that the target hardware platform 201 may itself include logic for observing its performance and outputting its performance data). [0049] The target hardware platform 201 may be an actual implementation of the target hardware platform (e.g., an actual hardware implementation) or, in some instances, the target hardware platform 201 is simulated (e.g., by a program that simulates the operation of the target hardware platform). A software-based "image" 202 executes on the target hardware 201, and the testing platform 21 monitors its execution to generate raw performance data. [0050] However, in this embodiment, as such raw performance data is generated by the testing platform 210, it is communicated in substantially real-time (as real-time performance data 203) to the dynamic profiler 220. Thus, rather than being archived to the data storage 103 for later retrieval by the profiler 120 (as in the conventional implementation of FIGURE 1), the exemplary embodiment of FIGURE 2 communicates the real-time performance data 203 from the test platform 210 to the dynamic profiler 220, thus alleviating the conventional requirement of first archiving the raw performance data to a data storage 103.[0051] Of course, some data storage may occur for facilitating communication of the real-time performance data 203 from the testing platform 210 to the dynamic profiler 220. For instance, such real-time performance data 203 may be buffered or otherwise temporarily stored from a period when it is generated by the testing platform 210 until a communication agent can communicate it to the dynamic profiler 220. It should be recognized, however, that in accordance with certain embodiments portions of the real-time performance data 203 are communicated from the testing platform 210 to the dynamic profiler 220 during ongoing testing. That is, rather than waiting for the full testing by the testing platform 210 to complete before communicating the generated raw performance data to the dynamic profiler 220 (thus requiring the full raw performance data to be first archived, as in FIGURE 1), at least portions of the real-time performance data 203 are communicated from the testing platform 210 to the dynamic profiler 220 during the testing. Again, such real-time performance data 203 is preferably communicated from the testing platform 210 to the dynamic profiler 220 substantially as such data is generated by the testing platform 210 (except for temporary storage that may be performed for managing such communication). In certain embodiments, the real-time performance data 203 is streamed (i.e., communicated in a streaming fashion) from the testing platform 210 to the dynamic profiler 220. [0052] The software image 202 may be any software application, firmware, operating system, and/or other component that is software based. The realtime performance data 203 generated by the testing platform 210 may be detailed information pertaining to the operational efficiency of the software image 202 on the target hardware platform 201. 
The information may detail the functions being executed at various times and the corresponding number of wait cycles of the target hardware platform's CPU, corresponding cache hits and misses for the functions in the target hardware platform's cache, and other operational efficiency details. Such real-time performance data 203 may correspond to raw performance data commonly generated by a testing platform 210 (such as the commercially available testing platforms identified above), but is supplied in substantially real-time from the testing platform 210 to the dynamic profiler 220, rather than first being archived to a data storage 103. [0053] The dynamic profiler 220 receives the real-time performance data 203 and analyzes (block 204) the received performance data to evaluate theperformance of the software image 202 on the target hardware platform 201. Such dynamic profiler 220 may evaluate execution of the software image 202 (e.g., where the software image spent its time and which functions called which other functions, etc.), as well as how the target hardware platform 201 handled the various functions of the software image 202, including its cache utilization efficiency (e.g., cache hit/miss ratio, etc.) and CPU utilization efficiency (e.g., number of "wait" cycles, etc.), as examples. Thus, the dynamic profiler 220 may provide the user with information about the efficiency of the performance of the software image 202 on the target hardware platform 201. The user may choose to use the profiler information to modify the software image 202 in certain ways to improve its cache utilization efficiency, CPU utilization efficiency, and/or other operational efficiencies on the target hardware platform 201. As with conventional dynamic profilers, the dynamic profiler 220 may be implemented as computer-executable software code executing on a computer system, such as a personal computer (PC), laptop, workstation, mainframe, server, or other processor- based system. [0054] The dynamic profiler 220 may choose to archive certain portions of the received performance data to a data storage 205. For instance, based on its analysis in block 204, the dynamic profiler 220 may identify performance data that pertains to a potential performance problem that is of interest to a user, and the dynamic profiler 220 may archive only the identified performance data that pertains to the potential performance problem (rather than archiving all of the received performance data). In this way, the amount of performance data that is archived to the data storage 205 may be greatly reduced from the full amount of raw performance data generated by the testing platform 210. Further, as discussed below, the decision of what performance data to archive can be made based on analysis in block 204 of operational efficiency of the software image 202 on the target hardware platform 201, rather than requiring a user to restrict testing on the testing platform 210. Thus, according to this embodiment, the dynamic profiler 220 permits full testing of the software image 202 on the target hardware platform 201 to be conducted by the testing platform 210, and the dynamic profiler 220 is operable to receive and analyze the full raw performance data generated by the testing platform 210 to identify operational inefficiencies. Also, the dynamic profiler 220 can archive only portions of the raw performance data that are obtained fora window(s) of time (e.g., clock cycles) that encompass those identified operational inefficiencies. 
[0055] As discussed further below, in certain embodiments, the dynamic profiler 220 allows a user to define certain performance constraints, and when determined by the analysis in block 204 that the performance of the software image 202 on the target hardware platform 201 violates any of the defined performance constraints, the dynamic profiler 220 archives corresponding performance data pertaining to the performance constraint violation to the data storage 205. For instance, a user may define that upon a given performance constraint being determined by the analysis in block 204 as being violated, the dynamic profiler 220 is to archive performance data received for some user-defined window of time that encompasses the constraint violation. For example, a user may define that upon a given performance constraint being determined by the analysis in block 204 as being violated, the dynamic profiler 220 is to archive performance data received for some user-defined number (e.g., one million) of clock cycles leading up to the constraint violation as well as some user- defined number (e.g., one million) of clock cycles following the constraint violation. This feature allows unrestricted testing and profile analysis of the software image 202 on the target hardware platform 201, while restricting the archiving of raw performance data to only that raw performance data that is related to a portion of the testing in which some user-defined performance constraint is violated. Various illustrative examples of performance constraints that may be employed are provided further herein. [0056] FIGURE 3 is an exemplary block diagram illustrating application of a dynamic performance profiler 220 according to one embodiment in which a defined performance constraint is employed for determining raw performance data to be archived to the data storage 205. Various elements shown in the example of FIGURE 3 correspond to elements described above for FIGURE 2 and are thus numbered/labeled the same as in FIGURE 2. The additional elements 301-305 introduced in the exemplary embodiment of FIGURE 3 are described further below. [0057] In the exemplary embodiment of FIGURE 3, the dynamic profiler 220 allows a user to define certain performance constraints 301. For instance, as discussed further herein, the dynamic profiler 220 may provide a user interface with which a user may interact to define performance constraints. For example, in a real timesystem, it would be desirable to know when processing of a certain event occurs more than a particular number of cycles after detection of the event. [0058] Also, the dynamic profiler 220 allows a user to define, in block 302, an amount of performance data to archive when a given performance constraint violation is detected. For instance, a user may define that upon a given performance constraint being determined by the analysis in block 204 as being violated, the dynamic profiler 220 is to archive performance data received for some user-defined window of time that encompasses the constraint violation. For example, a user may define that upon a given performance constraint being determined by the analysis in block 204 as being violated, the dynamic profiler 220 is to archive performance data received for some user-defined number (e.g., one million) of clock cycles leading up to the constraint violation as well as some user-defined number (e.g., one million) of clock cycles following the constraint violation. 
Again, as discussed further herein, the dynamic profiler 220 may provide a user interface with which a user may interact to define the amount of performance data to archive for a given performance constraint violation. [0059] In block 204, the dynamic profiler 220 receives the real-time performance data 203 and analyzes such raw performance data. As part of the analysis in block 204, the dynamic profiler 220 determines, in block 304, whether a predefined performance constraint (defined in block 301) is violated. When such a violation is detected, then the predefined amount of performance data (defined in block 305) pertaining to the performance constraint violation detected is archived by the dynamic profiler 220 to the storage 205. The dynamic profiler 220 may be used thereafter by a user to analyze (in block 204) the archived performance data. For instance, the dynamic profiler 220 may output, in block 303, information detailing a performance analysis for such archived performance data. For example, in certain embodiments a graphical and/or textual output to a display may be generated to inform the user about the performance data observed during testing for portions of the testing that violated the user's pre-defined performance constraints. Illustrative examples of such output that may be presented in certain embodiments are provided further herein. [0060] Various testing platforms and profilers are known in the art for testing and evaluating performance of software images on a target hardware platform, which may be adapted for enabling communication of performance data from the testingplatform to the profiler in substantially real-time during testing in accordance with the embodiments disclosed herein. [0061] In one implementation, the testing platform 210 includes such a DSP simulator as the target hardware platform 201, which is operable to generate raw performance data for the execution of a software image 202 on the DSP. The tools further include a profiler, which will be referred to as Dynamic Prof. FIGURE 4 is a block diagram showing such an exemplary implementation in which the QDBX simulator 401 executes a software image (e.g., software image 202 of FIGURES 2-3) and generates corresponding raw performance data. As discussed above, the raw performance data is conventionally stored to data storage, e.g., as a program trace file 402, which can be retrieved for analysis by the Dynamic Prof 403. As illustrated by the dashed arrow in FIGURE 4, in certain embodiments, the generated raw performance data may be communicated in substantially real-time from the QDBX simulator 401 to the Dynamic Prof 403, rather than requiring the generated raw performance data for an entire testing session to be first archived. [0062] Thus, as discussed further herein, the Dynamic Prof 403 may be implemented as a dynamic profiler (such as the dynamic profiler 220 discussed above). In certain embodiments, the profiler can operate either in post-mortem mode (using a program trace file 402 generated by a completed simulation performed by the QDBX simulator 401) or real-time mode (using live data generated by a running simulation of the QDBX simulator 401). In addition, in the real-time mode execution (or "performance") constraints are supported, which may be used to limit the amount of profile data archived for a simulation. [0063] In certain embodiments, the dynamic profiler supports three modes of operation: 1) post-mortem mode, 2) real-time mode, and 3) constraint violation mode. 
In the post-mortem mode, the dynamic profiler uses an archived trace file (containing raw performance data) generated by a completed testing session on the testing platform (e.g., a completed simulation) for performing its analysis (e.g., the analysis of block 204 of FIGURE 2). Thus, such post-mortem mode of operation employs the conventional profiling technique discussed generally above with FIGURE 1. According to one embodiment, the post-mortem mode supports complete system traces which can be accessed repeatedly without having to re -run the testing/simulation on the testing platform, and can display any point in the testing time. However, longtesting/simulations on the testing platform can generate arbitrarily large trace files which either load too slowly or (if they exceed system memory) cannot be loaded. [0064] In the real-time mode, the dynamic profiler uses raw performance data generated by a running testing platform (e.g., a running QDBX simulation), and the dynamic profiler may log at least portions of the execution history and/or information derived from the received raw performance data in a trace file. In one embodiment, the real-time mode supports arbitrarily long testing/simulations, but can display (and save) only partial system traces (i.e., raw performance data generated by the testing platform). In certain implementations, partial traces are saved in "zip" format to minimize the trace file size, and the maximum trace file length is user- specifiable. Partial trace files are accessible in the dynamic profiler via the conventional post-mortem mode. [0065] The constraint-violation mode is really a sub-set of the real-time mode. In other words, it works like the real-time mode, but the dynamic profiler is configured to log only performance data for specified performance constraint violations detected in the profiler's analysis of the received raw performance data. Such constraint violation mode may be used to analyze long testing/simulations for limiting the amount of raw performance data that is archive to instances where the raw performance data violates a set of predefined constraints. The resulting raw performance data (or "trace file") that is archived can be later accessed using the post-mortem mode of the profiler. [0066] FIGURE 5 is a screenshot illustrating a portion of an exemplary user interface presented to a user by the profiler 403 according to one embodiment, which enables a user to choose to attach the profiler 403 to the QDBX simulator 401 for receipt of raw performance data generated by the QDBX simulator 401 in substantially real-time. In this exemplary interface, an option 501 to Open Trace File can be selected by a user (e.g., by clicking a pointing device, such as a mouse on the option), which enables a user to choose to open a program trace file such as program trace file 402 that has been generated and archived from prior testing, as in conventional profiling techniques. In other words, the option 501 enables the user to select to run the profiler in the above-mentioned post-mortem mode. [0067] Alternatively, an option 502 to Attach to QDBX Simulation can be selected by a user (e.g., by clicking a pointing device, such as a mouse on the option), which results in the profiler 403 setting up a communication channel with the QDBX simulator 401 for receiving generated raw performance data in substantiallyreal-time (e.g., via the dashed line shown in FIGURE 4). 
In other words, the option 502 enables the user to select to run the profiler in the above-mentioned real-time mode. [0068] As another alternative, an option 503 to Attach With Constraints can be selected by a user (e.g., by clicking a pointing device, such as a mouse on the option), which not only results in the profiler 403 setting up a communication channel with the QDBX simulator 401 for receiving generated raw performance data in substantially real-time (e.g., via the dashed line shown in FIGURE 4) but also allows performance constraints to be defined (as discussed above in block 301 of FIGURE 3) for use by the profiler 403 in identifying portions of the received raw performance data to be archived to data storage. In other words, the option 503 enables the user to select to run the profiler in the above-mentioned constraint-violation mode. [0069] The option 502 may be selected by a user to place the profiler into a real-time mode for use in analyzing a running test/simulation on the testing platform, such as a running simulation on the QDBX simulator 401. For instance, for the exemplary Dynamic Prof example of FIGURE 4, in the real-time mode the Dynamic Prof profiler 403 connects to a running simulation (on the QDBX simulator 401) using a User Datagram Protocol (UDP) socket interface. The Dynamic Prof profiler 403 may output to a display user-interface window(s) (as in block 303 of FIGURE 3), which displays information that is updated continuously based on trace information (or "raw performance data") generated by the QDBX simulator 401. The trace information may be saved by the Dynamic Prof profiler 403 in a trace file for later analysis in post-mortem mode. [0070] In one embodiment, in response to a user choosing the real-time mode of operation (by selecting the option 502 of FIGURE 5), a dialog box 600 as shown in FIGURE 6 is presented to a display by the profiler, which allows a user to input an archive file name (in the input box 601) and history limit (in the input box 602). The archive file name specifies the name of a trace file that the profiler creates and writes the real-time trace information to. In one embodiment, the archive file is written in "zip" format to conserve disk space. Archive files can later be opened as trace files and analyzed in post-mortem mode of the profiler. A browse button 603 can be used to browse existing files and directories before creating an archive file. [0071] The history limit (input to the box 602) restricts how much trace information (or "raw performance data") is written to the archive file. For example,given a history limit X, only the X most recent cycles of trace information are saved in the archive file. [0072] After the user specifies the archive file name and history limit, the user may click on the Connect button 604 to ready the profiler for operation in realtime mode. The user may then initiate execution of a software image (e.g., the software image 202 of FIGURE 2) on a target hardware platform (e.g., the target hardware platform 201 of FIGURE 2) on a testing platform, and the profiler receives generated raw performance data in substantially real-time, as generated by the testing platform during execution of the software image on the target hardware platform. For instance, in the exemplary embodiment of FIGURE 4, a user may execute a software image on the QDBX simulator 401 with the following commands: 1. 
load - this command triggers QDBX to read the executable file containing the DSP firmware instructions along with related data; 2. trace log socket - this command informs QDBX that it should send logging/profiling information over a socket (as opposed to a log file); 3. trace socket open - this command causes QDBX to "listen" for UDP socket connections; this is employed so that QDBX is ready for the dynamic profiler to connect to it; 4. trace execution on - this command triggers "streaming" of logging/profiling information from QDBX over the socket; 5. continue - QDBX continues execution of the instructions of the executable file. [0073] The profiler then proceeds to display the trace information received from the testing platform (e.g., from the QDBX simulator 401) and generate a trace file containing trace information for the last X cycles, as defined in the box 602 of FIGURE 6. When program execution is completed, in the exemplary embodiment of FIGURE 4, the user may use the command trace socket close on the QDBX simulator 401 to close the socket connection between the simulator 401 and the profiler 403. [0074] A user may choose to place the profiler into the constraint violation mode, by selecting the option 503 of FIGURE 5. According to one embodiment, before using the profiler in constraint violation mode, a user first creates a text file that specifies the desired performance constraints. In certain embodiments, each constraint in the file is specified by the following text format:Start: process: event End: process: event MaxCycles: limit In the above, "process" specifies the kernel or process in which an event occurs. The kernel is specified with the literal value kernel, while processes are specified by their process name. "Event" specifies a kernel- or process-specific event. "Limit" specifies the maximum cycles allowed between the occurrences of the start and end events. The following is an illustrative example of one possible constraint violation file that may be employed: Start: ADECTASK: Execute process End: AENCTASK: Execute process MaxCycles: 200000 Start: AFETASK: afe cmd ccfg End: AFETASK: sys start timer MaxCycles: 2000 [0075] In the above example, ADECTASK, AENCTASK and AFETASK are user-defined tasks in the executable file loaded into QDBX (using the "load" command). The first constraint specifies that there should be a maximum of 200000 cycles between when ADECTASK starts execution and when AENCTASK starts execution. The second constraint specifies that there should be a maximum of 2000 cycles between the start of execution of the function afe cmd ccfg and the start of execution of the function sys start timer in the AFETASK task. [0076] In addition to a user manually-creating constraint files, the dynamic profiler may have certain pre-defined constraint files that are available for selection by the user. [0077] In certain embodiments, the profiler allows users to specify time limits between arbitrary system events and to list all violations of the limits. Time limits may be specified (and time limit violations listed) in a constraint window presented by the profiler to a display, such as the exemplary constraint window 700 of FIGURE 7. In the exemplary constraint window 700, there is a top pane 701, which lists the current execution (or "performance") constraints that are active for a running test/simulation. 
A bottom pane 702 can be used to perform the following tasks:Edit individual execution constraints; List the constraint violations for the selected constraint; and Save or load the current constraints to a file. [0078] In this example, execution constraints are specified in the Edit Constraint tab 703 of the constraint window 700. In this example, an execution constraint contains the following items: a) Start event 704 and end event 705 (which can be any of the following): i) A call to a specific kernel function within the kernel; ii) A call to a specific kernel or process function within a specific process; or iii) When a specific process begins executing; and b) Time limit 706 (maximum cycles allowed between the occurrence of the start and end events). [0079] To create a new constraint, the user enters values for these items and clicks on the Add button 707. To modify an existing constraint, a user can select it in the top pane 701 of the constraint window 700 (none are listed in the example of FIGURE 7), and then change one or more values presented (in the lower pane 702) for the selected constraint. [0080] In one embodiment, the profiler automatically searches the trace file for any violations of the selected constraint. To view the constraint violations for a given constraint, a user can click on its entry in the top pane 701 of the constraint window 700 and then click on the Constraint Violations tab 708 in the bottom pane 702, which may present a listing of constraint violations such as that shown in FIGURE 8. As shown in the example of FIGURE 8, in certain embodiments, the following information is listed for each violation occurrence: a) The starting and ending cycle of the violation; and b) The number of cycles between the start and end events. [0081] In one embodiment, selecting a violation in the bottom pane of the constraint window 700 causes the profiler to mark the position of the violation in the history window. For instance, a vertical line (which may be colored brown) may be drawn at the start cycle, and a vertical line (which may be colored red) may be drawn at the end cycle, in the graphical execution history window presented by the dynamic profiler.[0082] In one embodiment, the dynamic profiler allows a user to view the original performance constraints and constraint violations generated by the constraint violation mode in a constraint violation window, such as the exemplary constraint violation window 900 of FIGURE 9. The constraint violation window 900 has an upper pane 901 and a bottom pane 902. The upper pane 901 lists performance constraints that are/were pre-defined for a given testing/simulation, and the bottom pane 902 shows the violations of a performance constraint selected in the upper pane 901. For instance, in the illustrated example of FIGURE 9, a first performance constraint 903 is selected in the upper pane 901, and the corresponding violations 904 of that selected constraint that were detected during testing/simulation are shown in the bottom pane 902. [0083] In accordance with certain embodiments, the dynamic profiler may present various profile and/or debug information to the user. For instance, various information pertaining to CPU utilization, cache utilization, etc. by the software-based image on the target hardware platform during the testing may be presented to the user. 
As one example, in certain embodiments, an execution history window can be presented by the dynamic profiler, as is conventionally presented by profilers, e.g., to display a graphical indication of which functions executed for how long, etc. Such execution history window may present data as it is received in substantially real-time, or the execution history window may be employed to display history data captured for a constraint violation, as examples. Of course, the execution history window may also be employed in a conventional post mortem mode when so desired. Various other information that may be presented are briefly described below. [0084] In one embodiment, the dynamic profiler is operable to display a pie chart of the CPU usage in a CPU usage profile window, such as shown in the exemplary CPU usage profile window 150 of FIGURE 10. The CPU usage (or "execution") profile window 150 shows the relative amounts of CPU usage by the kernel and processes, and labels each one with the number of cycles executed by the kernel or process and the corresponding percentage of CPU usage observed during the testing. In one embodiment, when a user moves a cursor over a segment of the pie chart, a popup note showing the number of cycles executed by the corresponding kernel or process may be generated and presented to the user.[0085] In one embodiment, a user can limit the display of cache event information to specific processes, caches, or event types, such in the exemplary cache profiling information window 1100 shown in FIGURES 1 IA-11C. To bring up the cache profiling information window 1100 in one embodiment, the user can select "Cache Profiling Information" from a "View menu" option presented by the user interface of the dynamic profiler. [0086] The processes tab 1101 is shown as selected in FIGURE HA, which enables a user to select one or more of the various processes for which their respective cache usage/event information is to be presented. Clearing a checkbox hides the cache events for the corresponding process; setting a checkbox displays the processes' cache events. [0087] Similarly, a user can choose to filter events by cache, using the caches tab 1102, such as shown in FIGURE HB. Thus, when the caches tab 1102 is selected, the user can select one or more of the various caches for which their respective usage/event information is to be presented. [0088] Similarly, a user can choose to filter for specific event types by selecting the events tab 1103, such as shown in FIGURE 11C. Thus, when the events tab 1103 is selected, the user can select one or more of the various event types for which their respective cache usage/event information is to be presented. [0089] In either case, after the user clicks the OK button, all cache profiling windows presented by the dynamic profiler may update to show only the information specified. [0090] In one embodiment, the dynamic profiler is operable to display the cache memory address events across time in a cache address history window, such as the exemplary cache address history window 1200 shown in FIGURE 12. Cache address information is used to determine if certain memory locations are not being efficiently accessed through the cache. Displayed cache events may be color-coded by event type. The hex values in the cache type lines indicate the address where the cache hit or miss occurred. 
By clicking on the graphical information presented, in certain embodiments the user is allowed to zoom in to receive a more detailed view of a selected portion of the information. [0091] In one embodiment, the dynamic profiler is operable to display the cache line events across time in a cache line history window, such as the exemplarycache line history window 1300 shown in FIGURE 13. Cache line history is used to determine how efficiently the cache is being used. Displayed cache events may be color-coded by event type. The hex values in the cache type lines indicate the line where the cache hit or miss occurred. By clicking on the graphical information presented, in certain embodiments the user is allowed to zoom in to receive a more detailed view of a selected portion of the information. [0092] In one embodiment, the dynamic profiler is operable to display a histogram of cache line events in a cache histogram window, such as the exemplary cache histogram window 1400 shown in FIGURE 14. The horizontal axis in this example indicates the cache line number, and the vertical axis indicates the number of cache events in a particular cache line. [0093] In one embodiment, the dynamic profiler is operable to display cache event counts by function in a cache use by function window, such as the exemplary cache use by function window 1500 shown in FIGURE 15 A. The user can select an event type from the Cache Event pull-down menu 1501 and may select the cache line from the Cache Line pull-down menu 1502. By selecting the display button 1503, information detailing the observed cache usage by the selected function, as defined in window 1500, is presented by the dynamic profiler. For instance, in this example, responsive to a user selecting the display button 1503, the lower pane of the window displays the number of cache hits and misses in each function that uses the cache line, such as shown in the exemplary output 1504 of FIGURE 15B. In certain embodiments, if the user selects a line in the window pane, the dynamic profiler's history window highlights all the occurrences of the cache event with a vertical black bar. [0094] In one embodiment, the dynamic profiler is operable to display cache event counts over a given cycle range in a cache summary window, such as the exemplary cache summary window 1600 shown in FIGURE 16A. Another example of such cache summary window 1600 is shown in FIGURE 16B (with a different clock cycle range selected than in FIGURE 16A). In the examples of FIGURES 16A-16B, the user can set the start and end cycles for the range the user wants to examine. The lower pane of the window 1600 displays the cache event's percentages of hits versus misses (or misses versus hits) within the cycle range.[0095] Certain embodiments enable an analysis of variable use by cache block. That is, cache use of individual variables by the software-based image under test can be analyzed. FIGURES 17A-17B show one exemplary user interface of the dynamic profiler for such variable use by cache block. In this embodiment, the Variable Use by Cache Block item 1701 (of the user interface window of FIGURE 17A) may be selected to cause the dynamic profiler to display the total access of variable per cache block. 
Then, the user is presented with the exemplary window 1702 of FIGURE 17B, in which the user may choose the cache event (via drop down menu 1703) and cache block (via drop down menu 1704), and then click the display button 1705 to display all variables access details, such as shown in the exemplary output 1706 in the lower pane of the window of FIGURE 17B. [0096] Presentation of information by a profiler may be performed, in certain embodiments, irrespective of whether the dynamic profiler is operating in post mortem mode or in real-time or constraint violation modes. [0097] FIGURE 18 shows an operational flow diagram according to one embodiment. In block 1801, a testing platform (e.g., the testing platform 210 of FIGURES 2-3) generates raw performance data for a software -based image (e.g., the software image 202 of FIGURES 2-3) executing on a target hardware platform (e.g., the target hardware platform 201 of FIGURES 2-3). In block 1802, a dynamic profiler (e.g., the dynamic profiler 220 of FIGURES 2-3) receives the raw performance data in substantially real-time, as it is generated by the testing platform (e.g., during an ongoing test execution of the software -based image on the target hardware platform). [0098] In block 1803, the dynamic profiler determines, based at least in part on analysis of the received raw performance data, a portion of the received raw performance data to archive. For instance, in certain embodiments, as indicated in the optional dashed block 1804, the dynamic profiler determines whether the received raw performance data indicates a violation of a pre-defined performance constraint. [0099] In block 1805, the determined portion of the received raw performance data is archived to data storage (e.g., to hard disk, magnetic disk, optical disk, or other suitable digital data storage device). In certain embodiments, as indicated in the optional dashed block 1806, responsive to determining that the received raw performance data indicates violation of a pre-defined performance constraint, a corresponding portion of the received raw performance is archived. The portionencompasses the received raw performance data that indicated violation of the performance constraint. As discussed above, in certain embodiments a user defines an amount of performance data that is to be archived for a detected performance constraint violation (e.g., in the input box 706 of FIGURE 7). [00100] Embodiments of a dynamic profiler as described above, or portions thereof, may be embodied in program or code segments operable upon a processor-based system (e.g., computer system) for performing functions and operations as described herein. The program or code segments making up the various embodiments may be stored in a computer-readable medium, which may comprise any suitable medium for temporarily or permanently storing such code. Examples of the computer-readable medium include such physical computer-readable media as an electronic memory circuit, a semiconductor memory device, random access memory (RAM), read only memory (ROM), erasable ROM (EROM), flash memory, a magnetic storage device (e.g., floppy diskette), optical storage device (e.g., compact disk (CD), digital versatile disk (DVD), etc.), a hard disk, and the like. [00101] FIGURE 19 illustrates an exemplary computer system 1900 on which embodiments of a dynamic profiler may be implemented. A central processing unit (CPU) 1901 is coupled to a system bus 1902. The CPU 1901 may be any general- purpose CPU. 
The dynamic profiler is not restricted by the architecture of the CPU 1901 (or other components of the exemplary system 1900) as long as the CPU 1901 (and other components of the system 1900) supports the operations as described herein. The CPU 1901 may execute the various logical instructions according to embodiments. For example, the CPU 1901 may execute machine-level instructions for performing processing according to the exemplary operational flows of a dynamic profiler as described above in conjunction with FIGURES 2-3 and 18. [00102] The computer system 1900 also preferably includes random access memory (RAM) 1903, which may be SRAM, DRAM, SDRAM, or the like. The computer system 1900 preferably includes read-only memory (ROM) 1904 which may be PROM, EPROM, EEPROM, or the like. RAM 1903 and ROM 1904 hold user and system data and programs, as is well known in the art. [00103] The computer system 1900 also preferably includes an input/output (I/O) adapter 1905, a communications adapter 1911, a user interface adapter 1908, and a display adapter 1909. The I/O adapter 1905, the user interfaceadapter 1908, and/or the communications adapter 1911 may, in certain embodiments, enable a user to interact with the computer system 1900 in order to input information to the dynamic profiler, such as inputs discussed with the above-described exemplary user interface windows. [00104] The I/O adapter 1905 preferably connects to the storage device(s) 1906, such as one or more of hard drive, compact disc (CD) drive, floppy disk drive, tape drive, etc. to the computer system 1900. The storage devices may be utilized when the RAM 1903 is insufficient for the memory requirements associated with storing data for operations of the dynamic profiler. The data storage of the computer system 1900 may be used for archiving at least portions of received raw performance data by the dynamic profiler, as discussed above (e.g., as the storage 205 in FIGURES 2-3). The communications adapter 1911 is preferably adapted to couple the computer system 1900 to a network 1912, which may enable information to be input to and/or output from the system 700 via such network 1912 (e.g., the Internet or other wide-area network, a local-area network, a public or private switched telephony network, a wireless network, any combination of the foregoing). The user interface adapter 1908 couples user input devices, such as a keyboard 1913, a pointing device 1907, and a microphone 1914 and/or output devices, such as speaker(s) 1915 to the computer system 1900. A display adapter 1909 is driven by the CPU 1901 to control the display on the display device 1910 to, for example, display output information from the dynamic profiler, such as the exemplary output windows discussed above. [00105] It shall be appreciated that the dynamic profiler is not limited to the architecture of the system 1900. For example, any suitable processor-based device may be utilized for implementing all or a portion of embodiments of the dynamic profiler, including without limitation personal computers, laptop computers, computer workstations, and multi-processor servers. Moreover, embodiments of the dynamic profiler may be implemented on application specific integrated circuits (ASICs) or very large scale integrated (VLSI) circuits. In fact, persons of ordinary skill in the art may utilize any number of suitable structures capable of executing logical operations according to the embodiments of the dynamic profiler. 
[00106] Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of theinvention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps. |
To provide a method of forming a magnetic electrode of a magnetic tunnel junction.SOLUTION: A method is for forming a non-magnetic MgO containing material 16 onto a conductive material 12 of a magnetic electrode formed. An amorphous metal 18 is formed onto the MgO containing material. The amorphous metal contains at least one of Mo and Cr, and an alloy of at least one of Fe, Co, and Ni or an alloy of Al and Ni. An amorphous magnetic electrode material 20 containing Co and Fe is formed onto the amorphous metal. The amorphous magnetic electrode material does not contain B. A non-magnetic tunnel insulation material 22 containing Mgo is formed by being directly contacted to the amorphous magnetic electrode material. The tunnel insulation material not contain B. After the formation of the tunnel insulation material, the non-magnetic electrode material is annealed at a temperature of at least about 250 degrees, and a crystalline magnetic electrode material containing Co and Fe is formed from a front surface containing MgO of the tunnel insulation material. The crystalline magnetic electrode containing Co and Fe does not contain B.SELECTED DRAWING: Figure 1 |
A method of forming a magnetic electrode of a magnetic tunnel junction, comprising: forming a nonmagnetic MgO-containing material on the conductive material of the magnetic electrode to be formed; and forming an amorphous metal on the MgO-containing material. Forming the amorphous metal is a) an alloy of at least one of Mo and Cr with at least one of Fe, Co, and Ni; or b) an alloy of Al and Ni. Forming an amorphous magnetic electrode material containing Co and Fe on the amorphous metal, wherein the amorphous magnetic electrode material does not contain B, Directly contacting the amorphous magnetic electrode material to form a non-magnetic tunnel insulator material comprising MgO, wherein the tunnel insulator material does not comprise B, and the tunnel insulator material At least after formation Annealing the amorphous magnetic electrode material at a temperature of 250 ° C. to form a crystalline magnetic electrode material comprising Co and Fe from the MgO-containing surface of the tunnel insulator material, the Co and Fe comprising The crystalline magnetic electrode material comprising is free of B.Forming the material containing Co, Fe, and B on the conductive material, and forming the MgO-containing material on the material containing Co, Fe, and B. The method according to Item 1.The method of claim 1, wherein the amorphous magnetic electrode material is formed at a temperature of 0 ° C to about 30 ° C.The method of claim 1, wherein the amorphous magnetic electrode material is formed at a temperature of about −250 ° C. to less than 0 ° C. 6.5. The method of claim 4, wherein the amorphous magnetic electrode material is formed at a temperature of about -250 <0> C to about -20 <0> C.The method according to claim 1, wherein the amorphous metal comprises an alloy of Al and Ni.The method according to claim 1, wherein the amorphous metal comprises an alloy of Mo and at least one of Fe, Co, and Ni.The method according to claim 1, wherein the amorphous metal comprises an alloy of Cr and at least one of Fe, Co, and Ni.The method according to claim 1, wherein the amorphous metal comprises an alloy containing Fe and at least one of Mo and Cr.The method according to claim 1, wherein the amorphous metal comprises an alloy containing at least one of Mo and Cr and Co.The method according to claim 1, wherein the amorphous metal comprises an alloy containing at least one of Mo and Cr and Ni.The method of claim 1, wherein the amorphous metal comprises an alloy comprising Mo and Cr.A magnetic tunnel junction incorporating the magnetic electrode produced using the method of claim 1.A method of forming a magnetic electrode of a magnetic tunnel junction, comprising forming an amorphous metal on a substrate, the amorphous metal comprising: a) at least one of Mo and Cr; B. an alloy with at least one of Co, and Ni, or b) an alloy of Al and Ni, and on the amorphous metal at a temperature of about -250 ° C. to about 30 ° C., Forming an amorphous magnetic electrode material comprising Co and Fe, wherein the amorphous magnetic electrode material is free of B, and in direct contact with the amorphous magnetic electrode material to form MgO. Forming a nonmagnetic tunnel insulator material, wherein the tunnel insulator material does not include B, and, after forming the tunnel insulator material, the amorphous material at a temperature of at least about 250 ° C. 
Anneal the magnetic electrode material before Forming a crystalline magnetic electrode material comprising Co and Fe from the MgO-containing surface of the tunnel insulator material, wherein said crystalline magnetic electrode material comprising Co and Fe does not comprise B. .15. The method of claim 14, wherein the amorphous metal comprises an alloy of Al and Ni.15. The method of claim 14, wherein the amorphous metal comprises an alloy of Mo and at least one of Fe, Co, and Ni.15. The method of claim 14, wherein the amorphous metal comprises an alloy of Cr and at least one of Fe, Co, and Ni. |
Method of forming a magnetic electrode of a magnetic tunnel junction and method of forming a magnetic tunnel junctionEmbodiments disclosed herein relate to a magnetic tunnel junction, a method of forming a magnetic electrode of the magnetic tunnel junction, and a method of forming a magnetic tunnel junction.A magnetic tunnel junction is an integrated circuit component having two conductive magnetic electrodes separated by a thin nonmagnetic tunnel insulator material (e.g., a dielectric material). The insulator material is sufficiently thin so that electrons can tunnel from one magnetic electrode to another through the insulator material under appropriate conditions. At least one of the magnetic electrodes can switch its overall magnetization direction between two states at normal operating write current / voltage or operating erase current / voltage, "free" electrode or "recording" It is generally called an electrode. Other magnetic electrodes are commonly referred to as "reference", "fixed" or "pinned" electrodes, and their overall magnetization orientation will not switch with the application of normal operating write current / voltage or erase current / voltage . The reference electrode and the recording electrode are electrically coupled to the respective conductive nodes. The resistance of the current between two nodes through the reference electrode, the insulator material and the recording electrode depends on the overall magnetization direction of the recording electrode relative to the overall magnetization direction of the reference electrode. Thus, the magnetic tunnel junction can be programmed to one of at least two states, which can be detected by measuring the current through the magnetic tunnel junction. Magnetic tunnel junctions have been proposed for use in memory integrated circuits because they can be "programmed" between two current conduction states. Thus, magnetic tunnel junctions can be used in logic circuits or other circuits separately or in addition to memory.The overall magnetization direction of the recording electrode can be switched by a current induced external magnetic field or by using a spin polarization current that results in a spin transfer torque (STT) effect. Charge carriers (such as electrons) have a property known as "spin", which is a small amount of angular momentum inherent to the carrier. The current is usually unpolarized (with about 50% "spin up" electrons and about 50% "spin down" electrons). The spin polarization current is a current having a very large number of electrons of either spin. A spin polarization current can be generated by passing an electrical current through a magnetic material (sometimes called a polarizer material). When a spin-polarized current is directed into a magnetic material, spin angular momentum may be transferred to the material, thereby affecting its magnetization orientation. This can also be used to excite oscillations or to reverse (i.e., switch) the orientation / domain orientation of the magnetic material if the spin polarization current is of sufficient magnitude.Alloys of Co and Fe or other mixtures are one of the common materials proposed for use as a polarizer material and / or as at least part of the magnetic recording material of the recording electrode in a magnetic tunnel junction. One more specific example is CoxFeyBz, where x and y are each 10-80, z is 0-50, and may be abbreviated as CoFe or CoFeB. MgO is an ideal material as a nonmagnetic tunnel insulator. 
Ideally, such materials are each crystalline with a body-centered cubic (bcc) 001 lattice. Such materials can be deposited, for example, by physical vapor deposition, using any suitable technique. One useful technique to ultimately form the bcc 001 lattice in such materials involves first forming CoFe to be amorphous, on which the MgO-containing tunnel insulator material is deposited Be done. During and / or after deposition, MgO tunnel insulator, CoFe, and tunnel insulator ideally achieve a uniform bcc 001 lattice structure individually.Boron is usually deposited as part of CoFe, ensuring or causing the first amorphous deposition of CoFe. The crystallization of CoFe can occur during or after the deposition of MgO by annealing the substrate at a temperature of at least about 350 ° C. This induces the diffusion of B atoms out of the CoFe matrix to be formed, which allows the crystallization of bcc 001 CoFe. bcc 001 MgO functions as a template during the crystallization of CoFe. However, B in the finished magnetic tunnel junction structure unnecessarily reduces the tunneling magnetoresistance (TMR) of the magnetic tunnel junction, specifically at the CoFe / MgO interface or within the MgO lattice.One aspect of the present invention is a method of forming a magnetic electrode of a magnetic tunnel junction, comprising forming a nonmagnetic MgO-containing material on the conductive material of the magnetic electrode to be formed; Forming an amorphous metal thereon, wherein the amorphous metal is a) an alloy of at least one of Mo and Cr with at least one of Fe, Co, and Ni, or b) containing an alloy of Al and Ni, and forming an amorphous magnetic electrode material containing Co and Fe on the amorphous metal, wherein the amorphous magnetic electrode material is B And b. Forming a nonmagnetic tunnel insulator material comprising MgO in direct contact with the amorphous magnetic electrode material, wherein the tunnel insulator material is free of B. Forming the tunnel insulator material Annealing the amorphous magnetic electrode material at a temperature of at least about 250 ° C. to form a crystalline magnetic electrode material comprising Co and Fe from the MgO-containing surface of the tunnel insulator material, Said crystalline magnetic electrode material containing Co and Fe does not contain B.Another aspect of the invention is a magnetic tunnel junction incorporating the magnetic electrode produced using the method according to the above aspect of the invention.Yet another aspect of the invention is a method of forming a magnetic electrode of a magnetic tunnel junction, comprising forming an amorphous metal on a substrate, the amorphous metal comprising a) Mo and An alloy of at least one of Cr and at least one of Fe, Co, and Ni, or b) an alloy of Al and Ni, and at a temperature of about -250 ° C. to about 30 ° C. Forming an amorphous magnetic electrode material containing Co and Fe on the amorphous metal, wherein the amorphous magnetic electrode material does not contain B, and the amorphous magnetic electrode Directly contacting the material to form a nonmagnetic tunnel insulator material comprising MgO, wherein the tunnel insulator material is free of B, and after forming the tunnel insulator material at least about The amorphous magnetic electrode at a temperature of 250.degree. 
Annealing the material to form a crystalline magnetic electrode material comprising Co and Fe from the MgO-containing surface of said tunnel insulator material, wherein said crystalline magnetic electrode material comprising Co and Fe is free of B ,, And.It is a schematic sectional drawing of a board | substrate fragment. It is a schematic sectional drawing of a board | substrate cross section. FIG. 7 is a schematic cross-sectional view of a substrate fragment during processing in the fabrication of a magnetic tunnel junction, according to one embodiment of the present invention. FIG. 4 is a view of the substrate fragment of FIG. 3 in a processing step after the processing step illustrated by FIG. 3; 5 is a view of the substrate fragment of FIG. 4 in a processing step after the processing step illustrated by FIG. 4;Embodiments of the present invention include methods of forming magnetic electrodes of magnetic tunnel junctions and methods of forming magnetic tunnel junctions. Thus, embodiments of the present invention include magnetic tunnel junctions regardless of the method of fabrication. An exemplary method according to some embodiments of the present invention is first described for a substrate piece 10 with reference to FIG. 1, which may comprise a semiconductor substrate. In the context of this document, the terms "semiconductor substrate" or "semiconductive substrate" are not limited, but semiconductive wafers (either alone or an assembly containing other materials thereon) It is defined to mean any structure comprising a semiconductive material, including bulk semiconductive materials such as and semiconductive material layers (either alone or an assembly comprising other materials). The term "substrate" refers to any support structure including, but not limited to, the semiconductive substrates described above. The substrate fragment 10 includes a base or a substrate 11 on which various materials are formed as a stack in the height direction. The material can be offset from the material shown in FIG. 1, internal in the height direction, or external to the height direction. For example, other components of the integrated circuit, partially or wholly manufactured, may be provided around or anywhere within the piece 10. The substrate 11 may be a conductive (multiple) material (i.e. electrically conductive herein), a semiconductive material, or an insulating / insulator (i.e. electrically conductive herein) material Can include any one or more of Regardless, any of the materials, regions and structures described herein may or may not be homogeneous, and regardless of which of the materials they overlie. It may be continuous or discontinuous. Furthermore, unless indicated otherwise, each material can be formed using any suitable or untapped technique, examples of which include atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth , Diffusion doping and ion implantation.The conductive material 12 of the magnetic (i.e., here ferrimagnetic or ferromagnetic) electrode material to be formed is formed on the substrate 11. Any conductive material can be used, such as one or more elemental metals, alloys of two or more elemental metals, semiconductive materials and conductive metal compounds that are doped to be conductive. In one embodiment, the conductive material 12 is not magnetic. One specific example of material 12 is elemental tantalum. An exemplary maximum thickness for conductive material 12 is about 5 angstroms to about 500 angstroms. 
In this document, "thickness" is defined by itself (without an adjective indicating the preceding direction), either directly from adjacent materials of different composition, or vertically from the closest surface of the directly adjacent area. Defined as the average linear distance through the material or area. Furthermore, the various materials or regions described herein may be of substantially constant thickness or of varying thickness. In the case of various thicknesses, thickness refers to the average thickness unless otherwise indicated. As used herein, what is needed for "different compositions" is, for example, the case where the two mentioned materials or areas that can be in direct contact with each other are not homogeneous, Only the relevant parts differ chemically and / or physically. If the two mentioned materials or regions are not in direct contact with each other, what is required for the "different composition" is that the two mentioned materials or regions closest to each other are not homogeneous In some cases, it is only that the relevant parts of the material or area differ chemically and / or physically. In the present document, a material, region or structure "directly contacts another material, region or structure if there is at least some physical contact of the materials, regions or structures mentioned with respect to one another" "Directly against". In contrast, “directly” do not precede “over”, “on”, and “against” with “directly in contact” By interposing a plurality of) intermediate materials, intermediate regions or intermediate structures, there are cases where there is no physical contact of the mentioned materials, regions or structures with one another.A material 14 comprising Co, Fe and B is formed on the conductive material 12. In one embodiment, material 14 comprises an alloy of Co and Fe, with amorphous Co40Fe40B20 being one example. As used herein, when characterizing a material or region as "amorphous", at least 90% of the volume of the material mentioned needs to be amorphous. An exemplary maximum thickness for the material 14 when used is about 2 angstroms to about 6 angstroms.Nonmagnetic MgO-containing material 16 is formed on conductive material 12 (regardless of the presence of material 14). The material 16 may comprise, consist essentially of, or consist of MgO. An exemplary maximum thickness for the MgO-containing material 16 is about 3 angstroms to about 10 angstroms. The purpose of including material 14 is to facilitate the formation of bcc 001 MgO during its deposition. The purpose of including material 16 is to facilitate the perpendicular magnetic anisotropy in the magnetic material of the conductive magnetic electrode to be formed, which is a desirable operational characteristic of some magnetic tunnel junctions .Amorphous metal 18 is formed on MgO-containing material 16 and, in one embodiment, is formed in direct contact with MgO-containing material 16 as shown. In one embodiment, the amorphous metal 18 comprises an alloy of transition metals, and in one embodiment, is substantially comprised of an alloy of transition metals, or comprised of an alloy of transition metals. In one embodiment, the amorphous metal 18 comprises an alloy comprising Fe, Co and another transition metal. In one embodiment, the amorphous metal 18 comprises an alloy of at least one of Hf, Zr, W, Mo, Al, Cr and Ta and at least one of Fe, Co and Ni. 
In one embodiment, the amorphous metal 18 comprises a W alloy, eg, an alloy of W and any one or more of Fe, Co and Ni. In one embodiment, the amorphous metal 18 has a maximum thickness of about 3 angstroms to about 5 angstroms.An amorphous magnetic electrode material 20 comprising Co and Fe is formed on the amorphous metal 18, in one embodiment in direct contact with the amorphous metal 18. The amorphous magnetic electrode material 20 does not contain B. As used herein, "does not include B" means 0 atomic percent B to 0.1 atomic percent B or less. When referring to "having magnetism" herein, it is not necessary to have magnetism when the mentioned magnetic material or region is initially formed, but in the completed circuit structure of the magnetic tunnel junction it is mentioned It is necessary that certain parts of the magnetic material or area that are functionally "magnetically". In one embodiment, Co and Fe of amorphous magnetic electrode material 20 are formed in direct contact with amorphous metal 18. In one embodiment, the amorphous magnetic electrode material 20 is formed at a temperature of 0 ° C. to about 30 ° C., and in one such embodiment, at a temperature of at least about 20 ° C. In one embodiment, the amorphous magnetic electrode material 20 is formed at a temperature of about -250 ° C to less than 0 ° C, and in one such embodiment, at a temperature of about -250 ° C to about -20 ° C. It is formed. The formation of the electrode material 20 at less than 30 ° C., ideally less than 0 ° C., results in the formation of the electrode material 20 as amorphous when the electrode material 20 does not contain B and the amorphous metal 18 is present. Make it easy. An exemplary maximum thickness for material 20 is about 7 angstroms to about 15 angstroms.A nonmagnetic tunnel insulator material 22 comprising MgO is formed in direct contact with the amorphous magnetic electrode material 20. The tunnel insulator material 22 does not contain B. The nonmagnetic tunnel insulator material 22 may comprise, consist essentially of, or consist of MgO. An exemplary maximum thickness for tunnel insulator material 22 is about 5 angstroms to about 25 angstroms.The materials 12, 14, 16, 18 and 20 will be used together to finally form the conductive magnetic electrode 25 of the magnetic tunnel junction to be formed. The material 24 is shown formed on the outside of the tunnel insulator material 22, in one embodiment in direct contact with the material 22, and another conductive magnetic electrode 27 of the magnetic tunnel junction to be formed. Will eventually be used to form the One of the electrodes 25 and 27 is configured to include a magnetic recording material, while the other of the electrodes 25 and 27 may be configured to include a magnetic reference material. The electrodes 25 and 27 can individually include nonmagnetic insulator materials or regions, semiconductive materials or regions, and / or conductive materials or regions. However, individually considered, the electrodes 25 and 27 collectively may collectively be magnetic and even though they may have one or more regions therein that are locally nonmagnetic and / or nonconductive in nature. It is characterized as being conductive. An exemplary maximum thickness for electrode 27 is about 20 angstroms to about 150 angstroms. 
By way of example only, material 24 comprises 13 angstroms of Co40 Fe 40 B 20 in direct contact with tunnel insulator material 22, 40 angstroms of Co and 40 angstroms of Ta in direct contact with Co 40 Fe 40 B 20, and directly in contact with Ta. The electrode 27 functions as a magnetic reference electrode in such an example, including an alloy / layer of Pd / Pt. Such materials collectively constitute the magnetic reference material in such instances. The electrode 25 in such an example functions as a magnetic recording electrode, and for example, the material 20 finally functions as a magnetic recording material during crystallization.After formation of the tunnel insulator material 22, the amorphous Co and Fe containing magnetic electrode material 20 is annealed (eg, in an inert atmosphere) at a temperature of at least about 250 ° C. to contain MgO in the tunnel insulator material 22. From the surface (e.g., from surface 23), crystalline Co and Fe containing magnetic electrode material 20 is formed. The crystalline Co- and Fe-containing magnetic electrode material 20 does not contain B. An exemplary desired upper temperature limit for annealing is 450.degree. As used herein, when characterizing a material or region as "crystalline", at least 90% of the volume of the material or region mentioned is required to be crystalline. In one embodiment, the crystalline Co and Fe containing magnetic electrode material 20 has a maximum thickness of about 7 angstroms to about 15 angstroms.Materials 12, 14, 16, 18, 20, 22 and 24 are blanket formed on substrate 11 to form the desired completed circuit structure of the magnetic tunnel junction to be formed and then patterned together. Can be Alternatively, such patterning of one or more materials may be performed before, during or after formation of any of the materials on substrate 11 and / or during any annealing, annealing or It may occur after annealing. Regardless, in one embodiment, the conductive magnetic electrode 25 comprises a magnetic recording material (eg, crystalline Co and Fe containing material 20) and the conductive magnetic electrode 27 comprises a magnetic reference material. In addition or alternatively, the positions of the electrodes 25 and 27 in the height direction may be reversed and / or oriented other than stacking in the height direction (eg, lateral, oblique, and One or more combinations of height direction, horizontal direction, oblique direction, etc.) may be used. In this document, "elevational", "upper", "lower", "top" and "bottom" are based on the vertical direction. "Horizontal" refers to the direction generally along the major surface with respect to the surface on which the substrate is processed during fabrication, and the vertical direction is the direction generally orthogonal thereto. Furthermore, as used herein, "vertical" and "horizontal" are directions substantially perpendicular to one another in three-dimensional space, and do not relate to the orientation of the substrate.Next, another exemplary method of forming the magnetic electrode of the magnetic tunnel junction will be described for the substrate piece 10a with reference to FIG. Similar reference numbers to the above embodiments are used where appropriate, and some structural differences are indicated by the subscript "a". Amorphous metal 18a is formed on substrate 11 (regardless of the presence of conductive material 12 or other material). 
In one embodiment, as shown, the amorphous metal 18a is formed in direct contact with other physically and / or chemically different conductive materials 12 of the magnetic electrode 25a formed. In one embodiment, the amorphous metal 18a has a maximum thickness of about 10 angstroms to about 100 angstroms.An amorphous magnetic electrode material 20 comprising Co and Fe (not B) is formed on the amorphous metal 18a at a temperature of about -250 ° C to about 30 ° C. In one embodiment, the amorphous magnetic electrode material 20a is formed at a temperature of 0 ° C to about 30 ° C. In one embodiment, the amorphous magnetic electrode material 20 is formed at a temperature of about -250 ° C to less than about 0 ° C, and in one embodiment, at a temperature of less than about -20 ° C.A nonmagnetic tunnel insulator material 22 containing MgO (not containing B) is formed in direct contact with the amorphous magnetic electrode material 20. After forming the tunnel insulator material 22, the amorphous Co and Fe-containing magnetic electrode material 20 is annealed at a temperature of at least about 250 ° C. to form the MgO-containing surface of the tunnel insulator material 22 (eg, from the surface 23) 2.) Form crystalline Co and Fe containing magnetic electrode material 20 (not including B). Any other attribute (s) or aspect (s) as described above and / or as shown in FIG. 1 can be used in the embodiment of FIG.Next, a method of forming a magnetic tunnel junction according to some embodiments of the present invention will first be described for the substrate piece 10b with reference to FIG. Similar reference numbers to the embodiments described above are used where appropriate, and some structural differences are indicated by the subscript "b" or different reference numbers. An internal magnetic electrode material 25 b is formed on the substrate 11. The electrodes 25b may be of any of the materials 12, 14, 16, 18 / 18a and 20 (not shown) and / or additional or other material (s) as in the embodiments described above. One or more may be included and may be formed using any of the processes described above or other (s). A nonmagnetic tunnel insulator material 22 containing MgO (not containing B) is formed on the inner magnetic electrode material 25b.After forming the tunnel insulator material 22, the tunnel insulator material 22 is annealed at a temperature of at least about 250 ° C, and in one embodiment at a temperature of about 300 ° C to about 550 ° C. This can be performed to induce MgO crystallization of the tunnel insulator material 22 and / or to produce therein the desired uniform crystals, such as bcc 001 lattice orientation.Referring to FIG. 4, after annealing, in one embodiment, the outer crystalline magnetic electrode material 30 is at least about 150 ° C. (eg, from surface 29) from the MgO-containing surface of the annealed tunnel insulator material 22 (eg, from surface 29). In one embodiment, it is formed at a temperature of less than about 250). The outer crystalline magnetic electrode material 30 contains Co and Fe and does not contain B. Any of the Co and Fe containing materials described above (not including B) can be used.In another embodiment, after annealing the tunnel insulator material 22, the external amorphous magnetic electrode material 30 is in direct contact with the annealed tunnel insulator material 22 and is about -250 ° C to about 0 ° C. Formed at temperatures below. Such an external amorphous magnetic electrode material 30 contains Co and Fe and does not contain B. 
It is at least about 250 ° C. to form a Co and Fe containing outer crystalline magnetic electrode material 30 (not including B) from the MgO containing surface of the annealed tunnel insulator material 22 (eg, from surface 29) It is then annealed at the temperature of. In one embodiment, the Co and Fe of the outer amorphous magnetic electrode material 30 are formed in direct contact with the annealed tunnel insulator material 22 at a temperature of about −20 ° C. or less. In one embodiment, the anneal to form the outer crystalline magnetic electrode material 30 is performed at a temperature of at least about 300 ° C., and in one embodiment at a temperature of about 400 ° C. or less.Referring to FIG. 5, the additional material 24b is deposited on the outer crystalline magnetic electrode material 30 so as to include a portion of the conductive magnetic electrode 27b. In one embodiment, the outer crystalline magnetic electrode material 30 has a maximum thickness of about 5 angstroms to about 15 angstroms. Any other attribute (s) or aspect (s) described above and / or shown in FIGS. 1 and 2 can be used in the embodiments of FIGS. 3-5.Embodiments of the invention include the magnetic electrode of a magnetic tunnel junction manufactured according to any of the above descriptions. Embodiments of the invention also include a magnetic tunnel junction manufactured according to any of the above descriptions.Furthermore, embodiments of the present invention include magnetic tunnel junctions regardless of the method of manufacturing and the discussion so far concluding. Such an embodiment includes a first conductive magnetic electrode comprising a magnetic recording material and a second conductive magnetic electrode spaced from the first electrode and comprising a magnetic reference material. The exemplary electrodes 25, 25a, 25b, 27 and 27b described above can include such first or second electrodes. Alternatively or additionally, when the magnetic tunnel junction is fabricated as a stack of materials, either the height external electrode or the height internal electrode comprises a magnetic recording material or a magnetic reference material it can. Regardless, a nonmagnetic tunnel insulator material (eg, tunnel insulator material 22) comprising MgO is between the first electrode and the second electrode. The tunnel insulator does not contain B. In one embodiment, the nonmagnetic tunnel insulator material has a maximum thickness of about 20 angstroms or less.In one embodiment, at least one of the magnetic recording material and the magnetic reference material comprises a crystalline magnetic region without B, comprising Co and Fe, such a region being less than about 30 angstroms And, in one embodiment, has a maximum thickness of about 20 angstroms or less, and in one embodiment about 15 angstroms or less. Co and Fe in such crystalline magnetic regions are in direct contact with MgO of the tunnel insulator. As an example, the component 20 and / or 30 (in the absence of B) is a crystalline magnetic region of the magnetic recording material or magnetic reference material that is part of one of the electrodes 25 / 25a / 25b or 27 / 27b. Can be included. In one embodiment, both the magnetic recording material and the magnetic reference material are in direct contact with MgO of the tunnel insulator material without Co and Fe and without B, with a maximum thickness of about 30 Angstroms or less Having a crystalline magnetic region. 
Other optional attribute (s) or aspect (s) as described above and / or shown in the drawings may be used.In one embodiment, the nonmagnetic tunnel insulator material comprising MgO has a maximum thickness of about 20 angstroms or less. The magnetic recording material and the magnetic reference material of the first electrode and the second electrode may be formed of Co and Fe regardless of whether or not Co and Fe in such crystalline magnetic regions are in direct contact with MgO of the tunnel insulator material. Each includes a crystalline magnetic region containing Fe, not B, and having a maximum thickness of about 30 angstroms or less. In one embodiment, the B-free, Co- and Fe-containing crystalline magnetic regions often have a maximum thickness of about 20 angstroms or less, and in one embodiment a maximum thickness of about 15 angstroms or less Have In one embodiment, the B-free Co and Fe-containing crystalline magnetic region of the second electrode has a maximum thickness greater than the maximum thickness of the first electrode. Any other attribute (s) or aspect (s) described above and / or shown in the drawings can be used.In one embodiment, the magnetic recording material or magnetic reference material of at least one of the first electrode and the second electrode comprises a Co and Fe containing crystalline magnetic region not containing B (e.g. material 20) )including. In one such embodiment, such a region has a maximum thickness of about 20 angstroms or less. Such at least one of the first and second electrodes also includes non-magnetic MgO-containing regions (e.g., material 16) and amorphous metal regions (e.g., material 18). Co and Fe containing magnetic regions (eg, material 20) that do not contain B are between the tunnel insulator material (eg, material 22) and the MgO containing region (eg, material 16). The amorphous metal region (e.g., material 18) is between the MgO-containing region (e.g., material 16) and the B-free Co and Fe-containing magnetic region (material 20). In one such embodiment, the MgO-containing region has a maximum thickness of about 3 angstroms to about 10 angstroms. In one embodiment, the amorphous metal region has a maximum thickness of about 3 angstroms to about 5 angstroms. In one embodiment, Co and Fe in the crystalline magnetic region are in direct contact with the MgO of the tunnel insulator material, and in one embodiment are in direct contact with the amorphous metal region. In one embodiment, the amorphous metal region is in direct contact with the MgO of the MgO-containing region. In one embodiment, at least one of the first electrode and the second electrode includes another region (eg, material 14) comprising Co, Fe and B. In one embodiment, the other region has a maximum thickness of less than about 10 angstroms. In one embodiment, another region of Co, Fe and B is in direct contact with MgO in the MgO-containing region. Any other attribute (s) or aspect (s) described above and / or shown in the drawings can be used.In one embodiment, the magnetic recording material or the magnetic reference material of at least one of the first electrode and the second electrode comprises Co and Fe and does not contain B crystalline magnetic regions (e.g. contains B) Material 20 or material 30) when not included. At least one of the first electrode and the second electrode may be formed of a conductive material (e.g., material 12) and an amorphous metal region (e.g., material 18 / 18a) different from the conductive material. And). 
B-free Co and Fe-containing crystalline magnetic regions (eg, material 20) are between the tunnel insulator material (eg, material 22) and the conductive material (eg, material 12). The amorphous metal region (eg, material 18 / 18a) is between the conductive material (eg, material 12) and the B-free Co and Fe-containing crystalline magnetic region (eg, material 20). In one embodiment, the Co and Fe of the crystalline magnetic region are in direct contact with the amorphous metal region, and in one embodiment, with the MgO of the tunnel insulator material. In one embodiment, the amorphous metal region is in direct contact with the conductive material, and in one embodiment, has a maximum thickness of about 10 angstroms to about 100 angstroms. In one embodiment, the crystalline magnetic region has a maximum thickness of about 7 angstroms to about 15 angstroms. Any other attribute (s) or aspect (s) described above and / or shown in the drawings can be used.In one embodiment, the magnetic recording material and the magnetic reference material of the first and second electrodes are each crystalline magnetic directly in contact with the MgO of the tunnel insulator material (eg, material 20 and material 30) Including the area. The crystalline magnetic region of at least one of the first electrode and the second electrode contains Co and Fe and does not contain B. Such at least one of the first electrode and the second electrode including a B-free Co and Fe-containing crystalline magnetic region comprises a conductive material (eg, material 12) and a conductive material And an amorphous metal region (e.g., material 18 / 18a) different from The B-free Co and Fe-containing crystalline magnetic region (eg, material 20) is between the tunnel insulator material (eg, material 22) and the conductive material (eg, material 12) and is less than about 30 angstroms It has the largest thickness. The amorphous metal region (eg, material 18 / 18a) is between the conductive material (eg, material 12) and the B-free Co and Fe-containing crystalline magnetic region (eg, material 20) and is about It has a maximum thickness of 100 angstroms or less. Any other attribute (s) or aspect (s) described above and / or shown in the drawings can be used.Each of the embodiments of the magnetic tunnel junction structure described above which are not related to the method of manufacture is any of the structural features or attributes shown and / or described with reference to the method embodiments. One can, of course, be manufactured and manufactured using any aspect (s) or attribute (s) of such method embodiments.The exemplary embodiments of FIGS. 1-4 show a single magnetic tunnel junction (SMTJ). However, dual magnetic tunnel junctions (DMTJ) or more than dual magnetic tunnel junctions are contemplated.[Conclusion] In some embodiments, the method of forming the magnetic electrode of the magnetic tunnel junction includes forming a nonmagnetic MgO-containing material on the conductive material of the magnetic electrode to be formed. An amorphous metal is formed on the MgO-containing material. An amorphous magnetic electrode material comprising Co and Fe is formed on an amorphous metal. The amorphous magnetic electrode material does not contain B. A nonmagnetic tunnel insulator material comprising MgO is formed in direct contact with the amorphous magnetic electrode material. The tunnel insulator material does not contain B. 
After forming the tunnel insulator material, the amorphous Co- and Fe-containing magnetic electrode material is annealed at a temperature of at least about 250°C to form a crystalline Co- and Fe-containing magnetic electrode material from the MgO-containing surface of the tunnel insulator material. The crystalline Co- and Fe-containing magnetic electrode material does not contain B. In some embodiments, a method of forming a magnetic electrode of a magnetic tunnel junction includes forming an amorphous metal on a substrate. An amorphous magnetic electrode material comprising Co and Fe is formed on the amorphous metal at a temperature of about -250°C to about 30°C. The amorphous magnetic electrode material does not contain B. A nonmagnetic tunnel insulator material comprising MgO is formed in direct contact with the amorphous magnetic electrode material. The tunnel insulator material does not contain B. After forming the tunnel insulator material, the amorphous Co- and Fe-containing magnetic electrode material is annealed at a temperature of at least about 250°C to form a crystalline Co- and Fe-containing magnetic electrode material from the MgO-containing surface of the tunnel insulator material. The crystalline Co- and Fe-containing magnetic electrode material does not contain B. In some embodiments, a method of forming a magnetic tunnel junction includes forming an inner magnetic electrode material on a substrate. A nonmagnetic tunnel insulator material comprising MgO is formed on the inner magnetic electrode material. The tunnel insulator material does not contain B. After forming the tunnel insulator material, the tunnel insulator material is annealed at a temperature of at least about 250°C. After the annealing, an outer crystalline magnetic electrode material is formed at a temperature of at least about 150°C from the MgO-containing surface of the annealed tunnel insulator material. The outer crystalline magnetic electrode material contains Co and Fe and does not contain B. In some embodiments, a method of forming a magnetic tunnel junction includes forming an inner magnetic electrode material on a substrate. A nonmagnetic tunnel insulator material comprising MgO is formed on the inner magnetic electrode material. The tunnel insulator material does not contain B. After forming the tunnel insulator material, the tunnel insulator material is annealed at a temperature of at least about 250°C. After annealing of the tunnel insulator material, an outer amorphous magnetic electrode material is formed, at a temperature of about -250°C to less than 0°C, in direct contact with the annealed tunnel insulator material. The outer amorphous magnetic electrode material is in direct contact with the annealed tunnel insulator material, contains Co and Fe, and does not contain B. The Co- and Fe-containing outer amorphous magnetic electrode material is annealed at a temperature of at least about 250°C to form a Co- and Fe-containing outer crystalline magnetic electrode material from the MgO-containing surface of the annealed tunnel insulator material. The Co- and Fe-containing outer crystalline magnetic electrode material does not contain B. In some embodiments, the magnetic tunnel junction comprises a first conductive magnetic electrode comprising a magnetic recording material. The second conductive magnetic electrode is spaced from the first electrode and includes a magnetic reference material.
A nonmagnetic tunnel insulator material comprising MgO is between the first and second electrodes. The tunnel insulator material does not contain B and has a maximum thickness of about 20 angstroms or less. At least one of the magnetic recording material and the magnetic reference material comprises a B-free crystalline magnetic region comprising Co and Fe. The Co- and Fe-containing crystalline magnetic region that does not contain B has a maximum thickness of about 30 angstroms or less. The Co and Fe of the crystalline magnetic region are in direct contact with the MgO of the tunnel insulator material. In some embodiments, the magnetic tunnel junction comprises a first conductive magnetic electrode comprising a magnetic recording material. The second conductive magnetic electrode is spaced from the first electrode and includes a magnetic reference material. A nonmagnetic tunnel insulator material comprising MgO is between the first and second electrodes. The tunnel insulator does not contain B and has a maximum thickness of about 20 angstroms or less. The magnetic recording material and the magnetic reference material of the first electrode and the second electrode each contain a crystalline magnetic region that contains Co and Fe, does not contain B, and has a maximum thickness of about 30 angstroms or less. In some embodiments, the magnetic tunnel junction comprises a first conductive magnetic electrode comprising a magnetic recording material. The second conductive magnetic electrode is spaced from the first electrode and includes a magnetic reference material. A nonmagnetic tunnel insulator material comprising MgO is between the first and second electrodes. The tunnel insulator material does not contain B. The magnetic recording material or the magnetic reference material of at least one of the first electrode and the second electrode includes a B-free crystalline magnetic region containing Co and Fe. At least one of the first electrode and the second electrode includes a nonmagnetic MgO-containing region and an amorphous metal region. The B-free, Co- and Fe-containing crystalline magnetic region is between the tunnel insulator material and the MgO-containing region. The amorphous metal region is between the MgO-containing region and the B-free, Co- and Fe-containing crystalline magnetic region. In some embodiments, the magnetic tunnel junction comprises a first conductive magnetic electrode comprising a magnetic recording material. The second conductive magnetic electrode is spaced from the first electrode and includes a magnetic reference material. A nonmagnetic tunnel insulator material comprising MgO is between the first and second electrodes. The tunnel insulator material does not contain B. The magnetic recording material or the magnetic reference material of at least one of the first electrode and the second electrode includes a B-free crystalline magnetic region containing Co and Fe. At least one of the first electrode and the second electrode includes a conductive material and an amorphous metal region different from the conductive material. The B-free, Co- and Fe-containing crystalline magnetic region is between the tunnel insulator material and the conductive material. The amorphous metal region is between the conductive material and the B-free, Co- and Fe-containing region. In some embodiments, the magnetic tunnel junction comprises a first conductive magnetic electrode comprising a magnetic recording material.
The second conductive magnetic electrode is spaced from the first electrode and includes a magnetic reference material. A nonmagnetic tunnel insulator material comprising MgO is between the first and second electrodes. The tunnel insulator material does not contain B. The magnetic recording material and magnetic reference material of the first electrode and the second electrode comprise respective crystalline magnetic regions in direct contact with the MgO of the tunnel insulator material. The crystalline magnetic region of at least one of the first electrode and the second electrode contains Co and Fe and does not contain B. The at least one of the first electrode and the second electrode including the B-free, Co- and Fe-containing crystalline magnetic region includes a conductive material and an amorphous metal region different from the conductive material. The B-free, Co- and Fe-containing crystalline magnetic region is between the tunnel insulator material and the conductive material and has a maximum thickness of about 30 angstroms or less. The amorphous metal region is between the conductive material and the B-free, Co- and Fe-containing crystalline magnetic region and has a maximum thickness of about 100 angstroms or less. In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents. Reference Signs List: 10, 10a, 10b substrate fragment; 11 substrate; 12 conductive material; 14 material; 16 nonmagnetic MgO-containing material; 18, 18a amorphous metal; 20 magnetic electrode material; 22 tunnel insulator material; 24, 24b material; 25, 25a, 25b conductive magnetic electrode; 27, 27b conductive magnetic electrode |
Systems and methods described in this disclosure relate, generally, to analyzing electronic circuitry, and more specifically, to analyzing efficiency of clock gating in electronic circuitry. Analysis may include identifying wasted propagation of clock signals by clock gates and/or for the circuitry as a whole. In some embodiments, modified gating logic may be determined that improves clock gating efficiency, for example, by eliminating at least some wasted propagation of clock signals. |
CLAIMS
What is claimed is:
1. A method of analyzing an electronic system, comprising: identifying a propagating period of a clock gate; identifying an idle period of a gated-device operatively coupled to the clock gate; identifying a wasted propagation period responsive to an overlap of the propagating period and the idle period; configuring a modified clock gating logic responsive to the wasted propagation period; comparing a first activity of the gated-device to a second activity of the gated-device; and confirming the first activity and the second activity are consistent.
2. The method of claim 1, wherein comparing the first activity of the gated-device to the second activity of the gated-device comprises: determining the first activity of the gated-device responsive to activity associated with the gated-device prior to configuring the modified clock gating logic; and determining the second activity of the gated-device responsive to activity associated with the gated-device subsequent to configuring the modified clock gating logic.
3. The method of claim 1, wherein identifying an idle period of a gated-device comprises: identifying activity on a data path that corresponds to a clock path that includes the clock gate and the gated-device; and identifying the idle period responsive to the identified activity.
4. The method of claim 3, wherein identifying the idle period comprises observing one or more state changes at an output of the gated-device.
5. The method of claim 3, wherein identifying the idle period further comprises: identifying clock cycles corresponding to the observed state changes; and identifying a series of clock cycles between two consecutive state changes of the observed state changes.
6. The method of claim 1, wherein identifying the propagation period of the clock gate comprises identifying a series of clock cycles during which the clock gate is propagating a received clock.
7. The method of claim 1, wherein the identifying the wasted propagation period responsive to the overlap of the propagating period and the idle period comprises: identifying a first series of clock cycles corresponding to the propagating period of the clock gate; identifying a second series of clock cycles corresponding to the idle period of the gated-device; and identifying at least one clock cycle that is the same for the first series of clock cycles and the second series of clock cycles.
8. The method of claim 1, wherein configuring the modified clock gating logic responsive to the wasted propagation period comprises configuring the modified clock gating logic to not propagate a received clock for at least some of the wasted propagation period.
9. A system for analyzing electronic circuitry, comprising: a non-transitory storage medium configured to store electronic files of waveforms corresponding to operation of clock gates and gated-devices of an electronic circuitry; and a processor for processing the electronic files of waveforms stored at the non-transitory storage medium, wherein the processor is configured to: identify a propagating period of a clock gate; identify an idle period of a gated-device operatively coupled to the clock gate; identify a wasted propagation period responsive to an overlap of the propagating period and the idle period; configure a modified clock gating logic responsive to the wasted propagation period; compare a first activity of the gated-device to a second activity of the gated-device; and confirm the first activity and the second activity are consistent.
10. The system of claim 9, wherein the processor is configured to compare the first activity of the gated-device to the second activity of the gated-device by: determining the first activity of the gated-device responsive to activity associated with the gated-device prior to configuring the modified clock gating logic; and determining the second activity of the gated-device responsive to activity associated with the gated-device subsequent to configuring the modified clock gating logic.
11. The system of claim 10, wherein the processor is configured to identify the idle period of a gated-device by: identifying activity on a data path that corresponds to a clock path that includes the clock gate and the gated-device; and identifying the idle period responsive to the identified activity.
12. The system of claim 11, wherein the processor is configured to identify the idle period by observing one or more state changes at an output of the gated-device.
13. The system of claim 11, wherein the processor is configured to identify the idle period further by: identifying clock cycles corresponding to the observed state changes; and identifying a series of clock cycles between two consecutive state changes of the observed state changes.
14. The system of claim 9, wherein the processor is configured to identify the propagation period of the clock gate by identifying a series of clock cycles during which the clock gate is open.
15. The system of claim 9, wherein the processor is configured to identify the wasted propagation period responsive to the overlap of the propagating period and the idle period by: identifying a first series of clock cycles corresponding to the propagating period of the clock gate; identifying a second series of clock cycles corresponding to the idle period of the gated-device; and identifying at least one clock cycle that is the same for the first series of clock cycles and the second series of clock cycles.
16. The system of claim 9, wherein the processor is configured to configure the modified clock gating logic responsive to the wasted propagation period by configuring the modified clock gating logic to close the clock gate for at least some of the wasted propagation period.
17. A computer program product, comprising: a computer-readable medium; and instructions stored on the computer-readable medium, the instructions configured to enable a processor to perform operations of: identifying a propagating period of a clock gate; identifying an idle period of a gated-device operatively coupled to the clock gate; identifying a wasted propagation period responsive to an overlap of the propagating period and the idle period; configuring a modified clock gating logic responsive to the wasted propagation period; comparing a first activity of the gated-device to a second activity of the gated-device; and confirming the first activity and the second activity are consistent.
18. The computer program product of claim 17, wherein the instructions are configured to enable the processor to compare the first activity of the gated-device to the second activity of the gated-device by: determining the first activity of the gated-device responsive to activity associated with the gated-device prior to configuring the modified clock gating logic; and determining the second activity of the gated-device responsive to activity associated with the gated-device subsequent to configuring the modified clock gating logic.
19. The computer program product of claim 18, wherein the instructions are configured to enable the processor to identify the idle period of a gated-device by: identifying activity on a data path that corresponds to a clock path that includes the clock gate and the gated-device; and identifying the idle period responsive to the identified activity.
20. The computer program product of claim 19, wherein the instructions are configured to enable the processor to identify the idle period by observing one or more state changes at an output of the gated-device.
21. The computer program product of claim 19, wherein the instructions are configured to enable the processor to identify the idle period further by: identifying clock cycles corresponding to the observed state changes; and identifying a series of clock cycles between two consecutive state changes of the observed state changes.
22. The computer program product of claim 17, wherein the instructions are configured to enable the processor to identify the propagation period of the clock gate by identifying a series of clock cycles during which the clock gate is open.
23. The computer program product of claim 17, wherein the instructions are configured to enable the processor to identify the wasted propagation period responsive to the overlap of the propagating period and the idle period by: identifying a first series of clock cycles corresponding to the propagating period of the clock gate; identifying a second series of clock cycles corresponding to the idle period of the gated-device; and identifying at least one clock cycle that is the same for the first series of clock cycles and the second series of clock cycles.
24. The computer program product of claim 17, wherein the instructions are configured to enable the processor to configure the modified clock gating logic responsive to the wasted propagation period by configuring the modified clock gating logic to close the clock gate for at least some of the wasted propagation period.
25. A method of analyzing an electronic circuitry design, comprising: identifying logic cells comprising a gate-level logic model of an electronic circuitry design; generating first simulation commands for simulating the electronic circuitry design responsive to the identified logic cells; performing a first simulation responsive to generating the first simulation commands; identifying clock gate behavior responsive to performing the first simulation; generating second simulation commands for simulating the electronic circuitry design responsive to identifying the clock gate behavior; performing a second simulation responsive to generating the second simulation commands; collecting dynamic efficiency information for the electronic circuitry design responsive to performing the second simulation; and scoring one or more of the electronic circuitry design, clock-gates of the electronic circuitry design, and gated-devices of the electronic circuitry design responsive to collecting the dynamic efficiency information.
26. The method of claim 25, wherein collecting dynamic efficiency information comprises collecting one or more of: a number of clock cycles where a gated-device is active; a number of cycles where a gated-device is static and a clock-gate corresponding to such gated-device is propagating a received clock; and a number of gated-devices in a fan out of a clock gate.
27. The method of claim 25, wherein the generating the first simulation commands for simulating the electronic circuitry design responsive to the identified logic cells comprises: identifying clock gates described in the logic cells; and generating the first simulation commands responsive to the identified clock gates.
28. The method of claim 27, wherein the generating the first simulation commands responsive to the identified clock gates comprises generating simulation commands for simulating signaling changes at the identified clock gates.
29. The method of claim 28, wherein generating simulation commands for simulating signaling changes at the identified clock gates comprises: generating the simulation commands for simulating signaling changes at the identified clock gates but not for simulating signaling changes at other devices of the electronic circuitry design.
30. The method of claim 25, wherein identifying clock gate behavior responsive to performing the first simulation comprises identifying sampling frequencies for performing a simulation of clock gates and gated-devices of the electronic circuitry design.
31. The method of claim 30, wherein identifying sampling frequencies for performing the simulation of the clock gate and the gated-devices of the electronic circuitry design comprises: identifying a lowest clock frequency and a highest clock frequency during an analysis period; and determining a second clock frequency responsive to the identifying the lowest clock frequency and the highest clock frequency during the analysis period.
32. The method of claim 31, wherein determining the second clock frequency responsive to the identifying the lowest clock frequency and the highest clock frequency during the analysis period comprises: identifying a multiple of the lowest clock frequency that is the same as or higher than the highest clock frequency; and defining the second clock frequency responsive to identifying the multiple of the lowest clock frequency.
33. The method of claim 30, wherein identifying sampling frequencies for performing the simulation of the clock gate and the gated-devices of the electronic circuitry design comprises: identifying all clock frequencies during an analysis period; and determining a least-common-multiple clock frequency responsive to identifying all clock frequencies during the analysis period.
34. The method of claim 25, wherein generating second simulation commands for simulating the electronic circuitry design responsive to identifying the clock gate behavior comprises: generating simulation commands for simulating clock gates and gated-devices of the electronic circuitry using a sampling frequency. |
METHOD OF CLOCK GATE ANALYSIS OF ELECTRONIC SYSTEM DESIGNS AND RELATED SYSTEMS, METHODS AND DEVICES
PRIORITY CLAIM
This application claims the benefit of the filing date of United States Provisional Patent Application Serial Number 62/723,589, filed August 28, 2018, for "Method of Clock Gate Analysis of Electronic System Designs and Related Systems, Methods and Devices," and claims the benefit of the filing date of U.S. Patent Application Serial No. 16/228,445, filed December 20, 2018, for "Method of Clock Gate Analysis of Electronic System Designs and Related Systems, Methods and Devices," pending, which also claims priority to U.S. Provisional Patent Application Serial No. 62/723,589, the contents and disclosure of each of which are hereby incorporated herein in their entirety by this reference.
TECHNICAL FIELD
Embodiments of this disclosure relate, generally, to analysis of clock gates in electronic circuitry designs, and more specifically, in some embodiments, to analysis of clock gates inserted into electronic circuitry designs by electronic circuitry design tools.
BACKGROUND
Electronic circuitry design tools, such as tools for electronic computer-aided design and electronic design automation, are commonly used for design of electronic systems, for example, design and/or evaluation of electronic circuits, integrated circuits, application-specific integrated circuits, and printed circuit boards. The designs they generate are used for many purposes, including manufacturing of semiconductor devices as well as programming design functionality into configurable programmable logic blocks such as those used in field-programmable gate arrays (FPGAs). Prior to manufacture or release of an electronic system, electronic circuitry designs are typically evaluated and verified. Evaluation and verification typically involve performing a simulation of an electronic circuitry design to analyze a function (or functions) of a system - i.e., given a set of inputs, does a system generate the expected output? In addition, a simulation may be used to measure an efficiency of a system according to predefined metrics, including metrics related to power consumption. By way of example, clocking registers of an integrated circuit when there is no change in data stored at those registers is an inefficient use of power by an electronic circuitry design. Clock gating is a technique used in synchronous circuits to reduce power dissipation. It saves power by adding logic to circuitry (i.e., a "clock gate") to disable portions of the circuitry so that clocks are disabled to flip-flops or other downstream logic in the circuitry that do not, or are not intended to, switch states. Electronic circuitry design tools will sometimes insert thousands of clock gates into an electronic circuitry design. However, the inventors of this disclosure now understand that if a clock gate is incorrectly configured, or if a use-case is marginal, then it may save less power than a correctly configured clock gate, or, in some cases, may cost more power than it saves. The inventors of this disclosure have recognized a need for methods of analysis of clock gates in electronic circuitry designs, and more specifically, analysis of clock gates inserted into electronic circuitry designs by electronic circuitry design tools.
BRIEF DESCRIPTION OF THE DRAWINGS
Purposes and advantages of the embodiments of the disclosure will be apparent to one of ordinary skill in the art from the detailed description in conjunction with the appended drawings, including:
FIG. 1 shows a simplified circuit diagram of an example electronic circuitry that has not been improved in accordance with one or more embodiments of the disclosure.
FIG. 2A shows a timing diagram that corresponds to a contemplated operation of the electronic circuitry of FIG. 1.
FIG. 2B shows a timing diagram that corresponds to an improved electronic circuitry, in accordance with one or more embodiments of the disclosure.
FIG. 3 shows a simplified circuit diagram of an example improved electronic circuitry that corresponds to the timing diagram of FIG. 2B.
FIG. 4 shows a flowchart of a clock-gating analysis process, in accordance with one or more embodiments of the disclosure.
FIG. 5 shows a functional block diagram of an example clock-gating analyzer, in accordance with one or more embodiments of the disclosure.
FIG. 6 shows a flowchart of an example combinational logic interpretation process, in accordance with one or more embodiments of the disclosure.
FIG. 7 shows a flowchart of an example clock gating interpretation process, in accordance with one or more embodiments of the disclosure.
FIG. 8 shows a flowchart of an example efficiency interpretation process, in accordance with one or more embodiments of the disclosure.
MODE(S) FOR CARRYING OUT THE INVENTION
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, specific example embodiments in which the present disclosure may be practiced. These embodiments are described in sufficient detail to enable a person of ordinary skill in the art to practice the present disclosure. However, other embodiments may be utilized, and structural, material, and process changes may be made without departing from the scope of the disclosure. The illustrations presented herein are not meant to be actual views of any particular method, system, device, or structure, but are merely idealized representations that are employed to describe the embodiments of the present disclosure. The drawings presented herein are not necessarily drawn to scale. Similar structures or components in the various drawings may retain the same or similar numbering for the convenience of the reader; however, the similarity in numbering does not mean that the structures or components are necessarily identical in size, composition, configuration, or any other property. It will be readily understood that the components of the embodiments as generally described herein and illustrated in the drawings may be arranged and designed in a wide variety of different configurations. Thus, the following description of various embodiments is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments may be presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. The following description may include examples to help enable one of ordinary skill in the art to practice the disclosed embodiments. The use of the terms "exemplary," "by example," "for example," "e.g.," and the like means that the related description is explanatory, and though the scope of the disclosure is intended to encompass the examples and legal equivalents, the use of such terms is not intended to limit the scope of an embodiment or this disclosure to the specified components, steps, features, functions, or the like.
Thus, specific implementations shown and described are only examples and should not be construed as the only way to implement the present disclosure unless specified otherwise herein. Elements, circuits, and functions may be shown in block diagram form in order not to obscure the present disclosure in unnecessary detail. Additionally, block definitions and partitioning of logic between various blocks are exemplary of a specific implementation. It will be readily apparent to one of ordinary skill in the art that the present disclosure may be practiced by numerous other partitioning solutions. For the most part, details concerning timing considerations and the like have been omitted where such details are not necessary to obtain a complete understanding of the present disclosure and are within the abilities of persons of ordinary skill in the relevant art. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, and symbols that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal for clarity of presentation and description. It should be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, wherein the bus may have a variety of bit widths and the disclosure may be implemented on any number of data signals, including a single data signal. It should be understood that any reference to an element herein using a designation such as "first," "second," and so forth does not limit the quantity or order of those elements, unless such limitation is explicitly stated. Rather, these designations are used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements can be employed or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements may comprise one or more elements. Likewise, elements referred to in the singular form may sometimes also include one or more instances of the element. The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a special purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor (may also be referred to herein as a host processor or simply a host) may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
A processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A general-purpose computer including a processor is considered a special-purpose computer while the general-purpose computer is configured to execute computing instructions (e.g., software code) related to embodiments of the present disclosure. Also, it is noted that the embodiments may be described in terms of a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe operational acts as a sequential process, many of these acts may be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be re-arranged. A process may correspond to a method, a thread, a function, a procedure, a subroutine, a subprogram, etc. Furthermore, the methods disclosed herein may be implemented in hardware, software, or both. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on computer-readable media. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. Any characterization in this disclosure of something as "typical," "conventional," or "known" does not necessarily mean that it is disclosed in the prior art or that the discussed aspects are appreciated in the prior art. Nor does it necessarily mean that, in the relevant field, it is widely known, well-understood, or routinely used. As used herein, a "gated-device" is circuitry within an electronic system that may be enabled/disabled and that includes synchronous circuitry. Examples of synchronous circuitry include synchronous logic elements such as flip-flops, registers, and latches. In the case of a register, a gated-device may be part of a register, for example, the least-significant bits of a register (e.g., a subset of the flip-flops that form the register). As a matter of convention, a gated-device may be described herein as "driven" when it receives a clock. Moreover, a clock gate may drive a gated-device or groups of gated-devices in its fan out when propagating a clock. Notably, a first clock gate may "drive" gated-devices in its fan out, but that does not mean that some or even all such gated-devices receive a clock signal - for example, one or more clock gates may be in a clock path between such first clock gate and various gated-devices. In some cases, a clock gate may be described herein in terms of a state, e.g., "open" or "closed," operationally, e.g., "propagating" and "not propagating," and combinations thereof. As used herein, a "clock" is a signal that oscillates between high and low states. Often an amplitude of a "high" and a "low," as well as a frequency of oscillation, are predictable, but that is not necessarily always the case. By way of example, a clock may be used to coordinate actions of an electronic system and the circuitry of an electronic system. In this disclosure, for consistency of description, edge-triggered circuits should be assumed to be "low-to-high" or "rising-edge" triggered, and level-triggered circuits should be assumed to be "open" when a clock level is high.
However, one of ordinary skill in the art will understand that any number of conventions may be used to trigger a circuit based on a clock. A clock cycle is a time period measured from a first triggering event to a next triggering event. In the case of the rising-edge triggered circuitry of this disclosure, the next triggering event may be the immediately successive rising edge, or some multiple thereof, e.g., every 2nd, every 3rd, without limitation. A period of time may be expressed in terms of a number of clock cycles. For example, a relevant period may be expressed as two clock cycles long, three clock cycles long, etc. A number of successive clock cycles that form a period may be described as a series of clock cycles. Typical electronic circuitry design tools insert clock gates into electronic circuitry designs without considering usage, notwithstanding that clock gates consume power and use physical space. Electronic circuitry design tools may configure a clock gate to disable a gated-device for many potential reasons. Electronic circuitry design tools may configure a clock gate to control (as used herein, controlling a gated-device refers to both enabling and disabling a gated-device) a gated-device to account for a clock stabilizing, but may not consider other conditions where clock-gating would improve efficiency. For example, it may not be efficient for a clock gate to propagate a clock if a gated-device is not changing state (e.g., changing stored information in the case of registers). One or more embodiments of the disclosure relate, generally, to a method of analyzing clock-gating in an electronic circuitry design. During a simulation of an electronic circuitry design, for a given clock gate, state changes in one or more gated-devices in a fan-out of the clock gate are observed and compared to operation of the clock gate - that is, whether the clock gate is propagating the clock while gated-devices should change state and propagating a clock while gated-devices should not change state. If a propagating period overlaps with idle period(s) at gated-devices (i.e., corresponds to one or more of the same clock cycles), and a change to a propagating period would improve efficiency of an electronic circuitry design, then a clock gate may be reconfigured based on, at least in part, a desired change to a propagating period, as sketched below. Characterized another way, if changes to when a clock is enabled to reach a gated-device and disabled to reach a gated-device would improve efficiency of an electronic circuitry design, then clock gate control logic may be reconfigured based on, at least in part, such timing information. One or more embodiments of the disclosure relate, generally, to a clock gate analyzer (the "CGAnalyzer") configured for clock gating analysis of an electronic circuitry design. The CGAnalyzer may build a gating model associated with an electronic circuitry design, create clock gating analysis parameters that are usable for simulation of the electronic circuitry design, and perform clock gating analysis during simulation of the electronic circuitry design. The CGAnalyzer may simulate and analyze each clock gate in an electronic circuitry design, and/or analyze simulation results of a simulator. The CGAnalyzer may output the results in human- and/or computer-readable format that identifies clock gates based on efficiency thresholds. In one embodiment, the CGAnalyzer may output changes to a configuration of analyzed clock gates that would result in higher efficiency.
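By way of illustration only, the following minimal Python sketch shows one way the overlap described above could be computed once waveforms have been reduced to per-cycle booleans; the names wasted_propagation_cycles, propagating, and device_active are hypothetical conventions for this sketch and are not part of the disclosed CGAnalyzer.

```python
# A minimal sketch (not the disclosed implementation) of identifying wasted
# propagation: clock cycles in which a clock gate propagates the clock while
# no gated-device in its fan out changes state.

def wasted_propagation_cycles(propagating, device_active):
    """propagating[k] is True if the clock gate propagated the clock in
    cycle k; device_active[k] is True if any gated-device in the fan out
    changed state in cycle k. Returns the wasted cycle indices."""
    return [k for k, (gate_open, active)
            in enumerate(zip(propagating, device_active))
            if gate_open and not active]

# Example: the gate propagates for cycles 0-5, but gated-devices only
# change state in cycles 0, 1, and 5, so cycles 2-4 are wasted.
propagating = [True, True, True, True, True, True, False, False]
device_active = [True, True, False, False, False, True, False, False]
assert wasted_propagation_cycles(propagating, device_active) == [2, 3, 4]
```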
The CGAnalyzer may compare an output of a simulation of an electronic circuitry design having reconfigured clock gates to an output of a simulation of an original electronic circuitry design to verify that the electronic circuitry design does not behave differently with reconfigured clock gates/clock gating logic. An example clock-gating analysis will now be described with reference to FIGS. 1, 2A, 2B, and 3, in accordance with one or more embodiments of the disclosure. FIG. 1 shows a simplified circuit diagram of an example electronic circuitry 100 that has not been improved in accordance with one or more embodiments of the disclosure, and which may be a complete electronic circuitry or part of a larger, more complex, electronic system. Circuitry 100 includes clock gate 102, N-bit register 104, and gated-device 108 (which is also an N-bit register and may sometimes be referred to herein as "N-bit register 108"). A cloud of combinational logic 110 is operatively coupled to an input of N-bit register 104, and another cloud of combinational logic 106 is operatively coupled between N-bit register 104 and N-bit register 108. N-bit register 108 may be considered a gated-device, and the flip-flops that form N-bit register 108 may each individually be considered a gated-device. Also operatively coupled between N-bit register 104 and N-bit register 108 are gating logic 114 and clock gate 102. Clock gate 102 is also operatively coupled to a main clock that supplies clock 112 for the circuitry 100. Clock gate 102 is configured to receive, at one or more inputs, enable 116 supplied by gating logic 114 and clock 112 supplied by the main clock. Clock gate 102 is configured to supply an enabled clock 118 to N-bit register 108. In a contemplated operation, clock gate 102 is configured to switch between propagating and gating modes responsive to enable 116 and/or clock 112, where clock gate 102 propagates clock 112 during a propagating mode and gates clock 112 during a gating mode, i.e., clock gate 102 does not propagate clock 112 during gating mode. Notably, clock gate 102 is a simplified block diagram of a clock gate in accordance with embodiments of this disclosure. Common elements such as flip-flops, AND gates, and other combinational logic are not necessarily called out in discussion and figures for clock gates in this disclosure, but, for avoidance of doubt, as used herein, "clock gate" is intended to include all arrangements for clock gates, and legal equivalents thereof, even if certain elements are not mentioned. Clock gate 102 is configured to supply enabled clock 118 to one or more gated-devices in its fan out, including N-bit register 108. N-bit register 108 is operatively coupled to clock gate 102 such that individual register elements (e.g., flip-flops) may be clocked by enabled clock 118. First bus 120 is operatively coupled between combinational logic 106 and N-bit register 108, and supplies data to N-bit register 108. Additional buses may be operatively coupled between an output of N-bit register 108 and elements downstream from circuitry 100, and be configured to transmit information. A second bus (e.g., second bus 122) is shown operatively coupled between N-bit register 108 and whatever circuitry is downstream. FIG. 2A shows an example timing diagram 200 that corresponds to a contemplated operation of circuitry 100, in accordance with one or more embodiments of the disclosure. Shown are signals for clock 112, clock gate enable 116, and enabled clock 118.
Also shown are register input 124 and register output 126. Clock cycles are shown along an axis of the timing diagram 200 (i.e., Cycle0 - CycleN). These labels align with rising edges of clock 112 and denote a start of an indicated clock cycle and an end of a previous clock cycle. Generally, activity along a data path (e.g., logic 110, N-bit register 104, logic 106, N-bit register 108, etc.) is observed, and used to configure the clock path (e.g., enabled clock 118). In one or more embodiments, register output 126, which corresponds to a signal on bus 122, may be observed to determine when N-bit register 108 is active and inactive. Activity may be detected responsive to detected state changes at register output 126 over a number of clock cycles. By way of example, bus 122 may be operatively coupled to an output of N-bit register 108, and state changes based on, at least in part, register output 126 may be observed at the bus 122. Register input 124 corresponds to a signal on bus 120, and is shown for informational purposes, but is not necessarily used to detect activity at N-bit register 108. Simply by way of explanation, register input 124 would not be used for clock gating analysis because, when a clock gate is open, an input and an output of a register are (or will become) identical in relevant cases. When a clock gate is closed, a register input could be different from a register output. Enabled clock 118 is active during a propagating period 202 that is defined by enable 116 being asserted (e.g., high). During the propagating period 202, enabled clock 118 and clock 112 are substantially the same, although there may sometimes be a small propagation delay. A gating period 204 follows the propagating period 202, and is defined by enable 116 being negated (e.g., low). One clock cycle following the beginning of gating period 204, enabled clock 118 becomes inactive, and enabled clock 118 is inactive for the same number of clock cycles (one) as a length of the gating period 204. Turning to register output 126, notably, after activity 212 (in this example, a change from signal "a" to signal "b"), register output 126 is in an idle period 214 where no activity is associated with N-bit register 108 (e.g., no information is changing on the outputs of these gated-devices; register 108 holds input "b" but is not clocked for input "c"), and so N-bit register 108 may also be characterized as being in idle period 214 (which may also be characterized as a "static" period) until a second activity 218. To detect activity at register output 126, a state at each clock cycle may be observed and then compared to the state of the previous clock cycle. For example, state 212 at Cycle3 may be compared to state 210 at Cycle2, and a changed state may be detected responsive to detecting a difference between state 212 and state 210. So, a portion of propagating period 202 that overlaps with idle period 214 is potentially wasted propagation time and an opportunity to improve operation of clock gate 102. Clock gating logic may be configured based on the wasted propagation time. This and other wasted propagation time may be recorded; for example, clock cycles associated with start and end times of wasted propagation times may be observed and stored.
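As a non-limiting illustration of the state-change observation just described, the Python sketch below derives idle periods from per-cycle samples of a register output (e.g., register output 126 sampled once per clock cycle); the name idle_periods and the sample-list input format are assumptions of this sketch, not the disclosed implementation.

```python
def idle_periods(samples):
    """samples[k] is the value observed at a register output at cycle k.
    An idle period is a maximal run of cycles whose value equals the
    previous cycle's value (i.e., a series of clock cycles between two
    consecutive state changes). Returns inclusive (start, end) tuples."""
    periods, start = [], None
    for k in range(1, len(samples)):
        if samples[k] == samples[k - 1]:   # no state change at cycle k
            start = k if start is None else start
        elif start is not None:            # a state change ends the idle run
            periods.append((start, k - 1))
            start = None
    if start is not None:                  # idle run extends to the last cycle
        periods.append((start, len(samples) - 1))
    return periods

# "a" changes to "b" at cycle 3 and "b" to "c" at cycle 7, so cycles 1-2
# and 4-6 are idle.
assert idle_periods(['a', 'a', 'a', 'b', 'b', 'b', 'b', 'c']) == [(1, 2), (4, 6)]
```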
FIG. 2B shows a timing diagram 230 that corresponds to an improved circuitry 300 (see FIG. 3), which may be created (or a design for which may be created) based on the clock gating analysis described in this disclosure. FIG. 2B shows an example of a contemplated operation of improved circuitry 300, and how that contemplated operation is an improvement over pre-improved circuitry 100. Shown are signals for clock 312, clock enable 316, and enabled clock 318. Also shown are buses 320 and 322, which, in the example shown in FIG. 2B, are observed to determine register input 324 and register output 326, where register input 324 corresponds to a signal on bus 320 and register output 326 corresponds to a signal on bus 322. Referring to clock enable 316, there is a short propagating period 232 followed by a gating period 234 and then another short propagating period 236. Due to modified gating logic 314 (which will be described further in relation to FIG. 3) that supplies enable 316 to clock gate 302, gating period 234 corresponds more closely to idle period 250 of register output 326, as compared to gating period 204 and idle period 214. Notably, clock cycle 240 of enabled clock 318 is propagated, but no other clock cycles of clock 312 are propagated during idle period 250, as compared to the timing diagram 200 shown in FIG. 2A, where two clock cycles are propagated during idle period 214. Also notably, in the example shown in FIGS. 2A and 2B, the signal change "b" to "c" at register input 324 is purposefully suppressed for gating period 234 by clock gating logic 314, which comprises logic 314-1 and logic 314-2. A consistency check may be performed that shows that activity of register input 124 and register output 126 for the gating logic 114 of circuitry 100 is the same as activity of register input 324 and register output 326 for modified gating logic 314 of improved circuitry 300. For example, state changes at register output 126 for corresponding clock cycles may be observed and determined to be consistent with state changes at register output 326, and, thus, a contemplated operation of improved circuitry 300 is consistent with a contemplated operation of circuitry 100. FIG. 3 shows a simplified circuit diagram of an example improved circuitry 300, in accordance with one or more embodiments of the disclosure. Notably, improved circuitry 300 is one example of a generalized circuitry that may be created based on a clock gating analysis of this disclosure. One of ordinary skill in the art would understand that many other circuitries may be used to achieve a similar improvement in clock gating efficiency from circuitry 100 to improved circuitry 300. Improved circuitry 300 includes modified gating logic 314, which supplies enable 316, and is responsible for the differences between the gating periods and propagating periods shown in FIG. 2A and FIG. 2B. Notably, modified gating logic 314 is illustrated in a simplified format and comprises logic 314-1 and logic 314-2. Logic 314-2 is an XOR gate configured to output a "1" if any of the input signals to N-bit register 308 are different from a corresponding output signal of N-bit register 308, and a "0" if all of the input signals to N-bit register 308 are the same as a corresponding output signal of N-bit register 308. Logic 314-1 comprises the (previous) gating logic 114 and an AND gate, and the AND gate receives an output of logic 314-2 and an output of gating logic 114. This is a simplified example to illustrate a contemplated arrangement, and one of ordinary skill in the art would understand that it could be represented by other combinations of combinational logic.
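The following Python sketch models the behavior of modified gating logic 314 as just described - per-bit comparisons of the register's input and output, reduced by logic 314-2 and ANDed with the original enable by logic 314-1. It is a behavioral illustration only; the function modified_enable and its bit-tuple inputs are hypothetical conventions of this sketch, not the disclosed hardware.

```python
def modified_enable(original_enable, reg_in, reg_out):
    """Behavioral model of modified gating logic 314: logic 314-2 outputs 1
    if any input bit of the N-bit register differs from the corresponding
    output bit (per-bit XOR, OR-reduced), and logic 314-1 ANDs that result
    with the original enable from gating logic 114. The clock is requested
    only when stored data would actually change."""
    data_would_change = any(i ^ o for i, o in zip(reg_in, reg_out))
    return original_enable and data_would_change

# The register already holds 0b1010 and 0b1010 is presented again: even
# though gating logic 114 asserts enable, the clock is suppressed.
assert modified_enable(True, (1, 0, 1, 0), (1, 0, 1, 0)) is False
# A differing input bit re-opens the gate.
assert modified_enable(True, (1, 0, 1, 1), (1, 0, 1, 0)) is True
```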
One technique for selecting new gating logic is to observe an exclusive-or (XOR) operation between a data input and a data output of a flip-flop: only when an input and an output differ does a clock need to be supplied. One advantage of such a technique is to save power. However, a circuitry design may take into account many different considerations and combinations of considerations, including power usage, timing, signal strength, etc. For example, if an enable signal is supplied from a finite state machine (FSM), the FSM may be redesigned to only output an enable at a time a state is changing (e.g., data is changing). Clock gating analysis techniques and clock gating improvement techniques that are described in this disclosure may be implemented in hardware, software, and combinations thereof. Moreover, they may be used to analyze an electronic circuit or parts of an electronic circuit, for example, implemented in a configurable processor, a field-programmable gate array, or analog circuits; and also to analyze a design of an electronic circuit such as may be described in a logic gate model. FIG. 4 shows a flowchart of a clock gating analysis process 400, in accordance with one or more embodiments of the disclosure. In operation 402, a propagating period of a clock gate and an idle period of a gated-device are identified. In one embodiment, an idle period may correspond to a period of inactivity at a gated-device, which may be detected by observing bus lines operatively coupled to the gated-device. In operation 404, wasted clock propagation time is identified responsive to an overlap of a propagating period and an idle period. Wasted clock propagation time may be identified and recorded, for example, using a start time and a stop time. Notably, a propagating period may actually comprise multiple propagating periods, and an idle period may comprise multiple idle periods, and so multiple periods of overlap and multiple periods of wasted propagation time may be identified. In operation 406, modified clock gating logic is configured responsive to the wasted propagation time. In some cases, a clock gate may be a particular element in a standard cell library, and there may be many types of clock gates with different behavior (e.g., associated clock gating logic). In one embodiment, a list of clock gates and their associated behavior may be used to identify candidate clock gates, and then a standard cell library may be searched for the candidate clock gates. If available, a candidate clock gate may be investigated to see if it improves operation of a circuitry, and, more specifically, clock gating efficiency. In operation 408, a first activity of the gated-device before configuring the modified clock gating logic is compared to a second activity of the gated-device subsequent to configuring the modified clock gating logic. In operation 410, the first activity and the second activity are confirmed to be consistent, as sketched below.
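A minimal Python sketch of the consistency check of operations 408 and 410, assuming a toy cycle-accurate register model and recorded per-cycle inputs and enables; the names simulate_register and consistent are hypothetical and the model deliberately omits the combinational logic surrounding the register.

```python
def simulate_register(inputs, enables):
    """Toy cycle-accurate model of an N-bit register behind a clock gate:
    the register captures its input only in cycles where the (possibly
    modified) enable opens the gate. Returns the per-cycle outputs."""
    state, outputs = None, []
    for value, enable in zip(inputs, enables):
        if enable:
            state = value
        outputs.append(state)
    return outputs

def consistent(inputs, original_enables, modified_enables):
    """Operations 408 and 410 in miniature: compare gated-device activity
    before and after the gating-logic change and confirm it matches."""
    return (simulate_register(inputs, original_enables)
            == simulate_register(inputs, modified_enables))

inputs = ['a', 'b', 'b', 'b', 'c']
original = [True, True, True, True, True]    # gate always open
modified = [True, True, False, False, True]  # clock suppressed while data holds
assert consistent(inputs, original, modified)
```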
In one or more embodiments, a clock gate analyzer may simulate electronic circuitry for all relevant devices (e.g., clock gates and gated-devices) without using a sampling period - in other words, sampling for all signaling changes. However, in some cases there may be a trade-off in simulation efficiency. More particularly, the more complex a circuitry is in terms of number of clock gates and/or gated-devices, and the more signaling changes that are tracked, the higher the cost in storage space (e.g., to store waveforms), processing power, and/or simulation run time. So, in one or more embodiments, a clock gate analyzer may first simulate clock gates without using a specific sampling period, in other words, sampling for all signaling changes. A clock gate analyzer may determine relevant clock frequencies and/or sampling periods for simulation of all relevant devices (e.g., clock gates and/or gated-devices) based on results of the first simulation. A second simulation may be performed for all relevant devices using the determined frequencies and/or sampling periods. Notably, using the determined sampling period to simulate all relevant devices has a lower cost than simulating all relevant gated-devices for all signaling changes. FIG. 5 shows a functional block diagram of an example embodiment of a clock gate analyzer 500 configured, generally, to analyze clock gating for an electronic circuitry (such as circuitry 100 of FIG. 1), in accordance with one or more embodiments of the disclosure. More specifically, clock gate analyzer 500 may be configured to analyze clock gating associated with an electronic circuitry design, for example, a gate-level logic implementation (e.g., a gate-level netlist file in Verilog) inferred from a behavioral design of an electronic system (e.g., a SystemVerilog file), for example, using a compiler. In the example shown in FIG. 5, clock gate analyzer 500 includes, generally, logic interpreter 502, simulator 520, clock gating interpreter 524, full circuitry simulator 528, and efficiency interpreter 532. Logic interpreter 502 may be configured to interpret a gate-level logic model 504 to identify logic cells, extract logic cell information, and provide logic cell information to populate logic cells records 506. Logic cells records 506 may include fields for pre-defined logic cell types, including clock gates 508, gated-devices 510, and connections 512. In one embodiment, logic cells records 506 may also store information about timing elements, such as buffers (not shown). In one or more embodiments, gate-level logic model 504 may describe connectivity and behavior of components of an electronic circuitry design. In other embodiments, gate-level logic model 504 may describe connectivity (e.g., a list of cells, nodes, and/or some attributes of cells), and interpreter 502 may also include a description (e.g., library files) of behaviors for one or more cells that are described in the gate-level logic model 504. Connectivity in a gate-level logic model 504 may describe a physical connection (e.g., a wire) or a signal and a signal path (e.g., a signal and an identifier for a connected device that receives the signal). In one embodiment, logic interpreter 502 may include a terminology table that describes naming conventions used for different types of logic cells in gate-level logic models, including gate-level logic model 504. In one embodiment, a terminology table may be, or be based on, a standard cell library. Logic interpreter 502 may use terminology described in the terminology table to identify clock gates, gated-devices, and connections described in gate-level logic model 504 and extract logic cell information 514.
Logic cell information 514 may include full modules (e.g., detailed sub-blocks and high-level functionality that encapsulates the sub-blocks) as well as instance paths and wire aliases for each module.Logic interpreter 502 may be configured to parse gate-level logic model 504 to identify clock gates and trace connectivity (e.g., wires or signals) from a clock output of a clock gate to every gated-device that is driven (i.e., the clock output of the clock gate is received at a clock input of the gated-device) by the clock gate. Information about each clock gate and gated-device(s) it drives may be stored by logic interpreter 502 as extracted logic cell information 514 in logic cells records 506. Logic interpreter 502 may use terminology described in a terminology table to identify clock gates, gated-devices, and connections described in gate-level logic model 504 and extract logic cell information 514.Logic interpreter 502 stores the logic cell records 506 in a connection model 516, which includes a description of clock gates and the gated-devices that they drive.In addition to extracting logic cell information 514 from gate-level logic model 504, logic interpreter 502 is configured to generate clock gate (CG) simulation commands 518, which are parsed commands for running a simulation of relevant clock gates atsimulator 520. CG simulation commands 518 may include simulation commands, a sampling frequency determined by logic interpreter 502, and a gate-level logic model 504. Logic interpreter 502 may determine CG simulation commands 518 based on connectivity and behavior information stored in connection model 516. In one embodiment, CG simulation commands 518 may be stored as a command file.Simulator 520 is configured to output clock gate (CG) behavior 522 (e.g., waveforms) of relevant clock gate signals, responsive to CG simulation commands 518, and more specifically, gate-level logic model 504, simulation commands, and sampling frequency. CG behavior 522 may describe signals that may be logged and analyzed in more detail to understand an impact of clock gates in the electronic circuitry design.CG behavior 522 may also include state information for all connections whenever a connection changes state during an analysis period. In one or more embodiments, sampling periods (e.g., corresponding to sampling frequencies as noted above) may be defined to simplify clock gate and/or clock gate and gated-device simulation and analysis.In one or more embodiments, logic interpreter 524 may be configured to determine a sampling frequency to be used during a second simulation (e.g., by full circuitry simulator 528) using CG behavior 522. Because a frequency of a system clock can change or a system clock may have different clocks with different frequencies during an analysis period, in one embodiment, logic interpreter 524 identifies a sampling frequency for which no, or inconsequential, data will be missed (e.g., will not be logged).Any suitable technique known to those having ordinary skill in the art may be used to identify one or more sampling frequencies. 
For example, according to a general identification process contemplated in this disclosure, logic interpreter 502 identifies a “lowest” clock frequency fs and a“highest” clock frequency^//during an analysis period, determines a multiple in of that“lowest” clock frequency that is the same or faster than the identified“highest” clock frequency, and then defines a sampling frequency as m xfs for analyzing a circuitry (which may also be referred to as a sampling rate, and having a corresponding sampling period).According to another general identification process contemplated in this disclosure, logic interpreter 502 identifies all clock frequencies during an analysis period, determines a lowest-common-multiple (LCM) frequency of the clock frequencies, and defines an analysis frequency as an LCM frequency.To identify clock frequencies (whether all or just the lowest and highest frequencies), in one embodiment, for each clock gate for which an analysis run is performed over an analysis period, logic interpreter 524 may be configured to identify greatest common divisors of clock frequencies of sub-periods of the whole analysis period. In one embodiment, relevant sub-periods may be found by checking a timestamp for each toggle of a clock and recording a time interval between each such timestamp. For example, if there is a first toggle at time tl and a second toggle at time t2, then time elapsed between tl and t2 may be recorded as a found half-period, where a found period would be from rising-edge to rising-edge.Clock gating interpreter 524 may be configured to generate ECD simulation commands 526 and provide ECD simulation commands 526 to full circuitry simulator 528. ECD simulation commands 526 may include, for example, all relevant signals for simulating relevant devices (e.g., clock gates and gated-devices) and sampling frequencies for simulating relevant devices.A series of descriptions of clock gate and gated-device (CG-GD) behavior 530 are output by full circuitry simulator 528 in response to ECD simulation commands 526. For each clock gate, a CG-GD behavior 530 may describe a series of waveforms output by the clock gate and waveforms output by gated-devices in its fan out.Each waveform of CG-GD behavior 530 (e.g., each waveform file) may be analyzed by efficiency interpreter 532. Efficiency interpreter 532 may be configured to check, for each clock gate: (1) how many cycles in a row a clock gate is driving gated- devices; and (2) an efficiency of a clock gate by measuring the number of clock cycles during propagating periods gated-devices are idle or active. 
Efficiency interpreter 532 may also be configured to determine a number of gated-devices in its fan out, for example, by looking at a number of connections.Notably, since a period for a clock that is propagated by a clock gate could vary, efficiency interpreter 532 may be configured to detect active clock edges, and to check for activity at gated-devices responsive to detected active clock edges.In one or more embodiments, evaluation information may be collected for each clock gate, compared, and stored, and then output in a human and/or computer-readable format in report 534, which may be stored.In one or more embodiments, each of logic interpreter 502, CG interpreter 524, and efficiency interpreter 532 may be a computer program (e.g., a compiled program in object code), a script written in a scripting language, and combinations thereof (e.g., a script that controls or invokes one or more computer programs, and vice versa).While simulators, such as simulator 520 and full circuitry simulator 528, are shown and described as separate functional blocks, in one or more embodiments they may be part of the same functional module or software package. Moreover, an analysis process may be described herein, generally, as a first simulation where clock gates and signals they provide are identified based on the first simulation and then clock gates, signals they provide, and gated-devices, are more fully monitored and analyzed in a second simulation. However, in one or more embodiments, clock gates and gated-devices may be analyzed together.FIGS. 6, 7 and 8 show flow-charts for example processes 600, 700 and 800 implemented as scripts for logic interpreter 502, CG interpreter 524, and efficiency interpreter 532, respectively, in accordance with one or more embodiment of the disclosure.FIG. 6 shows a flow chart of an example logic interpretation process 600(performed, for example, by logic interpreter 502), in accordance with one or more embodiments of the disclosure. In operation 602, clock gate cell names, register cell names, and buffer cell names, one by one or in combination, are used as arguments (i.e., to define a parameter) for cell types of interest, and a logic model is searched for the cell types of interest. In operation 604, bottom level instantiations of logic cells corresponding to each logic cell type name is returned. In one embodiment, a bottom level instantiation is a place in a netlist where a logic cell is picked from a standard cell library, and is hence the only place where the cell’s module name is used. The whole netlist is parsed and all the logic cell instances corresponding to logic cell type names are collected. Inoperation 606, each module of which a cell was instantiated is identified and a module’s name is collected. Each place where a module name is called upon later is checked within a logic model, and key information is extracted. In one embodiment, key information is an instance name, wire aliases, and a name of the module that it was instantiated within.In some cases, modules may be nested (modules within modules within modules), in which case a module name may be a module path of nested modules.After extracting key information about logic cell types, in operation 608, relationships among identified logic cells are determined. 
Since the key information has been acquired about the input and output wires for logic cell for all the different places that it is used within an electronic circuitry design, connections among the logic cells may be identified based on the key information. In operation 610, extracted cell information is provided and cleaned to ensure correct formatting for identified connections. In operation 612, a cell tree of clock gates is formed and used as a search tree to search the cell information for gated-devices connected to a given clock gate.In operation 614, a cell tree is stepped through one clock gate at a time to search lists of cell information. All logic cells are returned for gated-devices coupled (e.g., that are supplied an enabled clock signal (“ENCLK signal”) from a clock gate as an input clock signal) to a given clock gate, for example, register, or another clock gate, and the logic cell information for the identified logic cells is stored. Each such gated-device is recorded as gated by the particular clock gate.By way of a contemplated example, in one or more embodiments, all buffers that are supplied (e.g., coupled to) an ENCLK signal from a particular clock gate are found.An output wire of each such buffer is recorded. The output wire is then viewed as equivalent to an ENCLK signal. A cell-tree is stepped through again, one clock gate at a time, to search cell information and identify gated-devices and buffers supplied by the ENCLK signal. Newly identified gated-devices are recorded as well as newly identified buffers, which are viewed as equivalent to an ENCLK signal. This process is repeated until no more buffers are identified and it is assumed that all gated-devices have been found and stored for a clock gate as its found gated-devices.In one or more embodiments, when a gated-device is found, such as a register, a gated-device identifier (e.g., a number of several digits) is stored in a list together with a clock gate identifier to show that clock gate and gated-device are connected. In one embodiment, a gated-device identifier is a number corresponding to a location in a gated- device list the particular gated-device can be found. When a gated-device is found, that branch of that clock tree is no longer necessary for more logic cells connected to a gated- device’ s output since a gated-device does not propagate a clock signal, meaning that logic cells connected to a gated-device’ s output are not driven by the respective clock gate.FIG. 7 shows a flowchart of an example clock gating interpretation process 700, in accordance with one or more embodiments of the disclosure.In operation 702, a longest sampling period for which clock gates each may be sampled without losing any relevant information is searched for and identified. In operation 704, all paths for signals of registers and clock gates that are to be analyzed are found. Each path may be described as a position for a particular cell in a circuitry design, and, more particularly, may be described in a list of nested modules that the particular cell is located within. In operation 706, the paths are parsed into as a series of terminal commands. A terminal command of the series of terminal commands may be a command to perform a simulation of a single clock gate and its associated gated-devices (e.g., driven registers). 
The command may include specific instructions for running an electronic circuitry design in a simulator (e.g., for a commercially available simulation engine), the path for each relevant device (here, each relevant cell), and a sampling frequency and/or sampling period. In operation 708, relevant waveforms are exported, each waveform (or subset of the waveforms) is at a desired sample period for analyzing a specific clock gate and its gated-devices.In one or more embodiments, to find a desired sample period, a correct waveform for each clock gate is found (i.e., specific to a clock gate), exported to a file, and searched line by line. Every clock edge is detected, and a time of a clock edge event is read and stored. For every time a rising clock edge has been detected, a time since a clock fell until a rising edge is recorded, and vice versa for the event of a falling edge. A resulting half period is identified and stored. In one embodiment, the resulting-half period and other sample periods that are identified are added to a list of candidate sample periods. One or more greatest common divisors for half-periods may be found and appended to the list of candidate sample periods. Upon analyzing all the clock gates and completing a list of candidate sample periods list, the list of candidate sample periods is returned. A list of candidate sample periods may be analyzed and appropriate sampling periods may be selected for simulating an electronic circuitry design, and selected sampling periods may be used to determine sampling frequencies.A list of connections between the clock gates and registers may be used to create a terminal command to run a digital simulator. The correct paths for the signals that are needed in the terminal commands are also created. An output may be a file listing one terminal command for each of the clock gates. This file is read run one by one. A file (e.g., a comma separated values type file) is written out for each clock gate and stored, for example, in a directory of a file system or a database.FIG. 8 shows a flowchart of an example efficiency interpretation process 800, in accordance with one or more embodiments of the disclosure. Generally, wave forms corresponding to simulated electronic circuit designs may be analyzed and dynamic efficiency information collected and stored. Dynamic efficiency information may include, for example, a number of clock cycles where a gated-device is active, a number of cycles where a gated-device is static and a clock-gate corresponding to such gated-device propagating a clock (an open state), a maximum number of consecutive cycles where the clock gate is propagating the clock, a number of gated and ungated-device in a circuitry, and a number of gated-devices in a fan out of a clock gate (e.g., stored as a list of connections to a clock gate). In one or more embodiments, clock-gates and gated-devices may be quantified (e.g., as a score) according to dynamic efficiency information, for example, using a raw score, a percentage, a ranking, and combinations thereof. 
For example, counts of gated versus ungated-devices; a dynamic efficiency score (e.g., in percentages) for clock gates, groups of clock gates, gated-devices, and groups of gated- devices calculated as a comparison of wasted propagating period versus total propagating period; and overall dynamic efficiency score (e.g., in percentages) for a circuit calculated using one or more of the foregoing dynamic efficiency scores.Generally, a waveform file may be processed to collect dynamic efficiency information for tested criteria, e.g., a dynamic efficiency in terms of percent, a maximum number of consecutive cycles where a clock gate is open, and a number of gated-devices in a clock gates fan out. Since periods of samples may be shorter than a period of a clock, an edge detector may be implemented to determine when to read signals of interest. By way of a contemplated example, when a sampling frequency is higher than a clock frequency for a particular part of a waveform, each sample of a clock may be observed to determine if it is different than a previous sample of the clock - in other words, if there has been a change - i.e., activity. If the sample is different than the previous sample, then it may be inferred that is where an edge occurred (or within a number of clock cycles corresponding to a delay).Turning to example efficiency interpretation process 800, in operation 802, a waveform is read for a first or next) clock gate and its gated-devices (e.g., registers in its fan out). In operation 804, a first or next clock cycle is detected (e.g., using edge detection techniques) based on the waveform. In operation 806, it is determined whether the clock gate is in propagating mode of operation (also referred to herein as“open mode”) for the current clock cycle (i.e., the detected first or next clock cycle in operation 802).If, in operation 806, it is determined that the clock gate is in a propagating mode for the current clock cycle, then in operation 808 an open mode cycle count is incremented.The open mode cycle count is a count of a number of consecutive cycles that a clock gate is in a propagating mode - in other words, a measure of propagating cycles. In operation 810, states of gated-devices are read for the current clock cycle and the previous clock cycle to identify if states for such gated-devices have changed. In operation 812, the states of gated-devices for the current clock cycle and the previous clock cycle are compared, and it is determined if any state changes are detected. If register values have not changed (i.e., a register is static), then in operation 816 a static clock cycle is recorded - that is, a clock cycle is recorded where a clock gate was propagating but a gated-device was static. In one embodiment, recording a static cycle may include incrementing a static cycle count. If gated-device states have changed (i.e., a register is active), then in operation 814 an active cycle may be recorded. In one embodiment, recording an active cycle may include incrementing an active cycle count. After recording an active cycle (operation 814) or a static cycle (operation 816), as the case may be, in operation 824, it is determined there are more clock cycles to analyze for the current clock gate based on the waveform. 
If there are more clock cycles to analyze then process 800 loops back to operation 804 and detects the next clock cycle.If, in operation 806, it is determined that a clock gate is in gate mode (i.e., not propagating a clock) for the current clock cycle then in operation 818 it is determined if a current open mode cycle count (which is also a count of consecutive propagating cycles) is larger than a previously stored open mode cycle count. If not larger, then in operation 822 the current open mode cycle count is discarded in favor of the previously stored open mode cycle count. If larger, then in operation 820 the previously stored open mode cycle count is discarded and the current open mode cycle count is stored. Upon analyzing a clock gate, a stored counter will correspond to a detected maximum number of consecutive cycles for which a clock gate is propagating. After storing or discarding the current open mode cycle count, as the case may be, in operation 824 it is determined there are more clock cycles to analyze for the current clock gate based on the waveform. If there are more clock cycles to analyze, then process 800 loops back to operation 804 and detects the next clock cycle.If, in operation 824, it is determined that there are no more clock cycles to analyze, then, in operation 826, scores for a clock gate are calculated and stored. Scoring examples, such as dynamic efficiency scores, are described later in this disclosure. In operation 828, it is determined if there are more clock gates to analyze. If there are more clock gates to analyze, then, process 800 proceeds to operation 802 to read a next waveform for a next clock gate and its gated-devices.If, in operation 824, it is determined that there are no more clock gates to analyze, then in operation 830, a weighted score is assigned to each clock gate (e.g., a weighting of dynamic efficiency score that represents its contribution to a score for an entire system), and results are parsed to a file and stored. Weighted scores may be used, among other ways, to compare clock gates to each other. For example, weighted scores may be used to determine which clock gates of a set of clock gates are the most inefficient based on a set of criteria. Weighting may involve determining which metrics are most important and assigning a weight (e.g., a value) to specific metrics. Importance may be determined, for example, based on contribution to factors such as contribution to power consumption. Metrics considered when determining weights may include number of consecutive clock cycles propagating and total number of cycles a clock gate receives an input (i.e., number of cycles where the clock gate is gated by another, closed, clock gate).Process 800 is complete after performing operation 830.A results script may be configured to create an output file, leading with general information about an electronic system design as a whole. General information may include, for example, an amount (e.g., a count) of ungated versus gated registers, a dynamic efficiency for gates and/or groups of gates (e.g., in a percentage), an overall average dynamic efficiency score (e.g., in a percentage), and dynamic efficiency information more generally.After an overview, rated/scored lists for clock gates may be described. 
An amount of clock gates shown for each list may be given, for example, by user input, and pre-set to show a number of worst and best clock gates (e.g., in terms of dynamic efficiency), for example, set to show the ten worst clock gates and ten best clock gates.By way of example, each entry starts with a clock gate name which is followed by a number for each tested criterion. A path for a clock gate may be shown, for example, in a netlist type display arrangement. At the bottom of an entry a path may be shown for each gated-device (e.g., register) that is driven by a clock gate. In one or more embodiments, register paths may be used to locate a clock gate within original RTL code, given that the clock gates are not physically inferred before the synthesis stage, i.e., in a netlist.Report 1 is an example summary for a whole electronic system, generated in accordance with one or more embodiments of the disclosure:SUMMARY OF ELECTRONIC SYSTEMNumber of Registers (Total | Gated | Ungated): 4063 | 3366 | 697 Clock Gate Percentage Total: 82.85%Dynamic Clock Gate Efficiency On Average in Percent: 3.39%Report 2 is an example report for a clock gate that is reported to be the most inefficient of an electronic system, generated in accordance with one or more embodiments of the disclosure:SNPS CLOCK GATE HIGH-clkctr oscel sysclk:Stationary for max 10512 cycles consecutivelyEfficiency in percent: 0.00Flip-flops in register: 4Path: ®· ISYNT-> i smclkctrl chipclkctrl-> i_oscsel_sysclk-> i_oscsel_sysclk clk_gate_s> s_osc_en_reg_qual_reg_3Gate registers:clkctrl chip-> i_oscsel_sysclk-> sys_osc_en_reg-qual_reg_3_clkctrl chip-> i_oscsel_sysclk-> sys_osc_en_reg-qual_reg_2_clkctrl chip-> i_oscsel_sysclk-> sys_osc_en_reg-qual_reg_l_clkctrl chip-> i_oscsel_sysclk-> sys_osc_en_reg-qual_reg_0 Report 3 is an example report for a clock gate that is reported to be the most efficient of another electronic system, generated in accordance with one or more embodiments of the disclosure:SNPS_CLOCK GATE HIGH port_busif_3 0 13: _Stationary for max 3 cycles consecutivelyEfficiency in percent: 100.00Flip-flops in register: 152One of ordinary skill in the art will appreciate that there are many benefits and advantages associated with implementation of embodiments of the disclosure other than those specifically described.While elements such as clock gates, combinational logic, gated-devices, buses and various connections may be described herein, one of ordinary skill in the art would understand that such elements and connections may physical, such as part of circuitry, logical, such as defined in a design like a gate-level model, or both physical and logical, such as a design corresponding to circuitry. Examples and embodiments described in this disclosure should be interpreted to cover physical implementations and logical implementations, independently and in combination, and legal equivalents thereof; unless an example or embodiment is specifically stated to apply to one of a logical or physical implementation, or a context would be understood by one of ordinary skill in the art to apply to one of physical or logic implementations, in which case it covers such stated or understood implementation and legal equivalents thereof.For at least the reasons set forth herein, various embodiments of the present disclosure provide a technical solution to one or more problems that arise from technology that could not reasonably be performed by a person. 
Various embodiments of the present disclosure provide technical advantages and technical solutions that are rooted in computer technology, overcome problems and/or disadvantages rooted in computer-related technology, and improve computer-related technology generally, including problems, disadvantages, and challenges described herein. Further, at least some embodiments disclosed herein may improve computer-related technology by allowing a computer to perform a function not previously performable by a computer.Many of the functional descriptions in this specification may be illustrated, described or labeled as modules, threads, steps, or other segregations of programming code, including firmware, in order to more particularly emphasize their implementation independence. Modules may be at least partially implemented in hardware, in one form or another. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.Modules may also be implemented using software or firmware, stored on a physical storage device (e.g., a computer-readable storage medium which may also be referred to herein as simply a computer-readable medium), in memory, or a combination thereof for execution by various types of processors.An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as a thread, object, procedure, or function. Nevertheless, the executable of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several storage or memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. Where a module or portions of a module are implemented in software, the software portions are stored on one or more physical storage devices.In some embodiments, the software portions are stored in a non-transitory state such that the software portions, or representations thereof, persist in the same physical location for a period of time. Additionally, in some embodiments, the software portions are stored on one or more non-transitory storage devices, which include hardware elements capable of storing non-transitory states and/or signals representative of the software portions, even though other portions of the non-transitory storage devices may be capable of altering and/or transmitting the signals. Examples of non-transitory storage devices are flash memory and random-access-memory (RAM). 
Another example of a non-transitory storage device includes a read-only memory (ROM) which can store signals and/or states representative of the software portions for a period of time. However, the ability to store the signals and/or states is not diminished by further functionality of transmitting signals that are the same as or representative of the stored signals and/or states. For example, a processor may access the ROM to obtain signals that are representative of the stored signals and/or states in order to execute the corresponding software instructions.Software portions of modules may also be stored on computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media may be any available media that may be accessed by a general-purpose or special-purpose computer.By way of example, and not limitation, such computer-readable storage media may include non-transitory storage media including Random Access Memory (RAM), Read- Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory(EEPROM), Compact Disc Read-Only Memory (CD-ROM), or other optical disk storage (such as a digital video disc or“DVD”), magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid state memory devices), or any other storage medium which may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which may be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable storage media and storage mediums.Computer-executable instructions may include, for example, instructions and data configured to cause a processor to perform a certain operation or group of operations, including to perform the various embodiments of this disclosure.The embodiments described herein may be embodied, wholly or partially, in one or more computer program products supplied on any one of a variety of computer-readable storage media. The computer program product(s) may be embodied in computer language statements.The term“computer program product” is used to refer to a computer-readable storage media, as defined above, which has on it any form of software to enable a computer system to operate according to any embodiment of the invention. Software applications may include software for facilitating interaction with software modules, including user interface and application programming interfaces. 
Software may also be bundled, especially in a commercial context, to be built, compiled and/or installed on a local computer.Additional non-limiting embodiments of the disclosure may include:Embodiment 1 : A method of analyzing an electronic system, comprising:identifying a propagating period of a clock gate; identifying an idle period of a gated- device operatively coupled to the clock gate; identifying a wasted propagation period responsive to an overlap of the propagating period and the idle period; configuring a modified clock gating logic responsive to the wasted propagation period; comparing a first activity of the gated-device to a second activity of the gated-device; and confirming the first activity and the second activity are consistent.Embodiment 2: The method of Embodiment 1, wherein comparing the first activity of the gated-device to the second activity of the gated-device comprises:determining the first activity of the gated-device responsive to activity associated with the gated-device prior to configuring the modified clock gating logic; and determining the second activity of the gated-device responsive to activity associated with the gated-device subsequent to configuring the modified clock gating logic.Embodiment 3 : The method of Embodiments 1 and 2, wherein identifying an idle period of a gated-device comprises: identifying activity on a data path that corresponds to a clock path that includes the clock gate and the gated-device; and identifying the idle period responsive to the identified activity.Embodiment 4: The method of any of Embodiments 1 to 3, wherein identifying the idle period comprises observing one or more state changes at an output of the gated- device.Embodiment 5 : The method of any of Embodiments 1 to 4, wherein identifying the idle period further comprises: identifying clock cycles corresponding to the observed state changes; and identifying a series of clock cycles between two consecutive state changes of the observed state changes.Embodiment 6: The method of any of Embodiments 1 to 5, wherein identifying the propagation period of the clock gate comprises identifying a series of clock cycles during which the clock gate is propagating a received clock.Embodiment 7 : The method of any of Embodiments 1 to 6, wherein the identifying the wasted propagation period responsive to the overlap of the propagating period and the idle period comprises: identifying a first series of clock cycles corresponding to the propagating period of the clock gate; identifying a second series of clock cycles corresponding to the idle period of the gated-device; and identifying at least one clock cycle that is the same for the first series of clock cycles and the second series of clock cycles.Embodiment 8: The method of any of Embodiments 1 to 7, wherein configuring the modified clock gating logic responsive to the wasted propagation period comprises configuring the modified clock gating logic to not propagate a received clock for at least some of the wasted propagation period.Embodiment 9: A system for analyzing electronic circuitry, comprising: a non- transitory storage medium configured to store electronic files of waveforms corresponding to operation of clock gates and gated-devices of an electronic circuitry; a processor for processing the electronic files of waveforms stored at the non-transitory storage medium, wherein the processor is configured to: identify a propagating period of a clock gate; identify an idle period of a gated-device operatively coupled to the clock 
gate; identify a wasted propagation period responsive to an overlap of the propagating period and the idle period; configure a modified clock gating logic responsive to the wasted propagation period; compare a first activity of the gated-device to a second activity of the gated-device; and confirm the first activity and the second activity are consistent.Embodiment 10: The system of Embodiment 9, wherein the processor is configured to compare the first activity of the gated-device to the second activity of the gated-device by: determining the first activity of the gated-device responsive to activity associated with the gated-device prior to configuring the modified clock gating logic; and determining the second activity of the gated-device responsive to activity associated with the gated-device subsequent to configuring the modified clock gating logic.Embodiment 11: The system of Embodiments 9 and 10, wherein the processor is configured to identify the idle period of a gated-device by: identifying activity on a data path that corresponds to a clock path that includes the clock gate and the gated-device; and identifying the idle period responsive to the identified activity.Embodiment 12: The system of any of Embodiments 9 to 11, wherein the processor is configured to identify the idle period comprises observing one or more state changes at an output of the gated-device.Embodiment 13: The system of any of Embodiments 9 to 12, wherein the processor is configured to identify the idle period further by: identifying clock cycles corresponding to the observed state changes; and identifying a series of clock cycles between two consecutive state changes of the observed state changes.Embodiment 14: The system of any of Embodiments 9 to 13, wherein the processor is configured to identify the propagation period of the clock gate by identifying a series of clock cycles during which the clock gate is open.Embodiment 15: The system of any of Embodiments 9 to 14, wherein the processor is configured to identify the wasted propagation period responsive to the overlap of the propagating period and the idle period by: identifying a first series of clock cycles corresponding to the propagating period of the clock gate; identifying a second series of clock cycles corresponding to the idle period of the gated-device; and identifying at least one clock cycle that is the same for the first series of clock cycles and the second series of clock cycles.Embodiment 16: The system of any of Embodiments 9 to 15, wherein the processor is configured to configure the modified clock gating logic responsive to the wasted propagation period by configuring the modified clock gating logic to close the clock gate for at least some of the wasted propagation period.Embodiment 17: A computer program product, comprising: a computer-readable medium; and instructions stored on the computer-readable medium, the instructions configured to enable a processor to perform operations of: identifying a propagating period of a clock gate; identifying an idle period of a gated-device operatively coupled to the clock gate; identifying a wasted propagation period responsive to an overlap of the propagating period and the idle period; configuring a modified clock gating logic responsive to the wasted propagation period; comparing a first activity of the gated-device to a second activity of the gated-device; and confirming the first activity and the second activity are consistent.Embodiment 18: The computer program product of Embodiment 17, wherein 
the instructions are configured to enable the processor to compare the first activity of the gated-device to the second activity of the gated-device by: determining the first activity of the gated-device responsive to activity associated with the gated-device prior to configuring the modified clock gating logic; and determining the second activity of the gated-device responsive to activity associated with the gated-device subsequent to configuring the modified clock gating logic. Embodiment 19: The computer program product of Embodiments 17 and 18, wherein the instructions are configured to enable the processor to identify the idle period of a gated-device by: identifying activity on a data path that corresponds to a clock path that includes the clock gate and the gated-device; and identifying the idle period responsive to the identified activity.Embodiment 20: The computer program product of any of Embodiments 17 to 19, wherein the instructions are configured to enable the processor to identify the idle period comprises observing one or more state changes at an output of the gated-device.Embodiment 21 : The computer program product of any of Embodiments 17 to 20, wherein the instructions are configured to enable the processor to identify the idle period further by: identifying clock cycles corresponding to the observed state changes; and identifying a series of clock cycles between two consecutive state changes of the observed state changes.Embodiment 22: The computer program product of any of Embodiments 17 to 21, wherein the instructions are configured to enable the processor to identify the propagation period of the clock gate by identifying a series of clock cycles during which the clock gate is open.Embodiment 23: The computer program product of any of Embodiments 17 to 22, wherein the instructions are configured to enable the processor to identify the wasted prorogation period responsive to the overlap of the propagating period and the idle period by: identifying a first series of clock cycles corresponding to the propagating period of the clock gate; identifying a second series of clock cycles corresponding to the idle period of the gated-device; and identifying at least one clock cycle that is the same for the first series of clock cycles and the second series of clock cycles.Embodiment 24: The computer program product of any of Embodiments 17 to 23, wherein the instructions are configured to enable the processor to configure the modified clock gating logic responsive to the wasted propagation period by configuring the modified clock gating logic to close the clock gate for at least some of the wasted propagation period.Embodiment 25: A method of analyzing an electronic circuitry design, comprising: identifying logic cells comprising a gate-level logic model of an electronic circuitry design; generating first simulation commands for simulating the electronic circuitry design responsive to the identified logic cells; performing a first simulation responsive to generating the first simulation commands; identifying clock gate behavior responsive to performing the first simulation; generating second simulation commands for simulating the electronic circuitry design responsive to identifying the clock gate behavior; performing a second simulation responsive to generating the second simulation commands; collecting dynamic efficiency information for the electronic circuitry design responsive to performing the second simulation; and scoring one or more of the electronic circuitry design, 
clock-gates of the electronic circuitry design, and gated-devices of the electronic circuitry design responsive to collecting the dynamic efficiency information.Embodiment 26: The method of Embodiment 25, wherein collecting dynamic efficiency information comprises collecting one or more of: a number of clock cycles where a gated-device is active; a number of cycles where a gated-device is static and a clock-gate corresponding to such gated-device is propagating a received clock; and a number of gated-devices in a fan out of a clock gate.Embodiment 27 : The method of Embodiments 25 and 26, wherein the generating the first simulation commands for simulating the electronic circuitry design responsive to the identified logic cells comprises: identifying clock gates described in the logic cells; and generating the first simulation commands responsive to the identified clock gates.Embodiment 28: The method of any of Embodiments 25 to 27, wherein the generating the first simulation commands responsive to the identified clock gates comprises generating simulation commands for simulating signaling changes at the identified clock gates.Embodiment 29: The method of any of Embodiments 25 to 28, wherein generating simulation commands for simulating signaling changes at the identified clock gates comprises: generating the simulation commands for simulating signaling changes at the identified clock gates but not for simulating signaling changes at other devices the electronic circuitry design.Embodiment 30: The method of any of Embodiments 25 to 29, wherein identifying clock gate behavior responsive to performing the first simulation comprises identifying sampling frequencies for performing a simulation of clock gates and gated- devices of the electronic circuitry design.Embodiment 31 : The method of any of Embodiments 25 to 30, wherein identifying sampling frequencies for performing the simulation of the clock gate and the gated-devices of the electronic circuitry design comprises: identifying a lowest clock frequency and a highest clock frequency during an analysis period; and determining a second clock frequency responsive to the identifying the lowest clock frequency and the highest clock frequency during the analysis period.Embodiment 32: The method of any of Embodiments 25 to 31, wherein determining the second clock frequency responsive to the identifying the lowest clock frequency and the highest clock frequency during the analysis period comprises:identifying a multiple of the lowest clock frequency that is the same or higher to the highest clock frequency; and defining the second clock frequency responsive to identifying the multiple of the lowest clock frequency.Embodiment 33: The method of any of Embodiments 25 to 32, wherein identifying sampling frequencies for performing the simulation of the clock gate and the gated-devices of the electronic circuitry design comprises: identifying all clock frequencies during an analysis period; and determining a least-common multiple clock frequency responsive to identifying all clock frequencies during the analysis period.Embodiment 34: The method of any of Embodiments 25 to 33, wherein generating second simulation commands for simulating the electronic circuitry design responsive to identifying the clock gate behavior comprises: generating simulation commands for simulating clock gates and gated-devices of the electronic circuitry using a sampling frequency.Embodiment 35: A system configured to perform any of the operations of one or more of Embodiments 
26 to 35.Embodiment 36: A computer program product configured to perform any of the operations of one or more of Embodiments 26 to 35.While the present disclosure has been described herein with respect to certain illustrated embodiments, those of ordinary skill in the art will recognize and appreciate that the present invention is not so limited. Rather, many additions, deletions, andmodifications to the illustrated and described embodiments may be made without departing from the scope of the invention as hereinafter claimed along with their legal equivalents.In addition, features from one embodiment may be combined with features of another embodiment while still being encompassed within the scope of the invention as contemplated by the inventors. |
Systems, apparatuses, and methods for identifying response data arriving out-of-order from two different memory types are disclosed. A computing system includes one or more clients for processing applications. A memory channel transfers memory traffic between a memory controller and a memory bus connected to each of a first memory and a second memory different from the first memory. The memory controller determines a given point in time when read data is to be scheduled to arrive on the memory bus from memory. The memory controller associates a unique identifier with the given point in time. The memory controller identifies a given command associated with the arriving read data based on the given point in time. |
1.A memory controller includes:A first interface, where the first interface is used to receive read response data from both a first memory device and a second memory device different from the first memory device on a data bus;A second interface, where the second interface is used to send the read response data to one of multiple clients; andControl logicWherein in response to determining that a given time point for receiving the read response data is reached, the control logic is configured to:Identifying a first memory access command based on the given point in time; andIn response to determining that valid data is received on the data bus at the given point in time:Mark the first memory access command as complete; andThe valid data is sent to a given client that generated the first memory access command.2.The memory controller of claim 1, wherein the control logic is configured to:Maintain a bit vector, where each bit in the bit vector corresponds to a scheduled time slot; andThe given position of the bit vector is set in response to determining to schedule data to be transmitted on the data bus at a time corresponding to a given time slot.3.The memory controller of claim 1, wherein in response to receiving an instruction to schedule a second memory access command to be transmitted, the control logic is further configured to send a read response for receiving the second memory access command The time point of the data is assigned to the identifier identifying the second memory access command.4.The memory controller of claim 3, wherein assigning the identifier to the second memory access command comprises: determining that the second memory access command is targeted at the same address as the unresolved memory access command Status access command.5.The memory controller of claim 1, wherein the control logic is further configured to generate a unique identifier for assignment to the given point in time.6.5. The memory controller of claim 5, wherein the unique identifier includes one or more of a thread identifier and a part of a target address targeted by the first memory access command.7.3. 
The memory controller of claim 1, wherein determining the given point in time comprises adding a response waiting time of the first memory access command to a time when the first memory access command is scheduled to be transmitted.8.A method including:Receiving read response data from a first memory device or a second memory device different from the first memory device on the data bus via the first interface;Sending the read response data to one of multiple clients through the second interface; andIn response to determining that a given point in time for receiving read response data is reached:Identifying a first memory access command based on the given point in time; andIn response to determining that valid data is received on the data bus at the given point in time:Marking the first memory access command as complete; andThe valid data is sent to a given client that generated the first memory access command.9.The method of claim 8, further comprising:Maintaining a bit vector, where each bit in the bit vector corresponds to a scheduled time slot; andThe given position of the bit vector is set in response to determining to schedule data to be transmitted on the data bus at a time corresponding to a given time slot.10.The method of claim 8, wherein in response to receiving an instruction to schedule the second memory access command to be transmitted, the method comprises: allocating a time point for receiving the read response data of the second memory access command Give an identifier that identifies the second memory access command.11.The method of claim 10, wherein assigning the identifier to the second memory access command comprises: determining that the second memory access command is a state access that targets the same address as an unresolved memory access command command.12.The method according to claim 8, further comprising: generating a unique identifier for assignment to the given point in time.13.The method of claim 12, wherein the unique identifier includes one or more of a thread identifier and a part of a target address targeted by the second memory access command.14.8. 
The method of claim 8, wherein determining the given point in time comprises adding the response waiting time of the first memory access command to the time when the first memory access command is scheduled to be transmitted.15.A computing system, which includes:A plurality of clients configured to generate a memory access request for data stored in a first memory device or a second memory device different from the first memory device; andA memory controller coupled to each of the first memory device and the second memory device;Wherein in response to determining that a given time point for receiving the read response data is reached, the memory controller is configured to:Identifying a first memory access command based on the given point in time; andIn response to determining that valid data is received on the data bus at the given point in time:Mark the first memory access command as complete; andThe valid data is sent to a given client that generated the first memory access command.16.The computing system of claim 15, wherein the memory controller is further configured to:Maintain a bit vector, where each bit in the bit vector corresponds to a scheduled time slot; andThe given position of the bit vector is set in response to determining to schedule data to be transmitted on the data bus at a time corresponding to a given time slot.17.The computing system of claim 1, wherein in response to receiving an instruction to schedule a second memory access command to be transmitted, the memory controller is further configured to send a read response for receiving the second memory access command The time point of the data is assigned to the identifier identifying the second memory access command.18.The computing system according to claim 17, wherein assigning the identifier to the second memory access command comprises: determining that the second memory access command is in a state that targets the same address as the unresolved memory access command Access command.19.The computing system of claim 15, wherein the memory controller is further configured to generate a unique identifier for assignment to the given point in time.20.The computing system of claim 19, wherein the unique identifier includes one or more of a thread identifier and a part of a target address targeted by the first memory access command. |
Supports response to memory types with non-uniform latency on the same channelBackground techniqueRelated technology descriptionA variety of computing devices integrate multiple types of ICs to provide heterogeneous integration of system functionality. Multiple functions are set in the processing node, and multiple functions include audio/video (A/V) data processing, other high-data parallel applications used in the medical and commercial fields, general instruction set architecture (ISA) processing instructions, Digital, analog, mixed signal and radio frequency (RF) functions, etc. There are multiple options for placing processing nodes in a system package to integrate multiple types of ICs. Some examples are system-on-chip (SOC), multi-chip module (MCM), and system-in-package (SiP).Regardless of the choice of system packaging, in several applications, the performance of one or more computing systems may depend on the processing node. In one example, the processing node is one of multiple processing nodes in the sockets of the multi-socket server. The server is used to provide services to other computer programs in the remote computing device and computer programs in the server. In another example, the processing node is used within a mobile computing device running several different types of applications, and may relay information to multiple users at once (both local and remote).Maintaining performance at a relatively high level usually requires quick access to the stored data. Several types of data-intensive applications rely on fast access to data storage to provide reliable high performance for several local and remote programs and their users. The memory hierarchy shifts from relatively fast volatile memory (such as registers on the processor die and caches located on or connected to the processor die) to nonvolatile and relatively slow memory . The interfaces and access mechanisms used for different types of memories have also changed. Therefore, any hybrid proposal for combining two different types of memory in a hierarchical structure poses a challenge to maintain high performance to meet the fast access requirements of running computer programs.In view of the above, an effective method and system for identifying response data arriving out of order from two different memory types is desired.Description of the drawingsThe advantages of the methods and mechanisms described herein can be better understood by referring to the following description in conjunction with the accompanying drawings. 
In the accompanying drawings:Figure 1 is a block diagram of an embodiment of a computing system.Figure 2 is a block diagram of one embodiment of a timing diagram.Figure 3 is a block diagram of another embodiment of a timing diagram.Figure 4 is a block diagram of another embodiment of a timing diagram.Figure 5 is a block diagram of another embodiment of a timing diagram.Figure 6 is a block diagram of another embodiment of a computing system.Figure 7 is a block diagram of one embodiment of a memory controller.Figure 8 is a flowchart of one embodiment of a method for scheduling memory requests for distribution to two different memory types.Figure 9 is a flowchart of another embodiment of a method for scheduling memory requests for distribution to two different memory types.Figure 10 is a flowchart of another embodiment of a method for scheduling memory requests for issuance to two different memory types.Figure 11 is a flowchart of another embodiment of a method for scheduling memory requests for distribution to two different memory types.Figure 12 is a flowchart of another embodiment of a method for scheduling memory requests for distribution to two different memory types.Figure 13 is a flowchart of one embodiment of a method for identifying response data arriving out of order from two different memory types.Figure 14 is a flowchart of another embodiment of a method for identifying response data arriving out of order from two different memory types.Although the present invention is susceptible to various modifications and alternative forms, specific embodiments are shown in the drawings by way of example and are described in detail herein. However, it should be understood that the drawings and detailed descriptions are not intended to limit the present invention to the specific forms disclosed, but on the contrary, the present invention will cover all that fall within the scope of the present invention as defined by the appended claims. Modifications, equivalents and alternatives.detailed descriptionIn the following description, numerous specific details are explained to provide a thorough understanding of the methods and mechanisms presented herein. However, those of ordinary skill in the art should recognize that various embodiments may be practiced without these specific details. In some cases, well-known structures, components, signals, computer program instructions, and techniques are not shown in detail to avoid obscuring the methods described herein. It should be understood that, in order to make the description clear and simple, the elements shown in the drawings are not necessarily drawn to scale. For example, the size of some elements may be enlarged relative to other elements.Various systems, devices, methods, and computer-readable media for identifying response data arriving out of order from two different memory types are disclosed. In various embodiments, the computing system includes one or more clients for processing applications. Examples of clients are general-purpose central processing units (CPU), graphics processing units (GPU), accelerated processing units (APU), input/output (I/O) devices, and so on. The memory channel within the memory controller transfers memory traffic between the memory controller and the memory bus connected to each of the first memory and the second memory.In various embodiments, the first memory and the second memory utilize different data storage technologies and have different access latency. 
The access latency of the first memory and the second memory may differ between them by at least a threshold amount of time. In other embodiments, each of the first memory and the second memory use the same data storage technology, but still have access latency that differs by at least a threshold amount of time. For example, in one embodiment, the first memory uses the same data storage technology as the second memory, but the first memory uses the onboard cache and the second memory does not.Each of the first memory and the second memory may include one of a variety of random access memories (RAM), such as a variety of dynamic random access memories (DRAM), and a variety of non-volatile (NV) One type of dual in-line memory module (DIMM) (such as NVDIMM-P), another type of data storage technology (such as phase change memory (PCM), ferroelectric memory (FeRAM), magnetoresistive memory (MRAM) ), resistance memory (ReRAM or RRAM), three-dimensional (3D) cross-point (XPoint) memory), etc. Therefore, the difference between the one or more access waiting times of the first memory and the one or more access waiting times of the second memory may exceed the threshold. In some embodiments, the access latency of the first memory measured from the issuance of the read command to the received response with valid data is on the scale of tens of nanoseconds. In various embodiments, the second memory access latency measured from the issuance of a read or status command to the received response is on the scale of hundreds of nanoseconds. Therefore, the difference between waiting times exceeds hundreds of nanoseconds, which may be higher than a given threshold amount of time.In various embodiments, the command processor or other logic converts each received memory request into one or more commands. The scheduler in the memory controller determines whether there are two pending memory access commands, such as a first command for the first memory type and a second command for the second memory type. The scheduler determines whether each of the first command and the second command can be issued without causing data conflicts on the shared memory data bus. For example, in addition to the access latency of each of the first memory and the second memory, the memory controller tracks and schedules the read response data to arrive on the shared memory data bus based on the point in time for issuing the selected command Time point in time. In some embodiments, the time point is measured in clock cycles. If selecting any one of the first command and the second command does not schedule data conflicts on the shared memory data bus, then each of the first command and the second command is still a candidate for issuance. In this case, the scheduler selects a command from the first command and the second command based on the arbitration logic.In other embodiments, in order to avoid data conflicts on the shared memory data bus, the scheduler in the memory controller determines that the read response data has not been scheduled to be at the next given point in time on the memory data bus. Then, the scheduler determines whether there is time to schedule a first memory access command for accessing the first memory that will provide response data at a given point in time. 
The scheduler also determines whether there is time to schedule a second memory access command for accessing the second memory that will provide response data at a given point in time.If there is enough time for at least one of the first access command and the second access command to provide response data at a given point in time, the scheduler selects one of the first memory access command and the second memory access command based on the arbitration logic. By. In one embodiment, the arbitration logic uses weighting criteria. The criteria include at least priority level, age, etc. Thereafter, the scheduler issues the selected access command to one of the first memory and the second memory via the memory channel.In some embodiments, when the scheduler schedules a given command to be transmitted, the scheduler determines the given point in time at which the requested read data will be scheduled to arrive on the shared memory data bus. In one embodiment, the scheduler adds the waiting time of the given command to the point in time when the scheduler schedules to transmit the given command. In some embodiments, the scheduler generates an identifier as an indication of an entry in the request queue that stores information corresponding to a given command. In other embodiments, the scheduler generates an identifier based on a combination of one or more of the thread identifier and a portion of the target address of the memory request corresponding to the given command. The scheduler stores the association of the identifier with a given point in time. In one embodiment, a table is used. Therefore, the scheduler is able to read data requested based on the arrival on the shared memory data bus at a given point in time rather than based on a tag inserted in a given command or through a packet associated with the requested read data on arrival. Identifies the given command.Referring to FIG. 1, a generalized block diagram of one embodiment of a computing system 100 is shown. As shown in the figure, the clients 110 and 112 send memory requests to the memory controller 120. The memory channel 124 in the memory controller 120 transmits memory traffic between the memory controller 120 and the memory bus 130. Each of the storage 140 and the storage 150 stores data accessed by the clients 110 and 112. In various embodiments, one or more of storage 140 and storage 150 are used by clients 110 and 112 as system storage. In various embodiments, the access latency of the memory 140 differs from the access latency of the memory 150 by at least a threshold amount of time. In some embodiments, the memory 140 and the memory 150 use different data storage technologies, and therefore, the access latency of the memory 140 and the access latency of the memory 150 differ between them by at least a threshold amount of time. In other embodiments, each of memory 140 and memory 150 uses the same data storage technology, but still has access latency that differs by at least a threshold amount of time. For example, in one embodiment, the memory 140 uses the same data storage technology as the memory 150, but the memory 140 uses an onboard cache while the memory 150 does not. Therefore, the access latency is different between the memory 140 and the memory 150 and may differ by a threshold amount of time. 
In other embodiments, one of the memory 140 and the memory 150 may use other configurations and/or other components, but the other may not be used, which results in different access latency between them.For ease of description, the computing system 100 does not show a communication texture, I/O interfaces for input/output (I/O) devices, and any links and interfaces for network connections. In some embodiments, the components of the computing system 100 are individual dies on an integrated circuit (IC), such as a system on a chip (SOC). In other embodiments, the components are individual dies in a system in package (SiP) or multi-chip module (MCM). In some embodiments, the clients 110 and 112 include one or more of a central processing unit (CPU), a graphics processing unit (GPU), a hub for a multimedia engine, and the like. Each of the clients 110 and 112 is one of a variety of computing resources capable of processing applications and generating memory requests.Although a single memory controller 120 is shown, in other embodiments, another number of memory controllers are used in the computing system 100. In various embodiments, the memory controller 120 receives memory requests from the clients 110 and 112, and the scheduler 122 schedules the memory requests and sends the scheduled memory requests to one of the memories 140 and 150 via the memory channel 124. In some embodiments, the scheduler 122 within the memory controller 120 includes control logic that schedules memory requests targeting memory locations in the memory 140 separately from scheduling memory requests targeting memory locations in the memory 150. Thereafter, the scheduler 122 selects between the memory request targeting the memory 140 and the memory request targeting the memory 150. In one embodiment, the scheduler 122 mixes accesses targeting the memory 140 and the memory 150.The control logic for scheduling memory requests in the scheduler 122 uses information such as: the quality of service (QoS) or other priority levels of the memory request, the process or software thread identifier (ID) of the memory request, the age of the memory request, and the The amount of time since the memory request was issued to the memory 140, the amount of time since the memory request was issued to the memory 150, and so on. Therefore, the scheduler 122 supports out-of-order issuance of memory requests. When the scheduler 122 selects a memory request to send to one of the memory 140 and the memory 150, the scheduler 122 sends the selected memory request to the memory channel 124 for transmission.The memory channel 124 interfaces with each of the memory 140 and the memory 150. The memory channel 124 supports a protocol for interfacing with the memory 140 and supports another protocol for interfacing with the memory 150. The protocol determines values for information transmission, such as the number of data transmissions per clock cycle, signal voltage level, signal timing, signal and clock phase, and clock frequency.In various embodiments, the memory bus 130 supports sending data traffic in a single direction within a given amount of time (such as during a given mode in read mode and write mode), and then in another given amount of time Send data traffic in the opposite direction (such as during another mode of read mode and write mode). In one embodiment, the memory bus 130 utilizes a single command bus and a single data bus. 
Therefore, it is possible to schedule the issuance of memory requests to the memory 140 and the memory 150 in a manner that avoids data conflicts on the memory bus 130.As mentioned earlier, in some embodiments, the memory 140 and the memory 150 use different data storage technologies, and therefore have different access latency. As shown, the memory 140 has an access waiting time 132 that differs from the access waiting time 134 of the memory 150 by at least a threshold amount of time. Although a single access latency is shown for each of the memory 140 and the memory 150, in other embodiments, one or more of the memory 140 and the memory 150 have multiple access latency. However, each of the plurality of access waiting times of the memory 140 differs from each of the plurality of access waiting times of the memory 150 by at least a threshold amount of time.In one embodiment, one of the memory 140 and the memory 150 includes one of a variety of dynamic random access memories (DRAM), and the other of the memory 140 and the memory 150 includes a variety of non-volatile ( NV) One of dual in-line memory modules (DIMMs), such as NVDIMM-P. In other embodiments, other memory types with different access latency are used for the memory 140 and the memory 150. For example, in addition to using multiple types of random access memory (RAM) technology and NVDIMM technology, in some embodiments, each of memory 140 and memory 150 includes other examples of data storage technology, such as phase change memory (PCM), ferroelectric memory (FeRAM), magnetoresistive memory (MRAM), resistance memory (ReRAM or RRAM), three-dimensional (3D) cross-point (XPoint) memory, etc. In various embodiments, the difference between the access latency of the memory 140 and the access latency of the memory 150 is higher than the threshold. Therefore, the scheduler 122 includes control logic and sequence elements for issuing memory access commands targeting locations in the memory 140 and the memory 150 in a hybrid manner.In some embodiments, the memory controller 120 includes a command processor for converting each received memory request into one or more commands. In one embodiment, the scheduler 122 determines whether there are two pending memory access commands, such as a first command for the memory 140 and a second command for the memory 150. The scheduler 122 determines whether each of the first command and the second command can be issued without causing a data conflict on the shared memory data bus 130. For example, based on the point in time for issuing the selected command, the access waiting time 132 and the access waiting time 134, the memory controller 120 tracks the point in time when the data will be scheduled to arrive on the shared memory data bus 130. The pending first command and second command may be read access or write access. In some embodiments, the time point is measured in clock cycles. If selecting any one of the first command and the second command does not schedule data conflicts on the shared memory data bus 130, then each of the first command and the second command is still a candidate for issuance. In this case, the scheduler 122 selects a command from the first command and the second command based on the arbitration logic. 
In one embodiment, the arbitration logic uses weighting criteria.In other embodiments, in order to avoid data conflicts on the memory bus 130 regardless of the apparent difference between the access latency 132 and the access latency 134, the scheduler 122 determines that the memory bus 130 is scheduled as the next available service. Set a point in time. In other words, the scheduler 122 determines that the next given point in time for reading response data or writing data to drive on the memory bus 130 has not been scheduled. In some embodiments, the time point is measured in clock cycles. The scheduler 122 also determines whether there is time to schedule the first command for accessing the memory 140 and the second command for accessing the memory 150 to provide data at a given point in time. As mentioned earlier, the command processor converts received memory requests into commands. In one embodiment, one or more of the first command and the second command has one or more previous commands and/or one or more subsequent commands, when the first and second commands can be issued , The preceding command and/or the following command increase the waiting time and delay.If there is enough time for at least one of the first access command and the second access command to provide data at a given point in time when the memory bus 130 is available, the scheduler 122 selects the first memory access command and the second memory access command One of them. The scheduler 122 may use the criteria described earlier, such as priority level, age, etc. Thereafter, the scheduler 122 transmits the selected access command to one of the memory 140 and the memory 150 via the memory channel 124.Referring to Figure 2, a generalized block diagram of one embodiment of a timing diagram 200 is shown. In the illustrated embodiment, memory access commands are shown to be issued at different times on the timeline. The memory access command is issued to one of two different types of memory with different access latency. In various embodiments, the first type of memory (memory type 1) uses a different data storage technology from that of the second type of memory (memory type 2), and therefore, the memory type 1 has an access latency compared to the memory type 2. Access wait time that differs by at least a threshold amount of time.As shown in the figure, three memory access commands labeled A, B, and C are issued at times indicated by tags t1, t2, and t3. These memory access commands are issued to memory type 1. The responses to these memory access commands are shown to arrive at the times indicated by the marks t4, t5, and t6. In some embodiments, the marks on the timeline are equivalent to clock cycles. In other embodiments, the markers on the timeline are equivalent to other time measures indicating a given point in time. In addition to the deterministic access latency with three marks on the timeline, the responses are also shown to arrive in order relative to the order in which the memory access commands A, B, and C are issued.Additionally, another memory access command D is shown to be issued at the time indicated by the mark t7. The memory access command D is issued to memory type 2. The response is shown as being received at the time indicated by the mark t12. The access waiting time of the memory access command D issued to the memory type 2 is greater than the access waiting time of the memory access commands A, B, and C issued to the memory type 1. 
In some embodiments, the access latency of memory type 2 is five marks on the timeline.In the illustrated embodiment, memory type 2 has a second access latency. For example, the memory access command E is issued to the memory type 2 at the time indicated by the mark t13 on the timeline. In some embodiments, the second access latency for memory type 2 is six marks on the timeline. As shown in the figure, the response to the memory access command E is shown to arrive at the time indicated by the mark t19. In some embodiments, for memory type 1 and memory type 2, the read access latency is equal to the write access latency. In other embodiments, for one or more of memory type 1 and memory type 2, the read access latency is different from the write access latency. In the illustrated embodiment, the access commands A-E have different access waiting times 210 and 220, and the commands A-E are shown as being issued separately from each other. However, this release scheme is inefficient.Referring to FIG. 3, a generalized block diagram of another embodiment of a timing diagram 300 is shown. In the illustrated embodiment, memory access commands are shown to be issued at different times on the timeline. The memory access command is issued to one of two different types of memory with significantly different access latency (such as a difference of at least a threshold amount of time). As shown in the figure, the waiting time 310 of the access command issued to the memory type 1 is less than the waiting time 320 of the access command issued to the memory type 2.Similar to the timing diagram 200, in some implementations, the marks on the timeline are equivalent to clock cycles. In other embodiments, the markers on the timeline are equivalent to other time measures that indicate points in time. As shown in the figure, three memory access commands labeled A, B, and C are issued at times indicated by tags t1, t2, and t3. These memory access commands are issued to memory type 1. In addition to the deterministic wait time with three marks on the timeline, the responses to these memory access commands are shown to arrive at the times indicated by the marks t4, t5, and t6, and are relative to issuing the memory access command A, The order of B and C is orderly.The memory access command D is issued to the memory type 2 at the time indicated by the mark t7. Before receiving the response, another memory access command E is issued to the memory type 1 at the time indicated by the mark t8. At the time indicated by the mark t9, it is impossible to issue another memory access command to the memory type 1 without data conflict. In this example, it is known that the access latency of the memory access command issued to the memory type 1 is three marks on the time line, and the access latency of the memory access command issued to the memory type 2 is at least five times the time line. Tags. Therefore, it is known that due to the scheduling of the memory access command D, the memory data bus is not available at the time indicated by the mark t12. If the memory access command is issued to the memory type 1 at the time indicated by the mark t9, a data conflict will occur at t12.Other access commands issued are shown, such as the memory access command F issued to the memory type 2 at time t13 and the memory access commands G and H issued to the memory type 1 at times t14 and t15. In this example, the access latency of the memory access command F is six marks on the timeline. 
Therefore, it is known that the memory data bus is not available at the time indicated by the flag t19 due to the scheduling of the status access command F. If the memory access command is issued to the memory type 1 at the time indicated by the tag t16, a data conflict will occur at t19. Therefore, the scheduler that issues memory access commands to the two types of memories via the memory channel considers the time when the memory data bus is unavailable due to the waiting time of the command in order to avoid data conflicts on the memory data bus.Referring to FIG. 4, a generalized block diagram of another embodiment of a timing diagram 400 is shown. In the illustrated embodiment, the memory access commands 420 and 430 are shown as being issued at different times based on the clock 410. In the illustrated embodiment, the clock period of the clock 410 is used to provide a time measurement to identify a point in time. The memory access command is issued to one of two different types of memory having access latency different from each other by at least a threshold amount of time. In one embodiment, command 420 is issued to a first type of memory (memory type 1), and command 430 is issued to a second type of memory (memory type 2).As shown in the figure, the waiting time of the access command 420 is less than the waiting time of the access command 430. For ease of illustration, the waiting time is not drawn to scale. In some embodiments, the memory type 1 access latency measured from the issuance of the read command to the received response with valid data is on the scale of tens of nanoseconds. In the example shown, the latency is shown as 2 clock cycles. In various embodiments, the memory type 2 access latency measured from the issuance of a read or status command to the received response (which may or may not include valid data) is on the scale of hundreds of nanoseconds. For ease of illustration, the waiting time is shown as 5 clock cycles, not drawn to scale.In various embodiments, a memory request (such as a memory read request or a memory write request) is converted into one or more commands based on accessing the memory. For example, the control logic in DRAM executes complex transactions (such as activation (open) transactions and pre-charging of data and control lines in DRAM) once to access the identified row, and once during the off transaction to store in the row buffer The modified content in the area is put back to the marked line. Each of the different DRAM transactions (such as activation/opening, column access, read access, write access, and precharge/close) has a different corresponding latency. Generally, activation and precharge transactions have significantly higher latency than read access and write access transactions.The dotted lines of the commands shown in the example shown represent possible additional commands issued with the memory access command. For example, activation/turn-on commands and pre-charge/turn-off commands for DRAM may be used, but the commands are not shown in the timing diagram 400. Similarly, for NVDIMM-P, each of the transaction read (X-READ) command, the send read (SREAD) command, and the speculative status read command is usually followed by an extended address (XADR) command, which allows The linear address extended address. These additional commands are not shown specifically, but are represented by the dashed lines of possible arrangements between the commands 420 and 430. 
Therefore, back-to-back access commands are generally not issued on back-to-back clock cycles. A scheduler of a memory controller with a memory channel considers possible additional commands when scheduling memory access commands for issuance.The responses are shown as responses 440, and they are received on a shared single memory data bus. As shown in the figure, the memory access command "READ A" for memory type 1 is issued at clock cycle (CC) 1. In the case of the access latency of two clock cycles in this example, valid response data arrives at CC3. As shown in this example, valid data consumes two clock cycles, such as CC 3 and CC 4. During each clock cycle, an amount of data equal to the width of the data bus is returned to the memory controller. The supported size of the data bus is based on design choices.In one embodiment, the scheduler or other control logic in the memory controller determines that the next given time point at which the memory data bus is scheduled to be available is after CC 4, which is CC 5. The scheduler determines that there is time to schedule memory access commands for memory type 1 and memory access commands for memory type 2. The amount of response data for the memory access command for memory type 1 will not conflict with the arrival of response data for the earlier memory access command for memory type 2. Therefore, the scheduler issues a read command "READ B" for memory type 2 at CC 2 and issues a memory access command "READ C" for memory type 1 at CC 3. In the case of the access latency of two clock cycles in this example, the valid response data of "READ C" arrives at CC 5 and CC 6. In the case of the access latency of five clock cycles in this example, for "READ B", valid response data is scheduled to arrive at CC 7 and CC 8. However, as shown in the figure, the requested data is not yet ready to be retrieved from memory type 2. An indication that the requested data is not yet available is received by the memory controller and used by the scheduler to retry at a later time.The scheduler determines that the next given point in time at which the read response data has not been scheduled to drive on the memory data bus is CC 9. The scheduler determines that there is time to schedule memory access commands for memory type 1 and memory access commands for memory type 2. In order to select the next memory access command to transmit, the scheduler uses information such as: the quality of service (QoS) or other priority level of the memory request, the process or software thread identifier (ID) of the memory request, the age of the memory request, and the The amount of time since the memory access request was issued to the memory type 1, the amount of time since the memory access request was issued to the memory type 2, and so on. In the provided example, the scheduler issues a read access command "READ D" for memory type 2 at CC 4. In the case of an access latency of five clock cycles in this example, valid response data is scheduled to arrive at CC 9 and CC 10.The scheduler determines that the next given point in time when the memory data bus is available is CC 11. The scheduler determines that there is time to schedule memory access requests for memory type 1 and memory access requests for memory type 2. The scheduler selects the next memory access command to transmit based on earlier criteria such as priority level, age, etc. In some embodiments, the scheduler assigns a given weight to each of the criteria and performs a weighted sum. 
The memory access command or state access command with the largest sum is selected for release.In one embodiment, the memory controller receives an indication on another channel or link interface that "READ B" response data is now available from memory type 2. Although the memory access command "READ E" has a higher weighted sum than the transmission read command "SREAD B" corresponding to the earlier read command "READ B", the scheduler determines the value of the memory access command "READ E" The response data volume will conflict with the response data arrival of the read command "READ D" earlier. Therefore, the scheduler issues the send read command "SREAD B" at CC 8, and issues the memory access command "READE" at CC 9. In the case of the access latency of two clock cycles in this example, the valid response data of "READ E" arrives at CC11 and CC12. In the case of an access wait time of five clock cycles for "SREAD B" in this example, valid response data is scheduled to arrive at CC 13 and CC 14 (not shown). Although the timing diagram 400 is described with respect to read access commands, in other embodiments, similar timing diagrams are used for write access commands, where the write data is set on the shared memory data bus, and may differ from the read response data. Or other write access commands have data conflicts.In some embodiments, the received response data includes a tag or other identifier that identifies which command is associated with the response data. In other embodiments, the arrival timing of the request data is used to identify which command is associated with the response data. Therefore, although the requested data arrives out of the order corresponding to the issuance of the commands, the scheduler in the memory controller can track which received data belongs to which command.Referring to FIG. 5, a generalized block diagram of another embodiment of a timing diagram 500 is shown. In the illustrated embodiment, the memory access commands 520 and 530 are shown as being issued at different times based on the clock 510. In the illustrated embodiment, the clock period of the clock 510 is used to provide a time measurement to identify points in time. The memory access command is issued to one of two different types of memory with different access latency. In one embodiment, the command 520 is issued to a first type of memory, which is a conventional DRAM, and the command 530 is issued to a second type of memory, which is an NVDIMM-P. However, other types of memory with different access latency are also possible and envisioned.For ease of illustration, the command wait time is not drawn to scale. In some implementations, the command latency of a conventional DRAM is on the scale of tens of nanoseconds. In the example shown, the latency is shown as 2 clock cycles. In various implementations, the access latency of NVDIMM-P is on the scale of hundreds of nanoseconds. In the example shown, the latency is shown as 7 clock cycles. In various embodiments, a memory request (such as a memory read request) is converted into one or more commands based on accessing the memory. As mentioned earlier, the control logic in DRAM performs complex transactions, such as activation transactions and shutdown transactions. In addition, other signals are generated, such as row address strobe and column address strobe.Similar to the earlier timing diagram 400, the timing diagram 500 is described with respect to read access commands. 
However, in other embodiments, a similar timing diagram is used for the write access command, where the write data is set on the shared memory data bus and may occur with other write data of the read response data or other write access commands. Data conflict. The responses are shown as responses 540, and they are received on a single memory data bus. The scheduler selects the next memory access command to transmit based on earlier criteria such as priority level, age, etc. In some embodiments, the scheduler assigns a given weight to each of the criteria and performs a weighted sum for use when selecting the next command to transmit.As shown in the figure, the scheduler issues a transaction read command "X-READ A" for memory type 2 at CC 2. The extended address command "XADR A" that allows extended addresses for large linear addresses is immediately followed at CC 3. In the case of an access latency of 7 clock cycles in this example, valid response data is scheduled to arrive at CC 9. In some embodiments, the waiting time is measured from the command "XADR A" instead of the command "X-READ A". In various implementations, the requested data consumes multiple clock cycles. However, for ease of explanation, the requested data of the command "X-READ A" consumes a single clock cycle.The scheduler issues a memory access command "READ B" for memory type 1 at CC 3. In the case of the access latency of two clock cycles in this example, valid response data arrives at CC5. As shown in the figure, the activation command "ACTIVATE" is issued at CC 1 in preparation for issuing the command "READ B" at CC 3. The column address strobe (CAS) is asserted at CC 3 to have a logic low value. The row address and column address are provided on the address line labeled pointer 570, aligned with the assertion of the corresponding strobe. As shown in the figure, the requested data of the command "READ B" consumes four clock cycles, such as CC 5, CC 6, CC 7 and CC 8. The scheduler considers the number of clock cycles consumed by the received requested data when determining the next given point in time when the memory data bus is available.In one embodiment, the scheduler determines that the next given point in time when the memory data bus is available is CC 10. The scheduler determines that there is time to schedule memory access commands for memory type 1, but there is no time to schedule memory access commands for memory type 2. As shown in the figure, the earliest point in time to issue the next memory access command for memory type 2 is after the command "XADR A", which is CC 4. In the case of a command wait time of 7 clock cycles, the requested data is scheduled to arrive at CC 11 instead of CC 10. Therefore, the scheduler issues a memory access command "READ C" for memory type 1 at CC 8. In the case of the access latency of two clock cycles in this example, valid response data arrives at CC 10.As shown in the figure, the precharge command "PRECHARGE" and the activation command "ACTIVATE" are issued at CC 4 and CC 6, respectively, in preparation for issuing the command "READ C" at CC 8. The "BANK" data on the address line labeled pointer 570 indicates the bank to be closed. In some embodiments, the received response data includes a tag or other identifier that identifies which command is associated with the response data. In other embodiments, the arrival timing of the request data is used to identify which command is associated with the response data. 
Therefore, although the requested data arrives out of the order corresponding to the issuance of the commands, the scheduler in the memory controller can track which received data belongs to which command.Referring to Figure 6, a generalized block diagram of another embodiment of a computing system 600 is shown. As shown, the computing system 600 includes a communication texture 620 between each of the clients 610 and the memory controller 630. The memory controller 630 includes a memory channel 638 for transferring memory traffic between the memory controller 620 and the memory 670 and the memory 680 via the memory bus 650. Each of the storage 670 and the storage 680 stores data accessed by the client 610. In some embodiments, the components of system 600 are individual dies on an integrated circuit (IC), such as a system on a chip (SOC). In other embodiments, the components are individual dies in a system in package (SiP) or multi-chip module (MCM). For ease of description, the power controller, interrupt controller, network link interface, etc. are not shown.In various embodiments, the memory bus 650 utilizes a bidirectional shared bus structure. In various embodiments, the memory 670 and the memory 680 use different data storage technologies, and therefore, the memory 670 has an access latency that differs from the access latency of the memory 680 by at least a threshold amount of time. In various implementations, one or more of memory 670 and memory 680 are used by client 610 as system memory.In one embodiment, when one of the memory 670 and the memory 680 is one of multiple types of DRAM, an example of the protocol for the corresponding interface between the memory channel 638 and the memory controller 630 is dual Data rate (DDR) type of protocol. The protocol determines values for information transfer, such as the number of data transfers per clock cycle, signal voltage level, signal timing, signal and clock phase, and clock frequency. Examples of protocols include DDR2 SDRAM, DDR3 SDRAM, GDDR4 (Graphic Double Data Rate, Version 4), SDRAM, GDDR5, SDRAM, GDDR6, HBM2, etc. The memory controller 630 includes control circuits for interfacing to the memory channel 638 and other memory channels (not shown) and following corresponding protocols.Although a single memory controller 630 is shown, in other embodiments, another number of memory controllers are used in the computing system 600. As shown in the figure, the memory controller 630 includes a request queue 632 for queuing memory access requests received from the client 610 via the communication texture 620. The memory controller 630 also has a response queue 634 for storing responses received from the memory 670 and the memory 680. In one embodiment, the request queue 632 includes a separate read queue for each of the memory 670 and the memory 680 for storing memory read requests. In addition, the request queue 632 includes a separate write queue for each of the memory 670 and the memory 680 for storing memory write requests. In some embodiments, when one or more of the memory 670 and the memory 680 includes a data storage technology that provides a miss status as a response to an access, the memory controller 630 also includes a miss queue 639. 
In one embodiment, one of the memory 670 and the memory 680 is an NVDIMM-P that provides a miss status response.In some embodiments, the request queue 632 includes one or more queues for storing received memory access requests, and a queue for storing scheduled memory access commands converted from the received requests and selected from the one or more queues. Separate queue. The scheduler 636 includes control logic for selecting memory access commands stored in the request queue 632 to be issued to the memory 670 and the memory 680 out of order. Therefore, the memory controller 630 supports out-of-order issuance of memory access commands to the memory 670 and the memory 680.In various embodiments, the scheduler 636 in the memory controller 130 schedules the issuance of the stored memory access commands based on: quality of service (QoS) or other priority information, age, process or thread identifier (ID) , The amount of time since the issuance of the memory access command to the memory 670, the amount of time since the issuance of the memory access command to the memory 680, and the relationship with other stored requests (such as targeting the same memory channel, at the same level) Target, target the same memory bank, and/or target the same page). In some implementations, the scheduler 636 assigns a given weight to each of the criteria and performs a weighted sum. The memory access command or state access command with the largest sum is selected for release.In various embodiments, the communication texture 620 transfers traffic back and forth between the client 610 and the memory controller 630, and includes an interface for supporting corresponding communication protocols. In some embodiments, the communication structure 620 includes at least a queue for storing requests and responses, selection logic for arbitrating between received requests before sending requests across the internal network, and for constructing and processing packets. The logic for decoding and the logic for choosing a route for the packet.In the illustrated embodiment, the client 610 includes a central processing unit (CPU) 612, a graphics processing unit (GPU) 614, and a hub 616. The hub 616 is used to communicate with the multimedia engine 618. The CPU 612, GPU 614, and multimedia engine 618 are examples of computing resources capable of processing application programs. Although not shown, in other embodiments, the client 610 includes other types of computing resources. In some embodiments, each of the one or more processor cores in the CPU 612 includes circuitry for executing instructions in accordance with a given selected instruction set architecture (ISA). In various embodiments, each of the processor cores in the CPU 612 includes a superscalar multi-threaded microarchitecture for processing the instructions of a given ISA.In one embodiment, GPU 614 includes a highly parallel data microarchitecture with a large number of parallel execution channels. In one embodiment, the micro-architecture uses a single instruction multiple data (SIMD) pipeline for parallel execution channels. The multimedia engine 618 includes a processor for processing audio data and visual data of a multimedia application. In some embodiments, the address space of the computing system 600 is between at least the CPU 612, GPU 614, and hub 616, as well as one or more other components (such as input/output (I/O) peripherals (not shown)) and other types Of computing resources. 
The memory map is maintained for determining which addresses are mapped to which component, and therefore to which one of the CPU 612, GPU 614, and hub 616 should be routed to memory requests for specific addresses.In various implementations, one or more of the memory 670 and the memory 680 are filled with data from the disk storage 662 through the I/O controller and bus 660 and the memory bus 650. The corresponding cache fill line with the requested block is transferred from one or more of the memory 670 and the memory 680 to the corresponding one of the cache memory subsystems in the client 610 in order to complete the original memory access request. Cache fill lines are placed in one or more levels of cache. In one embodiment, the disk storage 662 provides non-volatile secondary storage of data. In one embodiment, the disk storage 662 includes one or more hard disk drives (HDD). In other embodiments, the disk storage 662 includes a solid state disk (SSD).Referring to Figure 7, a generalized block diagram of one embodiment of the memory controller 700 is shown. In the illustrated embodiment, the memory controller 700 includes: an interface 710 to the client via a communication structure; a queue 720 for storing received memory access requests and received responses; a control unit 750; and via a memory data bus and memory Channel to multiple memory devices (each using a different memory technology) interface 780. Each of the interfaces 710, 780, and 782 supports a corresponding communication protocol. In one embodiment, the interface 780 is an interface to a memory command bus for sending a memory access command corresponding to a memory request received via the interface 710 to a memory device that includes a data storage technology of the first memory type. In one embodiment, the interface 782 is an interface to the memory data bus for transferring data between the memory controller 700 and another memory device including a second memory that is different from the first memory type Types of data storage technology. In various embodiments, the access latency of the first memory type differs from the access latency of the second memory type by at least a threshold amount of time.In the illustrated embodiment, the queue 720 includes a request queue 730, a response queue 740, and a miss queue 742. In one embodiment, the queue 720 includes a first read queue 732 for storing received read requests targeting a first memory type, and a first read queue 732 for storing received read requests targeting a second memory type. The second read queue 734. Although two read queues are shown to receive read requests targeting two different memory types, in other embodiments, another number of read queues may be used to receive another number of different memory types. The target's read request. In addition, the queue 720 includes a first write queue 736 for storing received write requests targeting the first memory type, and a second write queue 736 for storing received write requests targeting the second memory type. Into the queue 738. In some embodiments, when one or more of the first memory type and the second memory type includes a data storage technology that provides a miss status as a response to an access, the queue 720 also includes a miss queue 742. In one embodiment, one of the first memory type and the second memory type is an NVDIMM-P that provides a miss status response. 
In one embodiment, the queue 720 includes a queue 739 for storing scheduled memory access requests selected from one or more of the queues 732-738 or a unified queue (if a unified queue is used).In some embodiments, the read scheduler 752 includes arbitration logic for selecting read requests out of order from the first read queue 732 and for selecting read requests out of order from the second read queue 734. In one embodiment, when the corresponding request is available for scheduling from the first read queue 732 or the second read queue 734 in a given clock cycle, the read scheduler 752 reads from the first read queue 732 or The request is selected in the second read queue 734. In some embodiments, the read scheduler 752 schedules read requests to be issued out of order to one of the first memory type and the second memory type based on: quality of service (QoS) or other priority information, age, Process or thread identifier (ID), and relationship to other stored requests (such as targeting the same memory channel, targeting the same level, targeting the same memory bank, and/or targeting the same page ).In order to avoid data conflicts on the memory data bus regardless of the multiple deterministic access latency of the first memory type and the second memory type, in one embodiment, the read scheduler 752 determines the next given memory data bus available Point in time. In some embodiments, the time point is measured in clock cycles. The read scheduler 752 determines whether there is enough time to schedule the first memory access command corresponding to the selected read request stored in the first read queue 732 to provide response data at a given point in time. In addition, the read scheduler 752 also determines whether there is enough time to schedule the second memory access command corresponding to the selected read request stored in the second read queue 734 to provide response data at a given point in time. In other words, the read scheduler 752 determines whether the new memory access command received by the first read queue 732 or the second read queue 734 can be scheduled for issuance to the first memory device or the second memory device, so that the A response to a new memory access command is received on the memory data bus at a given point in time. In various embodiments, a given point in time is the next available point in time where the memory data bus is not scheduled so that data is driven on the memory data bus and has not been considered for scheduling.In some embodiments, although the access latency of one or more of the first memory type and the second memory type is non-deterministic, the response has a deterministic latency. After the deterministic waiting time, the response is returned with an indication indicating whether valid data is included in the response. If the response does not include valid data, try again later. Therefore, the memory access command is stored in the miss queue 742 for retry later. As mentioned earlier, sometimes other commands are used in addition to memory access commands. These other commands also add the waiting time to the waiting time of the memory access command.If there is enough time to issue at least one of the first access command and the second access command to provide response data on the memory data bus at a given point in time, the read scheduler 752 selects the first memory access command and the second access command. One of the memory access commands. 
The scheduler 752 can use the criteria described earlier, such as priority level, age, etc. In addition, weighted values can be used. In one embodiment, the read scheduler 752 places the selected access command in the queue 739 before sending the selected access command to the corresponding memory type via the memory channel. To determine whether a new pending memory access command stored in either the first read queue 732 or the second read queue 734 can be scheduled for issuance at a given point in time, in one embodiment, the read The fetch scheduler 752 determines that the response waiting time of the new memory access command is N clock cycles, where N is an integer. The read scheduler 752 identifies an earlier point in time corresponding to N clock cycles before a given point in time, and determines whether the memory command bus is available at the earlier point in time.If the read scheduler 752 determines that there is enough time to schedule the above new memory access command, the read scheduler 752 schedules the new memory access command for issuance at an earlier point in time, and stores the memory data bus at a given time. Click the unavailable indication. In some embodiments, the bit vector is stored in a register to indicate when the memory data bus is available and when the memory data bus is unavailable. For example, in various implementations, each bit in the bit vector corresponds to a specific time slot. If the scheduler determines to deliver data during one or more given time slots (for example, to transfer write data to memory, or to retrieve read data from memory), the scheduler sets one or more of the vector A corresponding time slot bit to indicate that the data bus is scheduled to be busy at that time. For example, in some embodiments, a bit with a value of '0" indicates that no data is scheduled to be on the data bus at that time (ie, the data bus is not busy). In such embodiments, the bit of the time slot is set Causes the bit to have the value "1." In other embodiments, these values can be reversed so that "0" indicates a busy period on the data bus, and a "1" indicates a non-busy period. In this embodiment, the set bit Will cause the bit to have the value "1". By referring to the bit vector, the scheduler can quickly determine whether a given time slot is available for scheduling new activities. In one embodiment, register storage is used to indicate which points in time have not been considered for scheduling And indications of which points in time have been considered for scheduling. In various embodiments, these stored indications can be used to determine other given points in time for future scheduling commands for issuance.In some embodiments, in order to avoid data conflicts on the memory data bus regardless of the multiple deterministic access latency of the first memory type and the second memory type, compared with the next given point in time when the memory data bus is initially determined to be available , The read scheduler 752 determines the next point in time when the memory command bus is available. Likewise, in some embodiments, the time point is measured in clock cycles. In some embodiments, the read scheduler 752 determines each of the different types stored in the first read queue 732 and the second read queue 734 by adding the corresponding waiting time to the next point in time when the memory command bus is available. 
The corresponding given point in time of the pending memory access command.To determine whether a new pending memory access command stored in either the first read queue 732 or the second read queue 734 can be scheduled for issuance at the next point in time when the memory command bus is available, a In an embodiment, the read scheduler 752 determines that the response latency of the new memory access command is N clock cycles, where N is an integer. The read scheduler 752 identifies a later given time point corresponding to N clock cycles after the time point when the memory command bus is available. Thereafter, the read scheduler 752 determines whether the memory data bus is available at a given later point in time.In some embodiments, the read scheduler 752 uses the bit vector stored as described earlier to determine whether to address one or more pending memory accesses stored in the first read queue 732 and the second read queue 734. Whether the memory data bus is available for each of the corresponding one or more given time points of each of the commands. If the memory data bus is only available for a single pending memory access command during the corresponding given point in time, the read scheduler 752 schedules the single pending memory access command at the next point in time when the memory command bus is available. If the memory data bus is available for multiple pending memory access commands during the corresponding given point in time, the read scheduler 752 selects one of the pending memory access commands based on the criteria described earlier (such as priority level, age, etc.) One of them to launch. The read scheduler 752 schedules the selected pending memory access commands at the next point in time when the memory command bus is available.The write scheduler 754 includes selection logic for the first write queue 736 and the second write queue 738 similar to that used by the read scheduler 752. In various embodiments, the write scheduler 754 also considers data conflicts caused by data being driven on the shared memory data bus. The write scheduler 754 also uses the control logic used by the read scheduler 752 to implement the decision algorithm. In one embodiment, the response scheduler 756 includes similar logic for issuing responses to clients out of order based on priority. In some embodiments, the received response data includes a tag or other identifier used by the response scheduler 756 to identify which command stored in the first read queue 732 or the second read queue 734 is associated with the response data. In other embodiments, the response scheduler 756 uses the arrival timing of the request data on the memory data bus to identify which command is associated with the response data. Therefore, although the request data arrives out of the order corresponding to the issuance of the command, the response scheduler 756 can track which received data belongs to which command.In some embodiments, when the read scheduler 752 schedules a given command to be transmitted, the response scheduler 756 determines the given point in time at which the requested read data is scheduled to arrive on the shared memory data bus. In one embodiment, the response scheduler 756 adds the waiting time for the given command to the point in time when the read scheduler 752 schedules to transmit the given command. In some embodiments, the response scheduler 756 generates an identifier. 
In some embodiments, the identifier is an indication of an entry in the request queue that stores information corresponding to a given command. In other embodiments, the identifier is a combination of one or more of the thread identifier and a portion of the target address of the memory request corresponding to the given command. The response scheduler 756 stores the association of the identifier with a given point in time. In one embodiment, a table is used. Therefore, the response scheduler 756 is able to read data requested based on the arrival on the shared memory data bus at a given point in time rather than based on the tag inserted in the given command or through the data associated with the requested read data that arrived. Group to identify a given command.In some embodiments, the control register 770 stores an indication of the current mode. For example, off-chip memory data buses and memory devices support read mode or write mode at a given time. Therefore, traffic is routed in a given single direction during the current mode, and changes direction when the current mode is changed after the data bus turnaround wait time. In various embodiments, the control register 770 stores the threshold number of read requests (read burst length) to be sent during the read mode. In some embodiments, the control register 770 stores the weights of the criteria used by the read scheduler 752 and the write scheduler 754 to select the requests stored in the queues 732-738 for transmission.Referring to Figure 8, one embodiment of a method 800 for scheduling memory requests for issuance to two different memory types is shown. For discussion purposes, the steps in this embodiment (and in FIGS. 9-14) are shown in sequential order. However, it should be noted that in various embodiments of the described method, one or more of the described elements are performed simultaneously, performed in a different order than shown, or omitted altogether. Other additional elements can also be implemented as needed. Any of the various systems or devices described herein are configured to implement method 800.One or more clients in the node execute computer programs or software applications. The computing resource determines that a given memory access request is missed in the cache memory subsystem within the given client among one or more clients. The client sends a memory access request via a memory controller to a system memory implemented by two different memories, the memory controller having a memory channel connected to each of the two different memories. The difference between the one or more access latency of the first type of memory and the one or more access latency of the second type of memory exceeds a threshold amount of time. When a memory request for a first type of memory connected to a given memory channel is received, the memory request is stored (block 802). When a memory request for a second type of memory connected to a given memory channel is received, the memory request is stored (block 804).Based at least on the priority and target of the memory request, the memory requests for the first type of memory are marked to be issued out of order (block 806). Based at least on the priority and target of the memory request, the memory requests for the second type of memory are marked to be issued out of order (block 808). Therefore, the memory controller supports out-of-order issuance for each of the first memory and the second memory. 
The memory request is scheduled for publishing in a manner that provides response data at a given point in time (block 810). For example, scheduling memory requests in a hybrid manner without data conflicts on the shared memory data bus regardless of different access latency.In various embodiments, the scheduler or other control logic in the memory controller determines whether there are two pending memory access commands, such as a first command for the first memory type and a second command for the second memory type. command. The scheduler determines whether each of the first command and the second command can be issued without causing data conflicts on the shared memory data bus. For example, in addition to the access waiting time of each of the first type of memory and the second type of memory, based on the point in time for issuing the selected one of the first command and the second command, the memory controller tracks The point in time when the read response data or write data is scheduled to arrive on the shared memory data bus. In some embodiments, the time point is measured in clock cycles.If selecting any one of the first command and the second command does not schedule data conflicts on the shared memory data bus, then each of the first command and the second command is still a candidate for issuance. In this case, the scheduler selects a command from the first command and the second command based on the arbitration logic. In other embodiments, the determination of whether to issue the first command or the second command starts with the selection of a specific given point in time when the read response data or the write data is driven on the shared memory data bus.Turning now to FIG. 9, one embodiment of a method 900 for scheduling memory requests for issuance to two different memory types is shown. Identify the next given point in time at which the read response data is driven on the memory data bus (block 902). For example, when determining the next given point in time, consider both the access waiting time of each issued memory access command and state access command and the scheduled amount of requested data to be returned. In some embodiments, the time point is measured in clock cycles.If the read response data has been scheduled to arrive for a given point in time (the "yes" branch of conditional block 904), then the control flow of method 900 returns to block 902 where the next given point in time is identified. For example, consider the next clock cycle after the currently selected clock cycle. Alternatively, a certain count is added to the current clock cycle, said count being equal to the given number of clock cycles consumed by scheduling read request data to arrive from one of two different memories. If the read response data is not scheduled to arrive for a given point in time ("No" branch of conditional block 904), then it is determined whether there is enough time to schedule a memory access command for the first memory type to provide at the given point in time Response data (block 906). Next, it is determined whether there is enough time to schedule a memory access command for a second memory type different from the first memory type to provide response data at a given point in time (block 908).In some embodiments, it is also determined whether there is enough time to schedule a state access command for the second memory type to provide response data at a given point in time (block 910). 
In some embodiments, the access latency of the state access command is different from the access latency for the memory access command of the second memory type. A command is selected from candidate commands that can provide response data at a given point in time (block 912). In various embodiments, the scheduler selects the next memory access command or status access command to transmit based on the criteria described earlier (such as priority level, age, etc.). The selected command is scheduled for release at a point in time that allows the selected command to provide response data at a given point in time (block 914). For example, when scheduling memory access commands and status access commands for transmission, a scheduler of a memory controller with a memory channel considers possible additional commands and their corresponding waiting times for preparing the selected command for transmission.As described above, the method 900 describes the steps of avoiding data conflicts on the memory data bus regardless of the multiple deterministic access latency of the first memory type and the second memory type. However, as mentioned earlier, in other embodiments, the scheduler of the memory controller with the memory channel determines the next time point at which the memory command bus is available compared to the next given point in time when the memory data bus is initially determined to be available. . In some embodiments, the time point is measured in clock cycles. In some embodiments, the scheduler determines the corresponding given point in time for each different type of pending memory access command by adding the corresponding waiting time to the next point in time when the memory command bus is available.To determine whether a new pending memory access command can be scheduled for issuance at the next point in time when the memory command bus is available, in one embodiment, the scheduler determines that the response latency of the new memory access command is N clock cycles, where N is an integer. The scheduler identifies a later given point in time corresponding to N clock cycles after the point in time when the memory command bus is available. Thereafter, the scheduler determines whether the memory data bus is available at a given later point in time.If the memory data bus is only available for a single pending memory access command during the corresponding given point in time, the scheduler schedules the single pending memory access command at the next point in time when the memory command bus is available. If the memory data bus is available for multiple pending memory access commands during the corresponding given point in time, the scheduler selects one of the pending memory access commands based on the criteria described earlier (such as priority level, age, etc.) emission. The scheduler schedules the selected pending memory access commands at the next point in time when the memory command bus is available.The following description of the methods 1000-1200 describes the steps for initially determining the next given point in time when the memory data bus is available, and subsequently determining the earlier point in time when the memory access command is scheduled to be issued on the memory command bus. 
However, in various other embodiments, as described above, the scheduler determines the next point in time when the memory command bus is available, and then determines the later point in time when the read response data is scheduled to arrive on the memory data bus without conflict. Although the steps in methods 1000-1200 are described with respect to read access commands, in other embodiments, similar logic and steps are used for write access commands, where the write data is set on the shared memory data bus and may A data conflict occurs with the read response data or other write data of other write access commands being driven on the shared memory data bus.Turning now to Figure 10, one embodiment of a method 1000 for scheduling memory requests for issuance to two different memory types is shown. In order to select an access command and schedule the access command to be issued at a time point that allows the selected access command to provide response data at a target given time point, a specific timing value is evaluated. In some embodiments, the following steps are performed after block 914 of method 900 (of FIG. 9). A first amount of time between a given point in time for the issuance of the command and the most recent point in time for the scheduled first access command for the first memory type is determined (block 1002). The waiting time of any necessary additional commands to prepare for the possible next issuance of the access command of the first memory type is added to the first waiting time of the access command of the first memory type (block 1004). Similar steps are performed for the access command for the second memory type. For example, a second amount of time between a given point in time and the most recent point in time for the scheduled second access command for the second memory type is determined (block 1006). The waiting time for any necessary additional commands to prepare for the possible next issuance of the access command for the second memory type is added to the second waiting time for the access command for the second memory type (block 1008).A third amount of time between the given point in time and the most recent point in time for the scheduled third access command for the second memory type is determined (block 1010). The waiting time for any necessary additional commands to prepare for the possible next issuance of the third access command for the second memory type is added to the third waiting time for the third access command for the second memory type (box 1012). Each of the first amount of time, the second amount of time, and the third amount of time is compared to a corresponding one of the first waiting time, the second waiting time, and the third waiting time (block 1014).Turning now to FIG. 11, one embodiment of a method 1100 for scheduling memory requests for issuance to two different memory types is shown. In order to select an access command and schedule the access command for issuance at a time point that allows the selected access command to provide response data at a target given point in time, a specific comparison is made for the time series value. In some embodiments, the following steps are performed after block 1014 of method 1000 (of FIG. 10).If the first waiting time is not greater than the first amount of time (the "no" branch of conditional block 1102), the first memory access command for the first memory type is inserted into the candidate command set for issuance (block 1104). 
In other words, if the cumulative waiting time of the memory access command for the first memory type and any additional commands used to prepare the memory access command for issuance is less than or equal to the last issuance of any command for the first memory type and Given the amount of time between points in time, there is enough time to issue a memory access command for the first memory type. For example, referring again to the timing diagram 500 (of FIG. 5), at least four clock cycles are required between the issuance of "READ C" at CC 8 and the completion of the issuance of "READ B" before CC 4.If the first waiting time is greater than the first amount of time ("Yes" branch of conditional block 1102), the first memory access command for the first memory type is removed from consideration as a candidate command for issuance (block 1106 ). Similar steps are performed for the second memory access command for the second memory type. For example, if the second waiting time is not greater than the second amount of time (the "No" branch of conditional block 1108), the second memory access command for the second memory type is inserted into the candidate command set for issuance (block 1110) . Otherwise, if the second waiting time is greater than the second amount of time ("No" branch of conditional block 1108), the second memory access command for the second memory type is removed from the set of candidate commands for issuance (block 1112 ).Similar steps are performed for the third memory access command for the second memory type. However, in some embodiments, it is checked whether the requested read data has been returned for the corresponding original memory access command. Briefly referring to the timing diagram 400 (of FIG. 4) again, since the requested read data is not returned for the original transaction read command "READ B", the read command "SREAD B" is issued. At CC 7, the requested read data was scheduled to arrive, but it did not return from the second memory type. However, since the requested read data is returned at the scheduled given time point at CC 9, no subsequent read command is issued for the read command "READ D". In some embodiments, the memory controller receives an indication on another channel or link interface indicating whether the response data of the read access command is now available for the specific memory type of the first memory type and the second memory type. In other embodiments, the memory controller issues a speculative read command to determine whether the response data is ready.If the third waiting time is not greater than the third amount of time (the "No" branch of the condition block 1114), and it is determined that the corresponding response data has not been returned (the "No" branch of the condition block 1116), it will be used for the second memory type The third memory access command is inserted into the candidate command set for issuance (block 1118). If the third waiting time is not greater than the third amount of time (the "No" branch of the condition block 1114), and it is determined that the corresponding response data has been returned (the "Yes" branch of the condition block 1116), it will be used for the second memory type Three memory access commands are removed from the candidate command set for issuance (block 1120). 
Similarly, if the third waiting time is greater than the third amount of time ("Yes" branch of conditional block 1114), the third memory access command for the second memory type is removed from the set of candidate commands for issuance (block 1120).Turning now to FIG. 12, one embodiment of a method 1200 for scheduling memory requests for issuance to two different memory types is shown. In order to select an access command and schedule the access command for issuance at a time point that allows the selected access command to provide response data at a target given point in time, arbitration is performed between the set of qualified candidate commands. In some embodiments, the following steps are performed after the steps of method 1100 (of FIG. 11). A weight is assigned to the criteria used to select commands from the set of candidate commands for issuance (block 1202).As mentioned earlier, the standard includes one or more of the following: QoS or other priority information, age, process or thread identifier (ID), amount of time since the memory access command was issued to the first memory type, and The amount of time since a memory access command or a state access command was issued to the second memory type. In some embodiments, the programmable control and status register stores the weights assigned to the selected criteria. The candidate command set for issuance is determined (block 1204). In one embodiment, the command is qualified after the steps of the previous method 900-1100. If the set contains a single command (the "yes" branch of conditional block 1206), the single command is selected for publishing (block 1208).If the set contains multiple commands (the "no" branch of conditional block 1206), a single command is selected from among multiple candidates based on the weighting criterion (block 1210). As mentioned earlier, in some implementations, the scheduler assigns a given weight to each of the criteria and performs a weighted sum. The memory access command or state access command with the largest sum is selected for release. The total wait time of the selected command including any necessary additional commands to prepare for the issue of the selected command is subtracted from the point in time when the response data is scheduled to arrive (block 1212). The additional command and the selected access command are scheduled to be issued at the determined point in time found by performing the subtraction (block 1214).Turning now to FIG. 13, an embodiment of a method 1300 for identifying read response data arriving out of order from two memories with different access latency is shown. In various embodiments, one or more of the memories is a type of memory that responds with a deterministic response time. In this embodiment, the response includes the requested data, or the response indicates that the data is not currently ready. In some embodiments, this memory type is NVDIMM-P. As shown in the embodiment of FIG. 13, receiving a memory access command (e.g., a read request) is an indication of readiness issued to the memory (block 1302). Then, it is determined whether the memory access command already has an assigned identifier (block 1304). If there is no identifier yet (the "no" branch of conditional block 1306), then an identifier is generated or otherwise assigned for the memory access command (block 1308). In some embodiments, the identifier is an identification of an entry in the request queue that stores information corresponding to the memory access command. 
In other embodiments, the identifier is a combination of one or more of the thread identifier and a portion of the target address of the memory request corresponding to the given access command.If the command already has an identifier (the "yes" branch of conditional block 1306), the control flow of method 1300 moves to block 1310, where the first point in time at which the memory access command is scheduled to be transmitted is determined (block 1310). For example, the scheduler can provide this information when the scheduler selects a memory access command for publishing.In addition, the response waiting time for the memory access command is determined (block 1312). In various embodiments, in addition to the state access command for the second memory type, the memory access command for each of the first memory type and the second memory type of the memory also has a deterministic latency. However, each command may have a waiting time different from that of another command. Then, the identifier is associated with a second point in time corresponding to the time of expected response to the command (block 1314). For example, if the time for the command to be issued is T1, and it is determined that the waiting time is equal to 100 cycles (or some other measure of waiting time), then the second time point is equal to T1+100. In various embodiments, when the memory access command is delivered to the memory, the identifier is not included with the memory access command. In contrast, a time point (such as a specific clock cycle) at which the response to the command is scheduled is used to identify the access command corresponding to the received response.Referring to FIG. 14, an embodiment of a method 1400 for identifying response data arriving out of order from two different memory types is shown. In order to match the received read data with the corresponding access command without inserting a tag or other identifier in the issued command, the control logic in the response scheduler executes the following steps. In some embodiments, the following steps are performed after the steps of method 1300 (of FIG. 13).The identifier and time point are maintained for receiving the read response data of the issued command (block 1402). If a given point in time for receiving the read response data has been reached (the "yes" branch of conditional block 1404), then a given command corresponding to the reached point in time is identified based on the point in time (block 1406). In some embodiments, numbered clock cycles are used to indicate points in time. In other embodiments, another count or measure of time is used. In various embodiments, a given point in time is used to index into the table to determine the identifier associated with the corresponding command. For example, in some embodiments, the identifier is an indication of an entry in the request queue that stores information corresponding to the memory access command. In other embodiments, the identifier is a combination of one or more of the thread identifier and a portion of the target address of the memory request corresponding to the given access command.If valid data is received at a given point in time (the "yes" branch of conditional block 1408), the requester who generated the given command is identified (block 1410), the valid data is sent to the identified requester (block 1412), and The given command is marked as completed (block 1414). 
If no valid data is received at a given point in time (the "no" branch of conditional block 1408), then the given command remains unresolved (block 1416).In various embodiments, program instructions of a software application are used to implement the previously described methods and/or mechanisms. The program instructions describe the behavior of the hardware in a high-level programming language (such as C). Alternatively, a hardware design language (HDL) such as Verilog is used. The program instructions are stored on a non-transitory computer-readable storage medium. Many types of storage media are available. The storage medium can be accessed by the computing system during use to provide the computing system with program instructions and accompanying data for program execution. The computing system includes at least one or more memories and one or more processors configured to execute program instructions.It should be emphasized that the above-mentioned embodiments are only non-limiting examples of implementations. Once the above disclosure is fully understood, many changes and modifications will become apparent to those skilled in the art. The following claims are intended to be interpreted as encompassing all such changes and modifications. |
Methods and arrangements are provided for significantly reducing electron trapping in semiconductor devices having a polysilicon feature and an overlying dielectric layer. The methods and arrangements employ a nitrogen-rich region within the polysilicon feature near the interface to the overlying dielectric layer. The methods include selectively implanting nitrogen ions through at least a portion of the overlying dielectric layer and into the polysilicon feature to form an initial nitrogen concentration profile within the polysilicon feature. Next, the temperature within the polysilicon feature is raised to an adequately high temperature, for example using rapid thermal anneal (RTA) techniques, which cause the initial nitrogen concentration profile to change due to the migration of the majority of the nitrogen towards either the interface with the overlying dielectric layer or the interface with an underlying layer. Consequently, the polysilicon feature has a first nitrogen-rich region near the interface to the overlying dielectric layer and a second nitrogen-rich region near the interface to the underlying layer. The migration of nitrogen further forms a contiguous reduced-nitrogen region located between the first nitrogen-rich region and the second nitrogen-rich region. The contiguous reduced-nitrogen region has a lower concentration of nitrogen than does the first nitrogen-rich region and the second nitrogen-rich region. The first nitrogen-rich region has been found to reduce electron trapping within the polysilicon feature. Thus, for example, in a non-volatile memory device wherein the polysilicon feature is a floating gate, false programming of the memory device can be significantly avoided by reducing the number of trapped electrons in the floating gate. |
What is claimed is:1. A method for forming a semiconductor device, the method comprising:forming a first dielectric layer;forming a first gate on the first dielectric layer;forming a second dielectric layer on the first gate;forming a silicon nitride film on the second dielectric layer; and thenforming a first nitrogen-rich region within the first gate and substantially adjacent to the first dielectric layer, and a second nitrogen-rich region within the first gate and substantially adjacent the second dielectric layer, whereinthe silicon nitride film is formed on the second dielectric layer prior to the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate.2. The method as recited in claim 1, wherein the step of forming a second dielectric layer on the first gate includes forming a first silicon dioxide film on the first gate.3. The method as recited in claim 2, wherein the step of forming a second dielectric layer on the first gate further includes forming a silicon nitride film on the first silicon dioxide film prior to the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate.4. The method as recited in claim 3, wherein the step of forming a second dielectric layer on the first gate further includes forming a second silicon dioxide film on the silicon nitride film prior to the step of the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate.5. A method for forming a semiconductor device, the method comprising:forming a first dielectric layer;forming a first gate on the first dielectric layer;forming a second dielectric layer on the first gate; and thenforming a first nitrogen-rich region within the first gate and substantially adjacent to the first dielectric layer, and a second nitrogen-rich region within the first gate and substantially adjacent the second dielectric layer, wherein the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate further comprises:implanting nitrogen ions through the second dielectric layer and into the first gate, the implanted nitrogen ions forming a first nitrogen concentration profile within the first gate; andcausing the first nitrogen concentration profile to be altered to form a second nitrogen concentration profile within the first gate, the second nitrogen concentration profile comprising the first nitrogen-rich region, the second nitrogen-rich region and a contiguous reduced-nitrogen region located between the first nitrogen-rich region and the second nitrogen-rich region, the contiguous reduced-nitrogen region having a lower concentration of nitrogen than the first nitrogen-rich region and the second nitrogen-rich region.6. The method as recited in claim 5, wherein the step of causing the first nitrogen concentration profile to be altered further comprises causing the first nitrogen-rich region to include between about 0.01% and about 1% atomic percentage of nitrogen.7. The method as recited in claim 6, wherein the step of the step of causing the first nitrogen concentration profile to be altered further comprises causing the second nitrogen-rich region to include between about 0.01% and about 1% atomic percentage of nitrogen.8. 
The method as recited in claim 5, wherein the step of causing the first nitrogen concentration profile to be altered to form the second nitrogen concentration profile within the first gate further comprises causing the lower concentration of nitrogen in the contiguous reduced-nitrogen region to include less than about 0.001% atomic percentage of nitrogen.9. The method as recited in claim 5, wherein the step of implanting nitrogen ions through the second dielectric layer and into the first gate uses an ion implantation energy of between about 10 and about 30 KeV to provide a dosage of between about 1*10<14 > and about 1*10<16 > nitrogen ions/cm<2> .10. The method as recited in claim 5, wherein the step of causing the first nitrogen concentration profile to be altered to form a second nitrogen concentration profile within the first gate further includes applying thermal energy to the first gate.11. The method as recited in claim 10, wherein the step of applying thermal energy to the first gate causes an internal temperature within the first gate of between about 900 and about 1100[deg.] C.12. A method for nitrogen doping a polysilicon layer, the method comprising:forming a polysilicon layer in a semiconductor device, the polysilicon layer sharing a first interface with an underlying dielectric layer and a second interface with an overlying dielectric layer;implanting nitrogen through the overlying dielectric layer and substantially into a polysilicon layer; andheating the polysilicon layer to cause the implanted nitrogen to form a first nitrogen-rich region substantially adjacent to the underlying dielectric layer and a substantially separate second nitrogen-rich region substantially adjacent the overlying dielectric layer, thereby leaving a reduced-nitrogen region located within the polysilicon layer between the first nitrogen-rich region and the second nitrogen-rich region, wherein the reduced-nitrogen region always has a lower concentration of nitrogen than the first nitrogen-rich region and the second nitrogen-rich region. |
This application is a divisional of application Ser. No. 09/143,089 filed Aug. 28, 1998 now abandoned.TECHNICAL FIELDThe present invention relates to semiconductor devices and manufacturing processes, and more particularly to methods and arrangements for effectively reducing false programming within non-volatile memory semiconductor devices that can occur as a result of electron trapping near the interface between a floating gate and an interpoly dielectric layer.BACKGROUND ARTA continuing trend in semiconductor technology is to build integrated circuits with more and/or faster semiconductor devices. The drive toward this ultra large-scale integration (ULSI) has resulted in continued shrinking of device and circuit features. As the devices and features shrink, new problems are discovered that require new methods of fabrication and/or new arrangements.A flash or block erase Electrically Erasable Programmable Read Only Memory (flash EEPROM) semiconductor memory includes an array of memory cells that can be independently programmed and read. The size of each memory cell, and therefore the memory array, is made small by omitting select transistors that would enable the cells to be erased independently. The array of memory cells is typically aligned along a bit line and a word line and erased together as a block. An example of a memory of this type includes individual metal oxide semiconductor (MOS) memory cells, each of which includes a source, drain, floating gate, and control gate to which various voltages are applied to program the cell with a binary 1 or 0. Each memory cell can be read by addressing it via the appropriate word and bit lines.An exemplary memory cell 8 is depicted in FIG. 1a. As shown, memory cell 8 is viewed in a cross-section through the bit line. Memory cell 8 includes a doped substrate 12 having a top surface 11, and within which a source 13a and a drain 13b have been formed by selectively doping regions of substrate 12. A tunnel oxide 15 separates a floating gate 16 from substrate 12. An interpoly dielectric 24 separates floating gate 16 from a control gate 26. Floating gate 16 and control gate 26 are each electrically conductive and typically formed of polysilicon.On top of control gate 26 is a silicide layer 28, which acts to increase the electrical conductivity of control gate 26. Silicide layer 28 is typically a tungsten silicide (e.g., WSi2), that is formed on top of control gate 26 prior to patterning, using conventional deposition and annealing processes.As known to those skilled in the art, memory cell 8 can be programmed, for example, by applying an appropriate programming voltage to control gate 26. Similarly, memory cell 8 can be erased, for example, by applying an appropriate erasure voltage to source 13a. When programmed, floating gate 16 will have a charge corresponding to either a binary 1 or 0. By way of example, floating gate 16 can be programmed to a binary 1 by applying a programming voltage to control gate 26, which causes an electrical charge to build up on floating gate 16. If floating gate 16 does not contain a threshold level of electrical charge, then floating gate 16 represents a binary 0. During erasure, the charge needs to be removed from floating gate 16 by way of an erasure voltage applied to source 13a. FIG. 1b depicts a cross-section of several adjacent memory cells from the perspective of a cross-section through the word line (i.e., from perspective A, as referenced in FIG. 1a). In FIG. 
1b, the cross-section reveals that individual memory cells are separated by isolating regions of silicon dioxide formed on substrate 12. For example, FIG. 1b shows a portion of a floating gate 16a associated with a first memory cell, a floating gate 16b associated with a second memory cell, and a floating gate 16c associated with a third memory cell. Floating gate 16a is physically separated and electrically isolated from floating gate 16b by a field oxide (FOX) 14a. Floating gate 16b is separated from floating gate 16c by a field oxide 14b. Floating gates 16a, 16b, and 16c are typically formed by selectively patterning a single conformal layer of polysilicon that was deposited over the exposed portions of substrate 12, tunnel oxide 15, and field oxides 14a-b. Interpoly dielectric layer 24 has been conformally deposited over the exposed portions of floating gates 16a-c and field oxides 14a-b. Interpoly dielectric layer 24 isolates floating gates 16a-c from the next conformal layer which is typically a polysilicon layer that is patterned (e.g., along the bit line) to form control gate 26. Interpoly dielectric layer 24 typically includes a plurality of films, such as, for example, a bottom film of silicon dioxide, a middle film of silicon nitride, and a top film of silicon dioxide. This type of interpoly dielectric layer is commonly referred to as an oxide-nitride-oxide (ONO) layer. The thickness and physical properties of interpoly dielectric layer 24 affect the data retention capabilities of memory cell 8.The continued shrinking of the memory cells, for example, as depicted in the memory cells of FIGS. 1a-b, requires that floating gates 16a-c be reduced in size (e.g., reduced width, length and/or height). The resulting reduced-size memory cell is typically operated with an attendant reduction in the threshold level of electrical charge that is required to program floating gate 16 to a binary 1 state. By way of example, in certain reduced-size memory cells, a binary 1 state can be represented by the electrical charge provided by as few as 5,000 electrons stored within floating gate 16. Consequently, there is a potential for false programming of the memory cell if an appropriate number of unwanted free electrons are allowed to migrate into, or otherwise charge, floating gate 16. In particular, it has been found that in certain memory cells electrons can be trapped near the interface between the floating gate 16 and the overlying interpoly dielectric layer 24 during fabrication. In certain instances these trapped electrons can escape from the trapping mechanism, for example, due to subsequent thermal changes and/or the passage of time. Once released, these unwanted electrons can falsely program floating gate 16 (e.g., to a binary 1 state). Thus, there is need for methods and arrangements that effectively reduce the potential for electron trapping, and/or false programming as a result thereof, at or near the interface between floating gate 16 and interpoly dielectric layer 24.SUMMARY OF THE INVENTIONThese needs and others are met by the present invention, which provides methods and arrangements that effectively reduce the potential for electron trapping in a polysilicon feature in a semiconductor device by advantageously employing a nitrogen-rich region within the polysilicon feature near the interface between the polysilicon feature and an overlying dielectric layer. 
Because the nitrogen-rich region significantly reduces the electron-trap density near this interface, the resulting semiconductor device is much less likely to be falsely programmed or otherwise significantly affected due to the subsequent release of trapped electrons.Thus, in accordance with certain embodiments of the present invention, there is provided a semiconductor device having a first dielectric layer, a first gate formed on the first dielectric layer, and a second dielectric layer formed on the first gate. The first gate includes a first nitrogen-rich region that is located substantially adjacent the first dielectric layer, and a substantially separate second nitrogen-rich region that is located substantially adjacent the second dielectric layer. There is also a reduced-nitrogen region within the first gate. The reduced-nitrogen region is located between the first nitrogen-rich region and the second nitrogen-rich region and has a lower concentration of nitrogen than both the first nitrogen-rich region and the second nitrogen-rich region. In certain embodiments the second dielectric layer includes a plurality of films selected from a group comprising silicon dioxide and silicon nitride and the first gate includes polysilicon. In certain embodiments, the first nitrogen-rich region has between about 0.01% and about 1% atomic percentage of nitrogen and the second nitrogen-rich region has between about 0.01% and about 1% atomic percentage of nitrogen. In still other embodiments, the lower concentration of nitrogen in the reduced-nitrogen region is less than about 0.001% atomic percentage of nitrogen.The above stated needs and others are also met by a method for forming a semiconductor device, in accordance with still further embodiments of the present invention. The method includes forming a first dielectric layer, forming a first gate on the first dielectric layer, forming at least a portion of a second dielectric layer on the first gate, and forming a first nitrogen-rich region within the first gate substantially adjacent the first dielectric layer, and a second nitrogen-rich region within the first gate substantially adjacent the second dielectric layer. In certain embodiments, the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate further includes the steps of implanting nitrogen ions through the second dielectric layer and into the first gate, the implanted nitrogen ions forming a first nitrogen concentration profile within the first layer, and causing the first nitrogen concentration profile to be altered to form a second nitrogen concentration profile within the first gate. The second nitrogen concentration profile includes a first nitrogen-rich region, a second nitrogen-rich region and a reduced-nitrogen region located between the first nitrogen-rich region and the second nitrogen-rich region. The reduced-nitrogen region has a lower concentration of nitrogen than the first nitrogen-rich region and the second nitrogen-rich region. 
By way of example, in accordance with still other certain embodiments of the present invention, the step of causing the first nitrogen concentration profile to be altered includes the steps of causing the first nitrogen-rich region to include between about 0.01% and about 1% atomic percentage of nitrogen, causing the second nitrogen-rich region to include between about 0.01% and about 1% atomic percentage of nitrogen, and/or causing the lower concentration of nitrogen in the reduced-nitrogen region to include less than about 0.001% atomic percentage of nitrogen. In certain further embodiments, the step of forming at least a portion of a second dielectric layer on the first gate includes forming a first silicon dioxide film on the first gate prior to the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate. While in still other embodiments, the step of forming at least a portion of a second dielectric layer on the first gate can also include the step of forming a silicon nitride film on the first silicon dioxide film prior to the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate, and or even the step of forming a second silicon dioxide film on the first silicon dioxide film prior to the step of the step of forming the first nitrogen-rich region and the second nitrogen-rich region within the first gate. In certain exemplary embodiments, the step of implanting nitrogen ions through the second dielectric layer and into the first gate uses an ion implantation energy of between about 10 and about 30 KeV to provide a dosage of between about 1*10<14 > and about 1*10<16 > nitrogen ions/cm<2> . The resulting first nitrogen concentration profile is then altered to form a second nitrogen concentration profile within the first gate by applying thermal energy to the first gate. For example, in certain embodiments, the internal temperature within the first gate is raised to between about 900 and about 1100[deg.] C. for a predetermined period of time.In accordance with still further embodiments of the present invention, a method for doping a polysilicon layer with nitrogen is provided. The method includes the steps of forming a polysilicon layer in a semiconductor device, the polysilicon layer sharing a first interface with an underlying dielectric layer and a second interface with an overlying dielectric layer, implanting nitrogen through the overlying dielectric layer and substantially into a polysilicon layer, and heating the polysilicon layer to cause the implanted nitrogen to form a first nitrogen-rich region substantially adjacent to the underlying dielectric layer and a substantially separate second nitrogen-rich region substantially adjacent the overlying dielectric layer. This leaves a reduced-nitrogen region within the polysilicon layer between the first nitrogen-rich region and the second nitrogen-rich region. As such, the reduced-nitrogen region has a lower concentration of nitrogen than the first nitrogen-rich region and the second nitrogen-rich region.In accordance with still further embodiments of the present invention a method for reducing electron-trap density at an interface in a semiconductor device is provided. 
The method includes the steps of forming a gate, forming a dielectric layer on the gate to create a gate/dielectric interface, and then implanting ions through the dielectric layer and into the gate, whereby the ions reduce the electron-trap density at the gate/dielectric interface. In certain embodiments, the method includes altering a profile of a concentration of the ions in the gate such that an ion-rich region is formed at the gate/dielectric interface. The ions are selected for their ability to reduce the electron-trap density near at or near the interface, without significantly affecting the function of the gate. By way of example, nitrogen ions have been found to reduce the electron-trapping in doped polysilicon gates. In certain embodiments, the step of altering the profile includes heating the gate, for example by using a rapid thermal anneal (RTA).The foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements in which:FIG. 1a depicts a cross-sectional view of a portion of a typical prior art semiconductor device having at least one memory cell, as viewed at the bit-line;FIG. 1b depicts a cross-sectional view of a portion of a typical prior art semiconductor device, as in FIG. 1a, having at least one memory cell, as viewed at the word-line;FIG. 2a depicts a cross-sectional view of a portion of a typical prior art semiconductor device, as in FIGS. 1a-b, following deposition of an interpoly dielectric layer over a plurality of patterned floating gates;FIG. 2b depicts an enlarged view of part of a floating gate as depicted in the portion of FIG. 2a, which shows that the interpoly dielectric layer is comprised of a plurality of films, including a first silicon dioxide film, a silicon nitride film and then a second silicon dioxide film, and that electrons can be trapped at or near the interface between the floating gate and the first silicon dioxide film during fabrication;FIG. 2c depicts the enlarged view of FIG. 2.b, wherein at least a portion of the trapped electrons are no longer trapped and have migrated within the floating gate, thereby causing the floating gate to become falsely programmed, for example, to a binary 1 state rather than a binary 0 state (as intended);FIG. 3 depicts a cross-sectional view of a portion of a semiconductor device having a portion of the interpoly dielectric layer, including a first silicon dioxide film and a silicon nitride film, formed over a floating gate, and wherein nitrogen ions are being implanted into the portion to create an initial nitrogen concentration profile substantially within the floating gate, in accordance with certain exemplary embodiments of the present invention;FIG. 4 is a graph depicting an initial nitrogen concentration profile as implanted within the portion as depicted, for example, in FIG. 3, wherein the resulting nitrogen concentration profile has a bell shape that is substantially located within the thickness of the floating gate, in accordance with certain exemplary embodiments of the present invention;FIG. 5 is a graph, based on FIG. 
4, depicting a migrated nitrogen concentration profile following subsequent thermal processing (e.g., a thermal anneal process) of the initial nitrogen concentration profile portion within the floating gate, wherein the graph clearly shows that the migrated nitrogen concentration profile includes a top nitrogen-rich region near the interface between the floating gate and the overlying first silicon dioxide film, and a bottom nitrogen-rich region near the interface between the floating gate and the underlying tunnel oxide, in accordance with certain exemplary embodiments of the present invention;FIG. 6 depicts the portion of FIG. 3 having a migrated nitrogen concentration profile, for example, as depicted in FIG. 5, following thermal processing, which includes a top nitrogen-rich region near the interface between the floating gate and the overlying first silicon dioxide film, and a bottom nitrogen-rich region near the interface between the floating gate and the underlying tunnel oxide, in accordance with certain exemplary embodiments of the present invention;FIG. 7 depicts the portion of FIG. 6 following formation of a second silicon dioxide film on the silicon nitride film to complete the formation of the interpoly dielectric layer, in accordance with certain exemplary embodiments of the present invention;FIG. 8a depicts a cross-sectional view of a portion of a semiconductor device having a first silicon dioxide film formed over a floating gate, and wherein nitrogen ions are implanted into the portion to create an initial nitrogen concentration profile substantially within the floating gate, in accordance with certain other embodiments of the present invention;FIG. 8b depicts the portion of FIG. 8a having a migrated nitrogen concentration profile, for example, as depicted in FIG. 5, which includes a top nitrogen-rich region near the interface between the floating gate and the overlying first silicon dioxide film, and a bottom nitrogen-rich region near the interface between the floating gate and the underlying tunnel oxide, and following formation of a silicon nitride film and second silicon dioxide film on the first silicon dioxide film to complete the formation of the interpoly dielectric layer, in accordance with certain other embodiments of the present invention;FIG. 9a depicts yet another cross-sectional view of a portion of a semiconductor device having a interpoly dielectric layer, including a first silicon dioxide film, a silicon nitride film and a second silicon dioxide film, formed over a floating gate, and wherein nitrogen ions are implanted into the portion to create an initial nitrogen concentration profile substantially within the floating gate, in accordance with certain further embodiments of the present invention; andFIG. 9b depicts the portion of FIG. 9a having a migrated nitrogen concentration profile, for example, as depicted in FIG. 5, which includes a top nitrogen-rich region near the interface between the floating gate and the overlying first silicon dioxide film, and a bottom nitrogen-rich region near the interface between the floating gate and the underlying tunnel oxide, in accordance with certain other embodiments of the present invention.DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTSThe process steps and structures described below do not form a complete process flow for manufacturing integrated circuits. 
The present invention can be practiced in conjunction with integrated circuit fabrication techniques currently used in the art, and only so much of the commonly practiced process steps are included as are necessary for an understanding of the present invention. The figures representing cross-sections of portions of an integrated circuit device during fabrication are not drawn to scale, but instead are drawn to illustrate the features of the present invention.FIG. 2a depicts an exemplary cross-sectional view of a portion 10 of a typical prior art semiconductor, similar to FIGS. 1a-b, following the formation of floating gates 16a-c and the formation of interpoly dielectric layer 24 thereon. Floating gates 16a-c are typically formed by depositing a conformal layer of doped polysilicon over the exposed surfaces of the field oxides 14a-b and tunnel oxide 15 regions of the semiconductor wafer. The layer of doped polysilicon can be formed using conventional chemical vapor deposition (CVD) or plasma enhanced chemical vapor deposition (PECVD) techniques, or the like. Next, the layer of doped polysilicon is selectively etched to electrically isolate a plurality of floating gates, such as floating gates 16a-c. The selective etching process exposes portions of the top surface of field oxide 14a between floating gates 16a and 16b, and field oxide 14b between floating gates 16b and 16c. The selective etching process typically includes forming a resist mask (not shown) on the layer of doped polysilicon and etching away exposed portions of the doped polysilicon layer and stopping on the underlying field oxides (e.g., 14a-b).As depicted in the enlarged view of portion 10 in FIG. 2b, interpoly dielectric layer 24 is typically formed over the floating gates 16a-c and field oxides 14a-b by sequentially depositing a plurality of dielectric films (e.g., films 24a-c). For example, in an exemplary embodiment interpoly dielectric layer 24 is an "ONO layer" that includes a first silicon dioxide film 24a formed on floating gate 16b (and on field oxides 14a-b as shown in FIG. 2a), a silicon nitride film 24b formed on first silicon dioxide film 24a, and a second silicon dioxide film 24c formed on silicon nitride film 24b. Films 24a-c are typically formed using conventional thermal, CVD, and/or PECVD deposition techniques. For example, in certain embodiments the first silicon dioxide film 24a is about 50 Angstroms thick and formed using conventional thermal oxide deposition techniques, the silicon nitride film 24b is about 80 Angstroms thick and formed using conventional CVD or PECVD techniques, and the second silicon dioxide layer 24c is about 40 Angstroms thick and formed using conventional CVD or PECVD techniques.A plurality of trapped electrons 25 are also depicted within the floating gate 16b, at or near the interface of the overlying first silicon dioxide film 24a. It is believed that defects are introduced near the top surface of floating gate 16b during the formation of the first silicon dioxide film 24a, and that these defects include the trapped electrons 25, and/or lead to the formation of mechanisms that trap electrons. It has been found that the trapped electrons 25 cannot be adequately removed during subsequent semiconductor device erase processes. 
Further, it has been found that many of the trapped electrons can break free of their trapping mechanisms during the lifetime of the semiconductor device and migrate away from the interface and into the interior regions of floating gate 16b, for example, as depicted in FIG. 2c. It is believed that thermal cycling of the semiconductor device during the operational lifetime tends to increase the migration of previously trapped electrons 25. Consequently, in certain semiconductor devices, especially reduced-size memory devices, this later migration of previously trapped electrons 25 can lead to false programming of floating gate 16b, thereby rendering the semiconductor device unreliable.FIG. 3 depicts a cross-sectional view of a portion 10' of an exemplary semiconductor device, in accordance with certain embodiments of the present invention. Portion 10' includes a substrate 12, upon which a tunnel oxide 15 has been formed to a thickness of about 50 Angstroms, using conventional thermal deposition techniques. A floating gate 16b' is formed on tunnel oxide 15 to a thickness of between about 300 and about 2500 Angstroms, again using conventional deposition and patterning techniques, for example, as described above with regard to FIG. 2a. A first silicon dioxide film 24a is formed on floating gate 16b' to thickness of between about 30 and about 150 Angstroms using conventional thermal deposition processes. Next, a silicon nitride film 24b is formed on first silicon dioxide film 24a to thickness of between about 50 and about 150 Angstroms using conventional chemical vapor deposition CVD or like processes.While the exact mechanisms are not fully understood, it has been found that the density of trapped electrons can be significantly reduced, if not substantially eliminated, by providing nitrogen near the interface of floating gate 16b' and first silicon dioxide 24a. With this in mind, nitrogen ions are implanted, for example, using conventional ion implantation techniques, through the silicon nitride film 24b and first silicon dioxide film 24a, and into floating gate 16b'. For example, in accordance with certain exemplary embodiments of the present invention, an ion implantation energy of between about 10 and about 30 KeV in a dosage of between about 1*10<14 > and about 1*10<16 > ions/cm<2> , and more preferably about 5*10<15 > ions/cm<2> , is used to implant nitrogen into floating gate 16b'. Methods for implanting nitrogen ions for other specific purposes are known on the art. For example, U.S. Pat. No. 4,774,197, which is hereby incorporated in the present application, in its entirety and for all purposes, describes implanting nitrogen ions to prevent the incursion of impurities into the tunnel oxide, which would degrade the quality of the tunnel oxide.The implantation of nitrogen into portion 10' creates an initial nitrogen concentration profile substantially within the floating gate 16b', in accordance with certain exemplary embodiments of the present invention. By way of example, graph 40 in of FIG. 4 depicts an initial nitrogen concentration profile 42, as is implanted within portion 10' in FIG. 3, in accordance with certain preferred embodiments of the present invention. As shown, the resulting nitrogen concentration profile 42 has a higher concentration of nitrogen located substantially within the thickness of the floating gate 16b'. 
For example, the concentration of nitrogen, as measured as an atomic percentage of the material within floating gate 16b', is preferably between about 0.01% and about 1% percent and varies as a function of the thickness of floating gate 16b'. Once floating gate 16b' has been implanted with nitrogen, a subsequent conventional thermal processing step is employed to alter the initial nitrogen concentration profile 42. The thermal processing step preferably raises the temperature within floating gate 16b' to between about 900 and about 1100[deg.] C., which causes the implanted nitrogen that is substantially within floating gate 16b' to migrate or to be otherwise repositioned substantially within floating gate 16b'. Graph 40 has been altered in FIG. 5 to depict a migrated nitrogen concentration profile 42' following subsequent thermal processing, for example, using conventional thermal anneal process techniques. As shown, migrated nitrogen concentration profile 42' includes a top nitrogen-rich region 44 near the interface between floating gate 16b' and the overlying first silicon dioxide film 24a, and a bottom nitrogen-rich region 46 near the interface between floating gate 16b' and the underlying tunnel oxide 15, in accordance with certain exemplary embodiments of the present invention. In certain cases, substantially all of the implanted nitrogen within floating gate 16b' migrates towards either of these interfaces to form region 44 and/or 46, thereby leaving only a negligible concentration of nitrogen therebetween. By way of example, an exemplary anneal process employs a Centura available from Applied Material of Santa Clara, Calif. to raise the temperature of floating gate 16b to between about 900 and about 1100[deg.] C. for a period of between about 10 and about 60 seconds.Thus, the density of trapped electrons in floating gate 16b', for example as depicted in FIG. 6 is significantly reduced, if not substantially eliminated, because of top nitrogen-rich region 44 located near the interface of floating gate 16b' and first silicon dioxide 24a. Further, bottom nitrogen-rich region 46 can, in certain semiconductor devices, increase the charge retention capabilities of floating gate 16b' and/or reduce electron trapping that can occur near the interface to tunnel oxide 15.FIG. 7 depicts the portion of FIG. 6 following formation of a second silicon dioxide film 24c on silicon nitride film 24b, which completes the formation of the interpoly dielectric layer 24, in accordance with certain exemplary embodiments of the present invention. For example, second silicon dioxide film 24c can be deposited to a thickness of between about 20 and about 80 Angstroms using conventional CVD or like techniques.In the exemplary embodiment described above and depicted in FIGS. 3-7, the nitrogen is preferably implanted into floating gate 16b' following the formation of the first silicon dioxide film 24a and silicon nitride film 24b, because this reduces the potential for causing damage to the first silicon dioxide film 24a, for example due to charging during the nitrogen ion implantation process. However, in accordance with still other embodiments of the present invention, the ion implantation process can be employed at other stages in the fabrication process.By way of example, FIG. 
8a depicts a cross-sectional view of portion 10', having only the first silicon dioxide film 24a formed over floating gate 16b', into which nitrogen ions are implanted to create an initial nitrogen concentration profile 42 substantially within floating gate 16b', in accordance with certain other embodiments of the present invention. FIG. 8b depicts portion 10' of FIG. 8a having a migrated nitrogen concentration profile 42' (for example, as depicted in FIG. 5), following a thermal processing step, which causes top nitrogen-rich region 44 and bottom nitrogen-rich region 46 to form, and following formation of silicon nitride film 24b and second silicon dioxide film 24c to complete the formation of the interpoly dielectric layer 24.Similarly, FIG. 9a depicts cross-sectional view of portion 10' having a completed interpoly dielectric layer 24 (i.e., including first silicon dioxide film 24a, silicon nitride film 24b, and second silicon dioxide film 24c) formed over floating gate 16b', in which nitrogen ions are implanted to create an initial nitrogen profile 42 substantially within floating gate 16b', in accordance with still other embodiments of the present invention. FIG. 9b depicts portion 10' of FIG. 9a having a migrated nitrogen concentration profile 42' (for example, as depicted in FIG. 5), following a thermal processing step, which causes top nitrogen-rich region 44 and bottom nitrogen-rich region 46 to form.Using the ion implantation of nitrogen and subsequent thermal processing to create a desired nitrogen concentration profile in a floating gate, the embodiments of the present invention reduces the electron-trap density at the floating gate interface. This reduces the probability of false programming of the semiconductor device.Although the present invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims. |
Techniques described herein include a method, system, and apparatus for detecting an orientation configuration. For example, an apparatus having an all-in-one port may include a first configuration pin and a second configuration pin. The apparatus may also include logic configured to enter into an accessory mode based on a presence of a first signal on the first configuration pin and a second signal on the second configuration pin. The logic may be further configured to provide an orientation indication by altering the first signal on the first configuration pin. |
1.An apparatus having an all-in-one port, the all-in-one port including:The first configuration pin;The second configuration pin;Logic unit configured to:Entering the assist mode based on the presence of the first signal on the first configuration pin and the second signal on the second configuration pin;The orientation indication is provided by changing the first signal on the first configuration pin.2.The apparatus of claim 1, wherein the orientation indication is provided prior to operating system startup.3.The apparatus of claim 1, wherein the assist mode is a debug assist mode.4.The apparatus of claim 3, wherein the presence of the signals on the first configuration pin and the second configuration pin is provided by a debug test system communicatively coupled to the all-in-one port. .5.The apparatus of claim 3, wherein providing the orientation indication via the logic unit of the apparatus generates a multiplexing configuration at the debug test system.6.The apparatus of claim 3, wherein the debug test system provides a signal to the device under test for detection, and the device under test modifies a multiplex configuration.7.The apparatus of claim 6, wherein the apparatus is configured to:Sink computing device during debug assist mode; orSource device during the debug assist mode.8.The apparatus of claim 1, wherein the logic unit is configured to change the voltage on the first configuration pin by decreasing a voltage level of the first signal on the first configuration pin The first signal.9.The apparatus of claim 8, wherein the logic unit is configured to reduce the voltage level on the first signal of the first configuration pin by grounding the first signal.10.The apparatus of claim 1, wherein the logic unit is configured to change the location on the first configuration pin by increasing the voltage level of the first signal on the first configuration pin The first signal is described.11.A method for orientation detection of all-in-one ports includes:Entering an assist mode based on the presence of a first signal on a first configuration pin of the multi-port and a second signal on a second configuration pin of the multi-port; andThe orientation indication is provided by changing the first signal on the first configuration pin.12.The method of claim 11, wherein the direction indication is provided without initializing operating system software associated with the all-in-one port.13.The method of claim 11 wherein said assist mode is a debug assist mode.14.The method of claim 13, wherein the presence of the signals on the first configuration pin and the second configuration pin is provided by a debug test system.15.The method of claim 13, further comprising multiplexing at the debug test system based on the orientation indication.16.A tangible, non-transitory computer readable medium comprising instructions that, when executed by a processor, instruct the processor to:The auxiliary mode is entered based on the presence of the first signal on the first configuration pin of the all-in-one port and the second signal on the second configuration pin; andThe orientation indication is provided by changing the first signal on the first configuration pin.17.The computer-readable medium of claim 16, wherein the orientation indication is provided without initializing operating system software associated with the all-in-one port.18.The computer-readable medium of claim 16 wherein the auxiliary mode is a debug assist mode.19.A system for orientation detection includes:Goals, which 
include:The first configuration pin;The second configuration pin;a module for entering debug assist mode based on the presence of a first signal on the first configuration pin and a second signal on the second configuration pin;a module for providing an orientation indication by changing the first signal on the first configuration pin; andA module that generates a multiplexed configuration at the debug test system is indicated based on the orientation.20.The system of claim 19, wherein the orientation indication is provided without initializing the target operating system software.21.A system for orientation detection includes:Goals, which include:The first configuration pin;The second configuration pin;Logic unit configured to:Entering a debug assist mode based on the presence of a first signal on the first configuration pin and a second signal on the second configuration pin;Providing an orientation indication by changing the first signal on the first configuration pin; andA debug test system includes a logic unit configured to generate a multiplexed configuration at the debug test system based on the orientation indication.22.The system of claim 21, wherein the orientation indication is provided without initializing the target operating system software.23.The system of claim 21, wherein the assist mode is a debug assist mode.24.The system of claim 23, wherein the presence of signals on the first configuration pin and the second configuration pin are provided by a debug test system communicatively coupled to the all-in-one port.25.The system of claim 23, wherein the direction indication is provided via the logic unit of the device to generate a multiplexing configuration at the debug test system. |
Orientation indicator connectorCross-reference to related applicationsThis application claims the benefit of U.S. Patent Application No. 14/979,243, filed on December 22, 2015 by Kuehnis et al., U.S. Patent Application No. 14/979,243, which requires the benefit of U.S. Provisional Patents filed by Kuehnis et al. on June 30, 2015; The benefit of application serial number 62/186626, the contents of both of these patent applications are incorporated herein by reference as if fully set forth herein.Technical fieldThe present disclosure generally relates to techniques for indicating the orientation of a connector at a computing device. In particular, the present disclosure relates to the orientation of a computing device having a multi-in-one connector.Background techniqueThe computing system may include an integrated circuit, a system-on-chip (SOC), and other circuit components configured to integrate multiple microcontrollers. Parts of the computing system may encounter errors. For example, each microcontroller within a given system may have their own firmware components and their own operating system driver components. Many of these microcontroller firmware and driver components may encounter errors that may need to be debugged.Description of the drawingsFigure 1 shows a device under test with an all-in-one port communicatively coupled to a debug system;FIG. 2A is a timing diagram illustrating a process for detecting an orientation in an assist mode; FIG.FIG. 2B is a timing diagram illustrating a process for detecting an orientation in an assist mode when the device under test is a downward facing port; FIG.FIG. 3 is a flowchart illustrating a procedure for detecting an orientation in an assist mode; FIG.FIG. 4 is a block diagram of a tangible, non-transitory computer readable medium providing orientation of a connector; andFIG. 5 is a block diagram of an example system for indicating the orientation of a connector.The same reference numerals have been used throughout this disclosure and the drawings to indicate like components and features. Reference numerals in the 100 series refer to features originally found in FIG. 1 ; reference numerals in the 200 series refer to features originally found in FIG. 2 ; and so on.detailed descriptionThe present disclosure generally relates to techniques for detecting the orientation in an assist mode at an all-in-one port. The all-in-one port may provide a power interface that may be at least partially or completely reversible, and may include a general data interface and additional data-specific interfaces such as a display interface, an audio interface, and the like. In general, the orientation indication may be provided by detecting signals on two or more configuration pins. For example, where only two orientations are available, a signal at the first directional pin instead of the second directional pin may provide an orientation indication. However, during debugging, the debug test system can provide the same signal on each configuration pin to bring the target system into debug assist mode. Therefore, the orientation may not be easily detectable. The techniques described herein include a logic unit at the target system that can enter debug assist mode when signals are detected at both the first and second configuration pins. The assist mode discussed in this article may refer to the debug assist mode, and may also refer to other assist modes, including the audio assist mode. 
The logic unit may then be configured to change one of the signals to provide an orientation indication. The orientation indication may be received by a logic unit of a debug test system having a multiplexer that may be configured based on the orientation indication. Although the debug mode is discussed generally herein, the techniques can also be implemented to other interfaces and related modes. Furthermore, the specific specifications of the multi-port assist modes and ports may be discussed throughout this disclosure, however, the techniques disclosed herein may be implemented in any directional environment with suitable configurations and support as described below.An example of an all-in-one port may include a universal serial bus (USB) "type C" connector indicated in the specification standard entitled "USB Type-C Cable and Connector Specification, Revision 1.1, April 3, 2015". The standard is referred to herein as the "USB Type C Specification." As discussed in more detail below, the USB Type C connector may include a reversible plug connector. Other all-in-one ports can be implemented in the debugging techniques described herein. However, for simplicity, the all-in-one ports may be interchangeably referred to herein or collectively simply as multi-port or USB type C connectors.For contexts, in the USB Type C specification, when a port asserts a pull-up resistor value (Rp/Rp) (usually a downward-facing port (DFP)) and connects to a given pull-down resistor value (Rd/Rp) When the other port (usually the upward-facing port (UFP)) of the ), the two ports can enter the debug assistant mode (DAM). In the example, one port is given two Rp—one on each of the two configuration pins (eg, CC1 and CC2) in the socket, and the other port is given two Rds—in CC respectively. On each of the feet (for example, CC1 and CC2). The cable can have a wire extending through it that can be connected to the CC pin in the plug. Due to the voltage of the cable above ground but less than the voltage applied to Rp, the cable orientation can be determined when both the ports are observed for Rp and Rd connected by the cable. At the Rp port, the corresponding CC pin can be at the applied voltage, and at the Rd port, the corresponding CC pin can be at ground. In order to detect the debug mode in the current embodiment, two wires extend through the cable to detect the presence of Rd or Rp. However, although the use of cables allows Rd/Rd or Rp/Rp to be detected, this detection may be at the expense of loss of orientation information. In the present disclosure, a "debug" cable may be used to connect the two CC pins through a cable so that the connection can be signaled in a debug mode or other similarly signaled mode.The target may be referred to herein as "Device Under Test (DUT)" and may be a DFP or UFP. The debug test system (DTS) is generally pre-configured to test DFP or UFP. When we detect that its resistance pull-down value (Rd) is at a voltage higher than ground (for example, 2.5 volts (V)) and when its resistance pull-up value (Rp) is lower than its pull-up value At voltages (such as 2.5 volts (V)), both DTS and DUT enter DAM mode. However, in a general sense, the techniques described herein may be implemented without involving dual roles when the configuration channel is also used to initiate debug mode in the dual role scenario discussed above.The techniques described herein provide a debugging test system with orientation indications from the DUT. 
Because the DUT can be more and the debug test system can be less, the manufacturing costs associated with installing and executing the directional technology can be reduced with debugging the test system to implement the techniques of the present invention.FIG. 1 shows a target computing system having an all-in-one port to be communicatively coupled to a debug system. Computing system 100 may include a device under test (DUT) 102 having an all-in-one port 104 with a plurality of pins 106 . The pin 106 of the all-in-one port 104 may be configured to be communicatively coupled to a debug test system (DTS) 108 . The DTS 108 may be configured as a test access port (TAP), such as a Joint Test Action Group (JTAG) TAP 110, a Universal Asynchronous Receiver/Transmitter (UART) TAP 112, or any other type of debug TAP or test access mechanism (TAM) Or run the control mechanism to run one or more types of debugging operations. Although the DUT 102 of FIG. 1 shows the JTAG TAP 110 and the UART 112, the DUT 102 may include any combination of the illustrated TAP or operational control mechanisms and be configured to provide pathways for generating debug trace data from the trace source 114. Any other type of port, and any port used to run the debug control process, for example, is not necessarily a JTAG-based debug access port (DAP). The trace data from the trace source 114 may include trace statements that indicate the flow of execution of various components (not shown) of the DUT 102 . Tracking data may be provided from the tracking source 114 to the DTS 108 through the all-in-one port 104 .The all-in-one port 104 may include a device under test logic unit (DUTL) 116 . DUTL 116 may be configured to enter an assist mode and provide an orientation indication to DTS 108 as discussed in more detail below. In some cases, DUTL 116 may be configured to recognize which configuration channel signal is available for universal serial bus "USB" power transfer (PD) communication. In this situation, some systems may require full operation of the voltage bus (VBUS) at higher voltages. In addition, PD communications may be required for additional reconfiguration of the all-in-one port 104.As discussed above, the all-in-one port 104 can be implemented as a USB Type C connector with pins 106 listed in the enlarged block 118 . USB type C connector 118 is an at least partially reversible plug connector. As indicated at dotted box 120, data lines D+ and D- may be disposed on the upper half of the USB Type C connector 118 and on the lower half of the connector such that they are diametrically opposed. The arrangement of the data lines at 120 provides a reversible function so that the plug can be received in a face-up or inverted arrangement at the USB Type C connector 118 .In general, both the target computing device (eg, DUT 102) and the test system (eg, DTS 108) may use the USB Type C connector 118. As shown in FIG. 1, USB type C connector 118 may include a 24-pin two-sided connector interface that provides four power/ground pairs (GND) 122 and 124, USB 2.0 high speed (HS) indicated at block 120. 
) Two differential pairs of data bus (D+, D-), four pairs of super high speed (SS) data buses indicated at dashed boxes 126 and 128 (TX1+, TX1-, RX1+, RX1-, TX2+, TX2-, RX2+, RX2-), the two “sideband use” pins (SBU1 and SBU2) indicated at 130 and 132, and the second position of the BMC configuration data channel pin for cable orientation detection A configuration (CC1) pin 134 and a second configuration (CC2) pin 136, and a power interface pin (VBUS) indicated by dashed boxes 138 and 140. In the example, the CC signal path can be used for USB power transfer or BMC, communication.As discussed above, DUTL 116 may be configured to enter an assist mode and provide an orientation indication to DTS 108 . For example, when the DTS 108 is connected to the DUT 102, the signals may be rendered to the CC1 pin 134 and the CC2 pin 136. In the example with an additional debug cable, the signals that are rendered to each of the CC1 pin 134 and the CC2 pin 136 may be the same value, such as a 2.5 volt value. Based on the presence of the first signal on CC1 pin 134 having the same value and the second signal on CC2 pin 136, the logic unit may be configured to enter an assist mode, such as debug assist mode (DAM). DUTL 116 may be further configured to provide an orientation indication back to DTS 108 by changing a first signal on one of the configuration channel pins (eg, CC1 pin 134 or CC2 pin 136 ). For example, the signal provided by DTS 108 may be a 2.5 volt signal detected at DUTL 116. The DUTL 116 may change the signal by electrically removing a resistive pull down or pulling a signal at one of the configuration pins (eg, CC1 pin 134) to ground. This change can be detected by the debug test system logic unit (DTSL) 142 of the DTS 108 and can be used to configure the multiplexer (MUX) 144 of the DTS based on the orientation indication so that debugging operations can begin. In this case, the all-in-one port 104 can be configured as a UFP. Although this example shows that the DTS is multiplexed, in another example, the DTS signal and the DUT can detect and change the MUX.In some cases, the DTS 108 is configured to test the DFP. In this scenario, the DTS 108 may be responsible for manipulating a resistive pull down signal (Rd), and the DUT 102 (acting as a DFP in this case) may be responsible for manipulating its MUX (not shown) to properly connect the signals. In addition, in some cases, the DFP under test may remove a pull-up from a CC pin (eg, CC1 pin 134 or CC2 pin 136 ), and the DTS 108 may be configured to detect orientation and control the MUX. 144.Entering the DAM and providing orientation detection may also be performed during or during the boot stage of the DUT 102 before the operating system associated with the DUT 102 is initialized. As a result, orientation detection can be provided with a relatively low latency and used in early stage operations compared to other debug mode states that may require operating system functionality. In other words, an orientation indication may be provided without initializing software at DUT 102, moving some operations of the directional configuration to DTSL 142 and MUX 144 of DTS 108.FIG. 2A is a timing diagram illustrating a process for detecting an orientation in an assist mode. At 202, a debug test system (eg, DTS 108 of FIG. 1) renders a signal to a target system, such as DUT 102 of FIG. 
As indicated at 202, the signal may be implemented using a symmetric resistor level on the current sense line where Rd/Rd represents the CC1 pin 134 and CC2 pin 136 of FIG. 1 as seen from the perspective of the DTS 108. Symmetrical pull-down signal. When DUT 102 detects a signal on each of CC1 pin 134 and CC2 pin 136 having symmetrical resistor levels, DUT 102 may enter a sink configuration where DUT 102 acts as a device, as opposed to the host, as indicated at 204 . of. At block 206, the DTS 108 may switch to assume Rp/Rp - a symmetrical resistive pull-up signal at CC1 pin 134 and CC2 pin 136 appears from the perspective of the DTS 108. In response, DUT 102 may enter assist mode at block 208, such as the DAM discussed above with respect to FIG. At block 210, DUTL 116, eg, DUT 102, may be configured to change the signal as detected by the DTS to provide an orientation indication. At block 212, the change is detected and MUX 144 is configured at block 214. For example, the Rd/Rd signal presented at 202 from the perspective of the DTS 108 can be changed so that the DTS 108 can detect the same resistance level Rd on one configuration pin and ground (GND) on the other configuration pin. ). As another example, the signal may be changed such that the DTS 108 can detect the same resistance level Rd on one configuration pin and Rp on the second configuration pin. In other words, sensing of the voltage levels (Vcc1 and Vcc2) of each of the pin 134 and the CC2 pin 136 may be performed by CC1. If Vcc1 - Vcc2 = 0, the DUT 102 is in assist mode. If Vcc1 - Vcc2 > 0, the MUX 144 is in the default mode. If Vcc1 - Vcc2 <0, the MUX 144 switches to an alternate orientation. Although the equations presented herein can be used for reference, in implementation, some thresholds Vcc can be integrated to compensate for the resistor value margins used on the all-in-one port 104 .Although FIG. 2A discusses various resistance signaling including Rp and Rd and GND, the techniques described herein may be implemented using any of a variety of resistor signaling, current sources, and/or sinks instead of resistors, or any combination thereof. Based on the perspective of DTS 108 or DUT 102. For example, DTS 108 may provide DTS 108 with a symmetric signal. This signal can be viewed by the DUT 102 as a pull-up signal and by the DTS 108 as a pull-down signal. In either perspective, the techniques provided herein enable DUT 102 to enter debug mode without initializing software while still providing configuration instructions that may otherwise be provided with software.FIG. 2B is a timing diagram illustrating a process for detecting an orientation in an assist mode when the device under test is a downward facing port. At 202, a debug test system (eg, DTS 108 of FIG. 1) renders a signal to a target system, such as DUT 102 of FIG. As indicated at 216, the signal may be implemented using a symmetric resistor level on the current sense line, where Rp/Rp represents the CC1 pin 134 and the CC2 pin 136 appearing in FIG. 1 from the perspective of the DTS 108. The symmetrical resistance pulls up the signal. When the DUT 102 detects a signal on each of the CC1 pin 134 and the CC2 pin 136 having symmetrical resistor levels, the DUT 102 may enter the source configuration, where the DUT 102 acts as a master, as opposed to the device, as indicated at 218 of. At block 220, the DTS 108 may switch to display a symmetric pull-down signal at the CC1 pin 134 and the CC2 pin 136 from the perspective of the DTS 108 assuming Rd/Rd. 
In response, DUT 102 may enter assist mode at block 222, such as DAM discussed above with respect to FIG. At block 224, the DUTL 116, eg, DUT 102, may be configured to change the signal as detected by the DTS to provide an orientation indication. For example, the Rp/Rp signal presented at 216 from the perspective of the DTS 108 can be changed so that the DTS 108 can detect the same resistance level Rd on one configuration pin and the disconnection on the other configuration pin ( Open) signal value or Rp. The change is detected at block 226, and MUX 144 is configured at block 228.FIG. 3 is a flowchart illustrating a method 300 for detecting orientation in an assist mode. At block 302, the method 300 may include entering an assist mode based on the presence of a first signal on a first configuration pin of an all-in-one port and a second signal on a second configuration pin. At block 304, the method 300 may include providing an orientation indication by changing the first signal on the first configuration pin.In some cases, an orientation indication is provided without initializing the software associated with the all-in-one port. In addition, the assist mode is a debug assist mode in some cases. In this case, the presence of the signal at block 302, on the first and second configuration pins is provided by the debug test system.Method 300 may also include multiplexing at the debug test system based on the orientation indication. In some cases, method 300 may also include configuring an all-in-one port, which may be configured as a source or sink. In this scenario, the all-in-one port is configured as a sink during debug assist mode.Changing the signal at block 304 may include reducing the voltage level of the first signal on the first configuration pin. In this case, reducing the voltage level on the first signal of the first configuration pin includes grounding the first signal. However, in some cases, changing the first signal on the first configuration pin includes increasing the voltage level of the first signal on the first configuration pin.FIG. 4 is a block diagram illustrating an example of a tangible, non-transitory computer-readable medium 400 that may use data processing and graphics operations to facilitate query and context access. Computer readable media 400 may be accessed by processor 402 through computer interconnect 404 . Processor 402 may be a server processor, a compute node processor, or other processor. Tangible, non-transitory computer-readable medium 400 may include executable instructions or code to instruct processor 402 to perform the operations of the techniques described herein.The various software components discussed herein may be stored on a tangible, non-transitory computer-readable medium 400, as indicated in FIG. For example, debug accessor mode (DAM) entry device 406 may instruct processor 402 to detect signals on multiple pins. In the example, the pins can be CC pins, such as CC1 134 and CC2 136. The DAM entry device 406 may instruct the processor 402 to detect a signal indicating to enter the debug mode. In an example, each pin can carry 2.5 volts that will be detected upon receiving an instruction from the DAM to enter device 406 to processor 402. In an example, DAM entry device 406 may instruct processor 402 to detect Rd/Rd and may enter DAM. The orientation provider 408 may instruct the processor 402 to provide orientation by instructing the processor to modify the voltage on the pin. 
In the example, both the first and second pins may be CC pins, as indicated above. In an example, the change in voltage by the direction of the provider 408 to the processor 402 may result in a raised or lowered voltage across one of the pins. A multiplexer (MUX) configuration (config) generator 410 may instruct the processor 402 to generate a multiplexing configuration at the computer-readable medium 400 based on the orientation indication provided by the orientation providing device 406. It should be understood that any number of additional software components not shown in FIG. 4 may be included within the tangible, non-transitory computer-readable medium 400, depending on the application.FIG. 5 is a block diagram of an example system 500 (eg, a computing device) for indicating the orientation of a connector. Computing system 500 may be, for example, a laptop computer, a desktop computer, a tablet computer, a mobile device or a server, and the like. In particular, the computing system 500 may be a mobile device, such as a cell phone, a smart phone, a personal digital assistant (PDA), a tablet phone, or a tablet. The computing system 500 may include a central processing unit (CPU) 502 configured to execute the stored instructions and a memory device 504 that stores instructions executable by the CPU 502 . The CPU may be coupled to memory device 504 by bus 506 . In addition, the CPU 502 may be a single-core processor, a multi-core processor, a computing cluster, or any number of other configurations. In addition, computing system 500 may include more than one CPU 502 . Memory device 504 may include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory system. For example, memory device 504 may include dynamic random access memory (DRAM).Computing system 500 may also include a graphics processing unit (GPU) 508 . As shown, CPU 502 may be coupled to GPU 508 through bus 506 . The GPU 508 may be configured to perform any number of graphics operations within the computing system 500 . For example, GPU 508 may be configured to render or manipulate graphics images, graphics frames, video, etc. for display to a user of computing system 500 . In an embodiment, the GPU 508 includes a plurality of graphics engines, where each graphics engine is configured to perform a particular graphics task or to perform a particular type of workload.The CPU 502 may be linked via bus 506 to a display interface 510 configured to connect the computing system 500 to the display device 512 . Display device 512 may include a display screen that is a built-in component of computing system 500 . The display device 512 may also include a computer monitor, a television, a projector, or the like externally connected to the computing system 500 .The CPU 502 may also be connected via a bus 506 to an input/output (I/O) device interface 514 configured to connect the computing system 500 to one or more I/O devices 516 . For example, I/O device 516 may include a keyboard and a pointing device, where the pointing device may include a touchpad or a touch screen or the like. I/O device 516 may be a built-in component of computing system 500 or may be a device externally connected to computing system 500 .Computing system 500 may also include a storage device 518 . Storage device 518 is a physical memory, such as a hard disk drive, a solid state drive, an optical drive, a thumb drive, a drive array, or any combination thereof. 
Storage device 518 may also include a remote storage drive, such as for cloud computing applications. Storage device 518 includes any number of applications configured to run on computing system 500 . Storage device 520 may include orientation indicator 520 .The orientation indicator 520 may instruct the connector to enter debug assist mode (DAM) based on signals across multiple pins. The orientation indicator 520 can change the signal on one or more pins while in DAM to indicate the orientation of the connection. In an example, the change in the signal may include a modification of the voltage provided across the pins controlled by the orientation indicator 520 . The orientation indicator 520 may indicate the signaling of the exit from the DAM by returning the changed signal on at least one of the pins to the unaltered state. In another example, the directional indicator 520 may indicate the signaling of the exit from the DAM by modifying the signal on the multiple pins to the second changed signal. In the case where there are two pins, the exit from the assist mode or DAM indicated by the orientation indicator 520 may include modifying the signal on the pin to ground, Rp/Rp, Rd/Rd, or provided with zero voltage.Computing system 500 may also include a network interface controller (NIC) 522 . NIC 522 may be configured to connect computing system 500 to network 524 via bus 506 . The network 524 may be a wide area network (WAN), a local area network (LAN), the Internet, or the like. In an example, the orientation indicator 520 may use the all-in-one port to indicate the orientation and configuration of the connector through the network 524 .exampleExample 1 is a device with an all-in-one port, an all-in-one port. The apparatus includes: a first configuration pin; a second configuration pin; a logic unit configured to: based on the presence of the first signal on the first configuration pin and the second signal on the second configuration pin To enter the assist mode; the orientation indication is provided by changing the first signal on the first configuration pin.Example 2 includes the apparatus described in Example 1 with or without optional features. In this example, an orientation directive is provided before the operating system starts.Example 3 includes the apparatus of any of examples 1 to 2 that includes or does not include optional features. In this example, the assist mode is the debug assist mode. Optionally, the presence of signals on the first and second configuration pins is provided by a debug test system that is communicatively coupled to the all-in-one port. Optionally, providing a directional indication via the device's logic unit results in a multiplexed configuration at the debug test system. Optionally, the debug test system provides signals for the device under test to detect, and the device under test changes the multiplex configuration. Optionally, the apparatus is configured as a sink computing device during a debug assist mode; or a source device during a debug assist mode.Example 4 includes the apparatus of any of Examples 1 to 3, with or without optional features. In this example, the logic unit is configured to change the first signal on the first configuration pin by reducing the voltage level of the first signal on the first configuration pin. 
Optionally, the logic unit is configured to reduce the voltage level on the first signal of the first configuration pin by grounding the first signal.Example 5 includes the apparatus of any of examples 1 to 4, which includes or does not include optional features. In this example, the logic unit is configured to change the first signal on the first configuration pin by increasing the voltage level of the first signal on the first configuration pin.Example 6 is a method for directional detection of an all-in-one port. The method includes entering an assist mode based on the presence of a first signal on a first configuration pin and a second signal on a second configuration pin; and providing by changing a first signal on the first configuration pin Directional indication.Example 7 includes the method described in Example 6, which includes or does not include optional features. In this example, an orientation indication is provided without initializing the operating system software associated with the all-in-one port.Example 8 includes the method of any of Examples 6 to 7, which includes or does not include optional features. In this example, the assist mode is the debug assist mode. Optionally, the presence of signals on the first and second configuration pins is provided by the debug test system. Optionally, the method includes multiplexing at a debug test system based on an orientation indication. Optionally, the method includes configuring an all-in-one port, which may be configured as a source or sink. Optionally, the all-in-one port is configured as a sink during debug assist mode or as a source during debug assist mode.Example 9 includes the method of any of Examples 6 to 8, which includes or does not include optional features. In this example, changing the first signal on the first configuration pin includes reducing the voltage level of the first signal on the first configuration pin. Optionally, reducing the voltage level on the first signal of the first configuration pin includes grounding the first signal.Example 10 includes the method of any of Examples 6-9, including or not including optional features. In this example, changing the first signal on the first configuration pin includes increasing the voltage level of the first signal on the first configuration pin.Example 11 is a system for orientation detection. The system includes a target including: a first configuration pin; a second configuration pin; a logic unit configured to: base on a first signal on the first configuration pin and a second signal on the second configuration pin Exists to enter a debug assist mode; provide an orientation indication by changing a first signal on a first configuration pin; and a debug test system that includes a logic unit configured to generate a multiplex configuration at a debug test system based on the orientation indication .Example 12 includes the system described in Example 11 with or without optional features. In this example, operating system software is provided that indicates orientation without initializing the target.Example 13 includes the system of any of Examples 11-12, with or without optional features. In this example, the target may be configured as a source or sink, and where the target is configured as a sink computing device during the debug assist mode.Example 14 includes the system of any of Examples 11-13, with or without optional features. 
In this example, the target logic unit is configured to change the first signal on the first configuration pin by reducing the voltage level of the first signal on the first configuration pin.Example 15 includes the system of any of Examples 11-14, with or without optional features. In this example, the target logic unit is configured to change the first signal on the first configuration pin by increasing the voltage level of the first signal on the first configuration pin.Example 16 includes the system of any of Examples 11-15, with or without optional features. In this example, the target logic cell is configured to reduce the voltage level on the first signal of the first configuration pin by grounding the first signal.Example 17 includes the system of any of Examples 11-16, with or without optional features. In this example, the target can be configured as: sink during debug assist mode; or source during assist mode.Example 18 includes the system of any of Examples 11-17, with or without optional features. In this example, the assist mode is the debug assist mode.Example 19 includes the system of any of Examples 11-18, with or without optional features. In this example, the presence of signals on the first and second configuration pins is provided by the debug test system.Example 20 includes the system of any of Examples 11-19, with or without optional features. In this example, the target signals the exit from the assist mode by returning the first signal on the first configuration pin from the changed level.Example 21 is a tangible, non-transitory computer-readable medium that includes instructions that, when executed by a processor, instruct the processor to provide the orientation of the connector. The computer readable medium includes instructions for a processor to: enter an assist mode based on the presence of a first signal on a first configuration pin of an all-in-one port and a second signal on a second configuration pin; The directional indication is provided by changing the first signal on the first configuration pin.Example 22 includes the computer-readable medium of Example 21, with or without optional features. In this example, an orientation indication is provided without initializing the operating system software associated with the all-in-one port.Example 23 includes the computer-readable medium of any of Examples 21-22, with or without optional features. In this example, the assist mode is the debug assist mode. Optionally, the presence of signals on the first and second configuration pins is provided by the debug test system. Optionally, the computer-readable medium includes multiplexing at a debug test system based on the orientation indication. Optionally, the computer-readable medium includes a configuration of a multi-port, which can be configured as a source or sink. Optionally, the all-in-one port is configured as a sink during debug assist mode or as a source during debug assist mode.Example 24 includes the computer-readable media of any of Examples 21-23, with or without optional features. In this example, changing the first signal on the first configuration pin includes reducing the voltage level of the first signal on the first configuration pin. Optionally, reducing the voltage level on the first signal of the first configuration pin includes grounding the first signal.Example 25 includes the computer-readable medium of any of Examples 21-24, with or without optional features. 
In this example, changing the first signal on the first configuration pin includes increasing the voltage level of the first signal on the first configuration pin.Example 26 is a device with an all-in-one port, an all-in-one port. The apparatus includes instructions to direct a processor to: a first configuration pin; a second configuration pin; a presence of a second signal based on a first signal on a first configuration pin and a second configuration pin A module to enter the assist mode; a module that provides an orientation indication by changing the first signal on the first configuration pin.Example 27 includes the device of Example 26, with or without optional features. In this example, operating system software is provided that indicates orientation without initializing the device.Example 28 includes the apparatus of any of Examples 26-27, with or without optional features. In this example, the assist mode is the debug assist mode. Optionally, the presence of signals on the first and second configuration pins is provided by a debug test system that is communicatively coupled to the all-in-one port. Optionally, an orientation indication is provided via the module of the device to generate a multiplexing configuration at the debug test system. Optionally, the device is a computing device that may be configured as a source computing device or a sink computing device. Optionally, the apparatus is configured as a sink computing device during a debug assist mode; or a source device during a debug assist mode.Example 29 includes the apparatus of any of Examples 26-28, with or without optional features. In this example, the module of the device changes the first signal on the first configuration pin by reducing the voltage level of the first signal on the first configuration pin.Example 30 includes the apparatus of any of Examples 26-29, with or without optional features. In this example, the module of the device reduces the voltage level on the first signal of the first configuration pin by grounding the first signal.Example 31 includes the apparatus of any of Examples 26-30, with or without optional features. In this example, the module of the device changes the first signal on the first configuration pin by increasing the voltage level of the first signal on the first configuration pin.The embodiments are embodiments or examples. References in the specification to "an embodiment," "one embodiment," "some embodiments," "various embodiments," or "other embodiments" are intended to refer to particular features, structures, or characteristics described in connection with the embodiments It is included in at least some but not necessarily all of the embodiments of the technology of the present invention. The various appearances of "an embodiment," "one embodiment," or "some embodiments" do not necessarily all refer to the same embodiment.All components, features, structures, or characteristics that are not described and illustrated herein need to be included in one or more particular embodiments. For example, if the specification states a component, feature, structure, or characteristic "may," "may," "might," or "could" be included, that particular component, feature, structure, or characteristic does not necessarily need to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. 
If the specification or claim refers to "extra" elements, that does not exclude more than one of the additional elements.It should be noted that although some embodiments have been described with reference to specific embodiments, other embodiments are possible according to some embodiments. Moreover, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. According to some embodiments, many other arrangements are possible.In each system shown in the figures, elements may in some cases each have the same reference numbers or different reference numbers to suggest that the elements represented may be different and/or similar. However, the elements may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which is called the first element and which is called the second element are arbitrary.It should be understood that details in the aforementioned examples may be used anywhere in one or more embodiments. For example, all optional features of the computing device described above may also be implemented with respect to any of the methods or computer-readable media described herein. Additionally, although flowcharts and/or state diagrams may be used herein to describe embodiments, the techniques are not limited to those diagrams or the corresponding descriptions herein. For example, flow need not go through each illustrated box or state or in exactly the same order as shown and described herein.The techniques of this disclosure are not limited to the specific details set forth herein. Indeed, those skilled in the art, having the benefit of this disclosure, will appreciate that many further variations from the foregoing description and accompanying drawings may be made within the scope of the techniques of this disclosure. Accordingly, it is the following claims, including any amendments thereto, that define the scope of the technology of the present invention. |
Techniques and mechanisms for providing structures of a magnetic material based inductor. In an embodiment, an inductor (300) comprises a body (341) of a magnetic material, and a conductor (306) which extends along a surface of the body. The body comprises a carrier material (161) and magnetic filler particles (162) distributed in the carrier material. A passivation material (315) of the inductor is provided adjacent to the conductor and to surfaces of the filler particles. The conductor and the passivation material comprise different respective material compositions, wherein the passivation material comprises one of nickel, tin, copper, palladium, or gold. In another embodiment, the inductor is one of a plated through hole inductor type of a planar inductor type. |
A device comprising:a first body in a substrate, the first body comprising a carrier material and first magnetic filler particles;a first conductor which extends along a surface of the first body; anda first material adjacent to the first conductor and to the first magnetic filler particles, wherein the first conductor and the first material comprise different respective material compositions; anda first terminal and a second terminal coupled to the first conductor such that an inductor is formed.The device of claim 1, wherein the first material comprises one of nickel, tin, copper, palladium, or gold.The device of claim 1 or claim 2, wherein the first material comprises one of an inorganic nitride, a metal oxide, or a polymer.The device of any of claims 1 through claim 3, wherein:in a first region, a first portion of the first material is between the first body and a second portion of the first conductor;in a second region, a third portion the first material is between the first body a fourth portion of the first conductor; andany portion of the first conductor is outside of a third region between the first region and the second region.The device of claim 4, wherein a first hole extends through the first body, wherein the first conductor extends in the first hole, and wherein the third region is in the first hole.The device of claim 5, further comprising:a second body in the substrate, the second body comprising the carrier material and second magnetic filler particles, wherein a second hole extends through the second body;a second conductor which extends in the second hole, wherein the first conductor and the second conductor are coupled in series with each other between the first terminal and the second terminal; anda second material adjacent to the second conductor and to the second magnetic filler particles, wherein the second conductor and the second material comprise different respective material compositions, wherein the second material comprises one of nickel, tin, copper, palladium, or gold; andwherein:in a fourth region, a fifth portion of the second material is between the second body and a sixth portion of the second conductor;in a fifth region, a seventh portion the second material is between the second body an eighth portion of the second conductor; andany portion of the second conductor is outside of a sixth region between the fourth region and the fifth region.The device of claim 4, wherein:the first body forms at least in part a first side of a layer of the substrate;the first conductor is on the first side; andthe first side extends through the first region, the second region, and the third region.The device of claim 7, wherein:in a fourth region, a fifth portion of the first material is between the first body and a sixth portion of the first conductor;the second region is between the third region and the fourth region; andany portion of the first conductor is outside of a fifth region between the second region and the fourth region.The device of any of claims 1 through claim 3, wherein the magnetic filler particles comprise one of iron, nickel, zinc, or silicon.The device of any of claims 1 through claim 3, wherein the carrier material comprises one of a polymer resin, a rubber, or a ceramic.A method comprising:forming in a substrate a first body comprising a carrier material and first magnetic filler particles;depositing a first material on surfaces of the first magnetic filler particles;after depositing the first material, forming a first conductor which extends along 
a surface of the first body, wherein the first material is adjacent to the first conductor and to the first magnetic filler particles, wherein the first conductor and the first material comprise different respective material compositions; andcoupling a first terminal and a second terminal to the first conductor to provide an inductor.The method of claim 11, wherein the first material comprises one of nickel, tin, copper, palladium, or gold.The method of claim 11 or claim 12, wherein the first material comprises one of an inorganic nitride, a metal oxide, or a polymer.The method of any of claims 11 through claim 13, wherein:in a first region, a first portion of the first material is between the first body and a second portion of the first conductor;in a second region, a third portion the first material is between the first body a fourth portion of the first conductor; andany portion of the first conductor is outside of a third region between the first region and the second region.The device of claim 14, wherein:forming the first body comprises forming a first hole which extends through the first body;the first conductor extends in the first hole; andthe third region is in the first hole. |
BACKGROUND1. Technical FieldThis disclosure generally relates to magnetic material based inductors and more particularly, but not exclusively, to passivation structures which facilitate the fabrication of an inductor.2. Background ArtConventional processors with integrated voltage regulation (IVR) schemes, such as FIVR (fully integrated voltage regulator), typically use package embedded air core inductors (ACIs). Fully integrated voltage regulators (FIVRs) enable the provisioning of power delivery characteristics which are specific to a particular domain. However, FIVR performance is often constrained by power efficiency issues, or is sensitive to the package dimension and process variation. With Moore's law scaling, the footprint available for inductors reduces every generation, leading to a decline in the quality factor (Q factor) of ACI inductors, increased IVR losses, and reduced efficiency.BRIEF DESCRIPTION OF THE DRAWINGSThe various embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:FIG. 1A illustrates a cross-section of package with a magnetic material based inductor, according to some embodiments of the disclosure.FIG. 1B illustrates a cross-section a magnetic material based inductor comprising a passivation material, according to some embodiments of the disclosure.FIG. 2 illustrates a method to provide passivation structures of a magnetic material based inductor according to an embodiment.FIGs. 3A, 3B illustrate a cross-section and a 3D view, respectively, of a magnetic material based inductor using PTH vias in accordance with some embodiments.FIGs. 4A-4H illustrate a process flow for fabricating a magnetic material based inductors with selective PTH wall plating, in accordance with some embodiments.FIG. 5 illustrates a top view of a package with a magnetic material based inductor compared with air core inductors, according to some embodiments.FIGs. 6A-6G illustrate cross-sectional views of an exemplary fabrication method for fabricating an inductor having an organic magnetic film embedded within a substrate, according to some embodiments.FIG. 7 is a functional block diagram illustrating a computing device in accordance with one embodiment.FIG. 8 is a functional block diagram illustrating an exemplary computer system, in accordance with one embodiment.DETAILED DESCRIPTIONEmbodiments discussed herein variously provide techniques and mechanisms for a passivation material to facilitate the fabrication of a magnetic material based inductor which comprises a magnetic based material in or on a substrate. In the following description, numerous details are discussed to provide a more thorough explanation of the embodiments of the present disclosure. It will be apparent to one skilled in the art, however, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure.Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate a greater number of constituent signal paths, and/or have arrows at one or more ends, to indicate a direction of information flow. Such indications are not intended to be limiting. 
Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme.Throughout the specification, and in the claims, the term "connected" means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus which comprises the device.The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term "scaling" generally also refers to downsizing layout and devices within the same technology node. The term "scaling" may also refer to adjusting (e.g., slowing down or speeding up - i.e. scaling down, or scaling up respectively) of a signal frequency relative to another parameter, for example, power supply level.The terms "substantially," "close," "approximately," "near," and "about," generally refer to being within +/- 10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal" and "approximately equal" mean that there is no more than incidental variation between among things so described. 
In the art, such variation is typically no more than +/-10% of a predetermined target value.It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.Unless otherwise specified the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object, merely indicate that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner.The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "over," "under," "front side," "back side," "top," "bottom," "over," "under," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.The term "between" may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices.As used throughout this description, and in the claims, a list of items joined by the term "at least one of' or "one or more of' can mean any combination of the listed terms. For example, the phrase "at least one of A, B or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. 
It is pointed out that those elements of a figure having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such.In addition, the various elements of combinatorial logic and sequential logic discussed in the present disclosure may pertain both to physical structures (such as AND gates, OR gates, or XOR gates), or to synthesized or otherwise optimized collections of devices implementing the logical structures that are Boolean equivalents of the logic under discussion.The technologies described herein may be implemented in one or more electronic devices. Non-limiting examples of electronic devices that may utilize the technologies described herein include any kind of mobile device and/or stationary device, such as cameras, cell phones, computer terminals, desktop computers, electronic readers, facsimile machines, kiosks, laptop computers, netbook computers, notebook computers, internet devices, payment terminals, personal digital assistants, media players and/or recorders, servers (e.g., blade server, rack mount server, combinations thereof, etc.), set-top boxes, smart phones, tablet personal computers, ultramobile personal computers, wired telephones, combinations thereof, and the like. More generally, the technologies described herein may be employed in any of a variety of electronic devices including an inductor comprising a magnetic material, a conductor, and a passivation material disposed therebetween.Some existing technologies integrate magnetic core materials in a package to improve inductance, flux and/or power delivery characteristics. For example, various iron alloy based magnetic core materials exhibit low magnetic loss and high permeability characteristics which make them attractive for many applications. Some embodiments variously improve on these existing technologies by mitigating a tendency of iron alloy fillers (for example) to pose material compatibility problems - e.g., with respect to wet chemistry manufacturing processes such as desmear, eless Cu seed, copper roughening and subtractive processes. Examples of such problems include a risk of leaching in an acid clean/eless Cu bath, an unacceptably high reactivity in a soft etching module, poor coverage and/or discontinuity of a conductive layer formed by eless copper (Cu) - or other - deposition, or the like.Certain features of various embodiments are described herein with reference to the providing of a passivation material between a magnetic material and a conductor, wherein the passivation material, magnetic material and conductor - respectively - comprise nickel (Ni), iron (Fe), and copper (Cu). However, with the benefit of the information provided herein, it is to be appreciated by one of ordinary skill in the art that such description can be extended to additionally or alternatively apply to any of various other suitable combinations of a passivation material, a magnetic material, and a conductor.FIG. 1A illustrates a cross-section of a packaged device 100 with one or more magnetic material based inductors, according to some embodiments. Device 100 illustrates one example of an embodiment wherein an inductor comprises a passivation layer which adjoins a conductor and a first body of a magnetic material - e.g., wherein the magnetic material comprises both a carrier material, and magnetic filler particles in said carrier material. 
In one such embodiment, packaged device 100 comprises a type of inductor - referred to herein as a plated through-hole (PTH) inductor - in which a conductor extends into a hole formed with a magnetic material. Additionally or alternatively, packaged device 100 comprises another type of inductor - referred to herein as a planar inductor - in which a conductor extends, along a surface of a magnetic material, at a side of a substrate layer which the magnetic material forms at least in part.In the example embodiment shown, packaged device 100 is, or otherwise includes, an IC (integrated circuit) package assembly comprising first die 101, package substrate 104, interposer 105, and circuit board 122. IC package device 100 illustrates a stacked die configuration wherein (in this example embodiment) first die 101 is coupled to package substrate 104, and second die 102 is coupled with first die 101. However, the particular arrangement of first die 101, second die 102, package substrate 104, interposer 105 and circuit board 122 relative to each other is merely illustrative, and not limiting on some embodiments. For example, various other embodiments omit some or all of first die 101, second die 102, or circuit board 122 - e.g., wherein one such embodiment is provided only with a substrate (such as substrate 104 or that of interposer 105) which has formed therein or thereon inductor structures described herein.In some embodiments, first die 101 has a first side S1 and a second side S2 opposite to the first side S1. In some embodiments, the first side S1 is the side of the die commonly referred to as the "inactive" or "back" side of the die. In some embodiments, the second side S2 includes one or more transistors, and is the side of the die commonly referred to as the "active" or "front" side of the die. In some embodiments, second side S2 of first die 101 includes one or more electrical routing features 106. In some embodiments, second die 102 includes an "active" or "front" side with one or more electrical routing features 606. In some embodiments, electrical routing features 106 are bond pads (e.g., formed from a combination of bumps and solder balls 103).In some embodiments, second die 102 is coupled to first die 101 in a front-to-back configuration (e.g., the "front" or "active" side of second die 102 is coupled to the "back" or "inactive" side S1 of first die 101). In some embodiments, dies are coupled with one another in a front-to-front, back-to-back, or side-to-side arrangement. In some embodiments, one or more additional dies are coupled with first die 101, second die 102, and/or with package substrate 104. Other embodiments lack second die 102. In some embodiments, first die 101 includes one or more TSVs (through-silicon-vias). In some embodiments, second die 102 is coupled to first die 101 by die interconnects formed from combination of bumps and solder balls 103. In some embodiments, solder balls 103 are formed using a solder-on-die (SOD) process.In some embodiments, inter-die interconnects are solder bumps, copper pillars, or other electrically conductive features. In some embodiments, an interface layer 124 is provided between first die 101 and second die 102. In some embodiments, interface layer 124 is, or includes, a layer of under-fill, adhesive, dielectric, or other material. 
In some embodiments, interface layer 124 serves various functions, such as providing mechanical strength, conductivity, heat dissipation, or adhesion.In some embodiments, first die 101 and second die 102 are single dies (e.g., first die 101 is a single die instead of multiple dies). In other embodiments, first die 101 and/or second die 102 includes two or more dies. For example, in some embodiments first die 101 and/or second die 102 are a wafer (or portion of a wafer) having two or more dies formed on it. In some embodiments, first die 101 and/or second die 102 includes two or more dies embedded in an encapsulant. In some embodiments, the two or more dies are arranged side-by-side, vertically stacked, or positioned in any other suitable arrangement. In some embodiments, the IC package assembly includes, for example, combinations of flip-chip and wire-bonding techniques, interposers, multi-chip package configurations including system-on-chip (SoC) and/or packageon-package (PoP) configurations to route electrical signals.In some embodiments, first die 101 and/or second die 102 are a primary logic die. In some embodiments, first die 101 and/or second die 102 are configured to function as memory, an application specific circuit (ASIC), a processor, or some combination of such functions. For example, first die 101 includes a processor and second die 102 includes memory. In some embodiments, one or both of first die 101 and second die 102 are embedded in encapsulant 108. In some embodiments, encapsulant 108 can be any suitable material, such as, liquid crystalline polymers, mold film, or ABF (Ajinomoto Build-up Film) substrate, other dielectric/organic materials, resins, epoxies, polymer adhesives, silicones, acrylics, polyimides, cyanate esters, thermoplastics, and/or thermosets.In some embodiments, first die 101 is coupled to package substrate 104 (e.g., CPU substrate). In some embodiments, package substrate 104 is a coreless substrate. For example, package substrate 104 is a bumpless build-up layer (BBUL) assembly that includes a plurality of "bumpless" build-up layers. Here, the term "bumpless build-up layers" generally refers to layers of substrate and components embedded therein without the use of solder or other attaching means that are considered "bumps."In some embodiments, the one or more build-up layers have material properties that are able to be altered and/or optimized for reliability, warpage reduction, etc. In some embodiments, package substrate 104 is composed of a polymer, ceramic, glass, or semiconductor material. In some embodiments, package substrate 104 is a conventional cored substrate and/or an interposer.In some embodiments, interposer 105 is provided between circuit board 122 and substrate 104. Interposer 105 of the various embodiments is formed of a variety of materials. For example, interposer 105 is formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some embodiments, interposer 105 is formed of alternate rigid or flexible materials, such as silicon, germanium, and other group III-V and group IV materials of the Periodic Table. In some embodiments, interposer 105 includes metal interconnects and vias including but not limited to TSVs. In some embodiments, interposer 105 includes embedded devices including both passive and active devices. 
Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, ESD (electrostatic discharge diode) devices, and memory devices. In some embodiments, interposer 105 includes complex devices such as RF devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices, etc. In some embodiments, package interconnects 112a couple electrical routing features 110a disposed on the second side of package substrate 104 to corresponding electrical routing features 116a on interposer 105.In some embodiments, circuit board (or motherboard) 122 is a PCB (printed circuit board) composed of an electrically insulating material such as an epoxy laminate. For example, circuit board 122 includes electrically insulating layers composed of materials such as, for example, polytetrafluoroethylene, phenolic cotton paper materials such as Flame Retardant 4 (FR-4), FR-1, cotton paper and epoxy materials such as CEM-1 or CEM-3, or woven glass materials that are laminated together using an epoxy resin prepreg material.Structures such as traces, trenches, and vias (which are not shown here) are formed through the electrically insulating layers to route the electrical signals of first die 101 through the circuit board 122. Circuit board 122 is composed of other suitable materials in other embodiments. In some embodiments, circuit board 122 includes other electrical devices coupled to the circuit board that are configured to route electrical signals to or from first die 101 through circuit board 122. In some embodiments, circuit board 122 is a motherboard.In some embodiments, a one side of interposer 105 is coupled to the second side of substrate 104 via routings 116a, 112a, and 110a. In some embodiments, another side of interposer 105 is coupled to circuit board 122 by package interconnects 110b, 112b, and 116b.In some embodiments, package substrate 104 provides electrical routing features formed therein to route electrical signals between first die 101 (and/or the second die 102) and circuit board 122 and/or other electrical components external to the IC package assembly. In some embodiments, package interconnects 112a/b and die interconnects 106 include any of a wide variety of suitable structures and/or materials including, for example, bumps, pillars or balls formed using metals, alloys, solderable material, or their combinations. In some embodiments, electrical routing features 110 are arranged in a ball grid array ("BGA") or other configuration.In some embodiments, a voltage regulator 120 (e.g., an integrated VR) is provided in first die 101 (or second die 102) which includes switching elements of the voltage regulator (e.g., high-side and low-side switches or bridges). In some embodiments, the relatively large low-loss switching elements placed in series with one or more inductors. In some embodiments, one or more PTH inductors and/or one or more planar inductors are fabricated in substrate 104, as shown by reference sign 118. Additionally or alternatively, one or more PTH inductors and/or one or more planar inductors are fabricated in interposer 105, as shown by reference sign 119.In some embodiments, a control circuit disposed in the die stack (e.g., in first die 101 or second die 102) monitors the current demand placed on the voltage regulator 120 by one or more power consumers (e.g., by one or more rails coupled to a processor core). 
As the load (e.g., the load current demand) presented by the power consumer(s) increases, the control circuit conductively couples an inductor module (e.g., one or more magnetic inductors) while the high load condition exists. As the load decreases, the control circuit decouples the one or more inductors from the inductor module, freeing the one or more inductors for use by another power consumer.In some embodiments, a power delivery system (e.g., for and in first and/or second dies 101/102) is provided that includes a plurality of power delivery circuits (e.g., power gates driven by voltage regulator 120), each of the circuits to supply a load current to a respective one of a plurality of conductively coupled loads (e.g., processor core, cache, graphics unit, memory, etc.); a plurality of magnetic inductor modules (e.g., 118/119), each of the plurality of inductor modules having a respective allowable current threshold, each of the plurality of inductor modules conductively coupled to a respective one of the power delivery circuits; and control circuitry to: receive information indicative of the load current supplied to at least one power delivery circuit; receive information indicative of the allowable current threshold of the at least one power delivery circuit; and determine whether the load current supplied by the at least one power delivery circuit exceeds the allowable current threshold for the inductor module conductively coupled to the at least one power delivery circuit.In various embodiments, structures in or on a substrate are to provide an inductor comprising a passivation material which is deposited - e.g., via an electroless (eless) process - to facilitate a subsequent deposition of a conductor along a surface of a magnetic material. In one such embodiment, said deposition is to passivate magnetic filler particles which are in a carrier material - e.g., wherein a deposition comprising nickel (Ni) passivates particles of an iron (Fe) alloy and/or any of various other filler materials which (for example) are adapted from conventional inductor designs.Immersion copper (Cu) deposition is one typical process to form a conductor on a magnetic material which (for example) comprises particles of an iron (Fe) filler. Such immersion processing tends to form copper sooner and/or more quickly on regions where the Fe filler is exposed, which poses issues of coverage uniformity and thickness control. Often, the reaction of such immersion Cu deposition is too fast to allow for control of uniformity and/or thickness. Additionally or alternatively, voids tend to be formed during volume change processes.To improve such deposition of copper (and/or any of various other suitable conductors), some embodiments variously provide a passivation (protective) material on exposed surfaces of magnetic filler particles. In one such embodiments, this passivation material promotes a more steady reaction rate during metallization which deposits the conductor, thus improving the ability to control conductor thickness and/or continuity.By way of illustration and not limitation, some embodiments provide an eless Ni plating layer on Fe filler particles to improve filler/eless Cu compatibility. In an embodiment, the eless Ni passivation is plated on exposed Fe fillers (e.g., only on said fillers due to the surface energy). 
Eless Ni coverage is facilitated, for example, by a selected chemistry of one or more precleaning Ni coating chemicals, of an eless Ni bath - e.g., with, or alternatively, without sulfur - to provide a desired Ni plating thickness. In some embodiments, eless Ni passivation mitigates the risk of Fe leaching and gas generation in subsequent processing such as an eless Cu bath, etching with bis-(3-sodiumsulfopropyl) disulfide (SPS), and/or an acid rinse bath with H2SO4. The eless Ni layer promotes adhesion of Ni to Fe fillers and eless Ni to eless Cu.Some embodiments variously improve the stability and compatibility of Fe alloy (or other) magnetic materials in manufacturing processes by providing a protection eless Ni (or other suitable) layer with an auto catalytic reaction. In one such embodiment, a magnetic property of the eless Ni layer is determined (for example) by controlling a relative composition of nickel (Ni) with one or more other constituents in an eless bath. By way of illustration and not limitation, a Ni-P ratio of such an eless bath is selectively provided in some embodiments to mitigate skin effect loss.For example, FIG. 1B shows a cross-sectional side view of a portion of an inductor 150 which, in various embodiments, is provided in a substrate such as one at device 100. Inductor 150 illustrates a PTH inductor which includes a passivation material according to one embodiment. As described elsewhere herein, some embodiments additionally or alternatively provide a planar inductor comprising said passivation material.In the example embodiment shown, inductor 150 comprises a body 160 (i.e., a contiguous mass) in a substrate, such as that of interposer 105, or such as package substrate 104, for example. Body 160 exhibits magnetic characteristics which facilitate operation of inductor 150 - e.g., wherein body 160 comprises a carrier material 161, and magnetic filler particles 162 which are in carrier material 161. In one such embodiment, carrier material 161 comprises an epoxy, a rubber, a ceramic, a polymer resin - e.g., comprising anhydride modified polyethylene (AMP) - and/or any of various other materials which are suitable to support a distribution of filler particles 162 in body 160.In various embodiments, filler particles 162 exhibit magnetic properties and (for example) comprise one of a paramagnet or a ferromagnet. In one such embodiment, filler particles 162 comprise one of iron, nickel, zinc, or silicon. By way of illustration and not limitation, filler particles 162 comprises any of various Nickel-Zinc (Ni-Zn) alloys, permalloy materials, silicon (Si) steels, ferrites, amorphous alloys, iron (Fe) fillers - including an iron (Fe) alloy - and/or derivatives thereof. In some embodiments, filler particles 162 comprises a magnetic material including one or more of: Pt, Pd, W, Ce, Al, Li, Mg, Na, Cr2O3, CoO, Dy, Dy2O, Er, Er2O3, Eu, Eu2O3, Gd, Gd2O3, FeO, Fe2O3, Nd, Nd2O3, KO2, Pr, Sm, Sm2O3, Tb, Tb2O3, Tm, Tm2O3, V, V2O3. In various embodiments, filler particles 162 comprises a magnetic alloy formed (for example) of one or more of: Pt, Pd, W, Ce, Al, Li, Mg, Na, Cr, Co, Dy, Er, Eu, Gd, Fe, Nd, K, Pr, Sm, Tb, Tm, or V. 
In some embodiments, filler particles 162 exhibit non-insulating but magnetic properties - e.g., wherein filler particles 162 include one or more of: Heusler alloy, Co, Fe, Ni, Gd, B, Ge, Ga, permalloy, or Yttrium Iron Garnet (YIG), and wherein the Heusler alloy is a material which includes one or more of: Cu, Mn, Al, In, Sn, Ni, Sb, Ga, Co, Fe, Si, Pd, Sb, V, Ru, Cu2MnAl, Cu2MnIn, Cu2MnSn, Ni2MnAl, Ni2MnIn, Ni2MnSn, Ni2MnSb, Ni2MnGa Co2MnAl, Co2MnSi, Co2MnGa, Co2MnGe, Pd2MnAl, Pd2MnIn, Pd2MnSn, Pd2MnSb, Co2FeSi, Co2FeAl, Fe2VAl, Mn2VGa, Co2FeGe, MnGa, MnGaRu, or Mn3X, where 'X' is one of Ga or Ge.A hole 190 extends in body 160 and, for example, through one or more layers of the substrate. In one such embodiment, various ones of filler particles 162 each extend to hole 190 - e.g., wherein carrier material 161 leaves the various particles at least partially exposed. Inductor 150 further comprises a conductor 180 which extends along a surface formed by the hole 190 in body 160. By way of illustration and not limitation, conductor 180 comprises any of various suitable metals including one or more of copper (Cu), silver (Ag), gold (Au), nickel (Ni), tin (Sn), iron (Fe) and/or any of various alloys or other derivatives thereof. During operation of inductor 150, conductor 180 carries an electrical current to generate a magnetic flux with body 160.In some embodiments, inductor 150 further comprises a passivation material 170 - e.g., deposited by an eless process - which facilitates the formation of conductor 180 in hole 190. More particularly, passivation material 170 provides a material composition - different than that of conductor 180 - which (for example) promotes a relatively steady rate of reaction as a metal of conductor 180 is formed in hole 190. In various embodiments, passivation material 170 comprises one of nickel (Ni), tin (Sn), copper (Cu), palladium (Pd), or gold (Au). In one such embodiment, passivation material 170 further comprises another constituent, such as phosphorous (P), which is provided in a suitable proportion to mitigate skin effect loss.In various embodiments, passivation material 170 additionally or alternatively comprises any of various suitable inorganic nitrides including, but not limited to, titanium nitride (TiN), silicon nitride (Si3N4), or the like - e.g., wherein passivation material 170 comprises nitrogen (N), and one of titanium (Ti), or silicon (Si). Additionally or alternatively, passivation material 170 comprises any of a variety of suitable metal oxides such as aluminum oxide (Al2O3). In some embodiments, passivation material 170 additionally or alternatively comprises any of various suitable polymers, for example.Some embodiments additionally or alternatively provide improved performance with one or more cleaner chemicals and/or activator chemicals to remove Fe oxide (for example) and/or any of various other inorganic or organic residues. By way of illustration and not limitation, some embodiments variously enhance Ni chemistry wettability on Fe particles by exposing them to an alkaline based cleaner and a hydrochloric acid (HCl) which, for example, has concentration in a range of 30%-40%.In various embodiments, nucleation of passivation material 170 begins to form at the respective exposed surfaces of one or more of filler particles 162. The eless passivation (e.g., comprising Ni) enables controlled deposition on exposed Fe filler surfaces with relatively low plating time. 
In some embodiments, forming a thicker film with longer plating time provides continuous coverage of a passivation layer across a surface of a magnetic material - e.g., due to a direction of an isotropic eless Ni film growth. In one such embodiment, a thickness of a passivation layer is in a range of 0.5 microns (µm) to 10 µm - e.g., in a range of 0.75 µm to 5 µm (and, in some embodiments, in a range of 1 µm to 3 µm).An eless Ni layer according to some embodiments provides strong adhesion (for example) of Ni to Fe fillers, and of eless Ni to eless Cu. Very uniform coverage is provided in some embodiments, for example, with high aspect ratio plated through hole (PTH) structures of an inductor. Embodiments support the tuning of magnetic property of a passivation layer by providing nickel (Ni) and phosphorous (P) in a ratio which mitigates skin effect loss.In the example embodiment shown, passivation material 170 forms in hole 190 a continuous layer which extends both over some of filler particles 162 and over portions of carrier material 161. However, in other embodiments, passivation material 170 forms regions which are non-contiguous with each other - e.g., wherein passivation material 170 is deposited on surfaces of filler particles 162, but where at least some other surface regions of carrier material 161 are in direct contact with conductor 180. Additionally or alternatively, other portions of passivation material 170 (not shown) adjoin surfaces of filler particles 162 which are outside of hole 190 - e.g., wherein one or more such portions variously extend horizontally (along the x-axis shown) over a top side of body 160, or extend horizontally under a bottom side of body 160.In one example embodiment, for each of the two illustrative regions r1, r2 shown, a different respective portion of passivation material 170 is between body 160 and corresponding portion of conductor 180. Although some embodiments are not limited in this regard, another region r3 is between regions r1, r2 in the cross-sectional plane shown, wherein any portion of conductor 180 is outside of said region r3 (e.g., wherein region r3 is within hole 190).In some embodiments, passivation material 170 is formed by atomic layer deposition (ALD) which, for example, provides in hole 190 a thin (e.g., in a range of 0.5 µm to 10 µm), high aspect ratio deposition of an organic isolation film or metallic seed. In alternative embodiments, a passivation material is formed by physical vapor deposition (PVD) to deposit a seed layer on a planar surface of a magnetic material which comprises particles similar to filler particles 162.FIG. 2 illustrates features of a method 200 to provide structures of an inductor in or on a substrate according to an embodiment. Method 200 is one example of an embodiment wherein a passivation material is provided between a conductor and filler particles of a magnetic material. In some embodiments, method 200 provides structures of device 100 and/or inductor 150, for example.As shown in FIG. 2 , method 200 comprises (at 210) forming, in a substrate, a body comprising a carrier material and magnetic filler particles. By way of illustration and not limitation, the magnetic filler particles comprise one of iron, nickel, zinc, or silicon - e.g., wherein the carrier material comprises one of an epoxy, a polymer resin, a rubber, or a ceramic. 
In one example embodiment, the forming at 210 comprises performing a lamination or other suitable deposition process to provide body 160 of inductor 150.Method 200 further comprises (at 212) depositing a passivation material - such as passivation material 170 - on surfaces of the magnetic filler particles. In some embodiments, the passivation material comprises one of nickel, tin, copper, palladium, or gold - e.g., wherein the depositing at 212 comprises an eless deposition process. In other embodiments, the passivation material comprises one of an inorganic nitride, a metal oxide, or a polymer.After the depositing at 212, method 200 (at 214) forms a conductor - such as conductor 180 - which extends along a surface of the body. For example, the forming at 214 comprises an immersion deposition of copper (Cu) or other suitable conductive material. The conductor and the passivation material comprise different respective material compositions - e.g., wherein the conductor comprises one or more of copper (Cu), silver (Ag), gold (Au), nickel (Ni), tin (Sn), or iron (Fe). In various embodiments, after the forming at 214, at least a portion of the passivation material is adjacent to both the conductor and the magnetic filler particles of the body. In one such embodiments, the passivation material forms a layer which further extends to adjoin portions of the carrier material.Method 200 further comprises (at 216) coupling a first terminal and a second terminal to the conductor to provide an inductor. For example, one or more additional metallization processes are performed to form pins, pads, vias, pillars and/or other suitable contact structures in or on the substrate - e.g., wherein such contact structures facilitate coupling of the inductor, directly or indirectly, to a current source and to a current sink.In various embodiments, in a first region of the inductor, a first portion of the passivation material is between the body and a second portion of the conductor. In one such embodiment, in a second region of the inductor, a third portion the passivation material is between the body a fourth portion of the conductor - e.g., wherein any portion of the conductor is outside of a third region located between the first region and the second region.By way of illustration and not limitation, method 200 is to provide an inductor comprising a PTH via, in some embodiments. For example, forming the body at 210 comprises forming a hole which extends through the body - e.g., wherein the conductor extends in the hole, and the third region is in the hole. In one such embodiment, the conductor is a first conductor of a first PTH via - e.g., wherein method 200 further comprises additional operations (not shown) to similarly form a second PTH via of the inductor. Such additional operations comprise (for example) forming in the substrate a second body which comprises the carrier material and second magnetic filler particles, and depositing a second passivation material on surfaces of the second magnetic filler particles. After depositing the second passivation material, the additional operations form a second conductor which extends along a surface of the second body - e.g., wherein the second passivation material is adjacent to the second conductor and to the second magnetic filler particles. 
In one such embodiment, the coupling at 216 comprises coupling the first conductor and the second conductor in series with each other between the first terminal and the second terminal.In some embodiments, method 200 is to additionally or alternatively provide a planar inductor -- e.g., wherein the body forms at least in part a side of a layer of the substrate. In one such embodiment, the conductor is on said side of the substrate layer (and, for example, forms one or more serpentine structures on said side).FIG. 3A illustrates a cross-sectional view of a device 300 comprising an inductor which includes plate-through-hole (PTH) vias according to an embodiment. FIG. 3B illustrates a three-dimensional (3D) view 320 of device 300. More particularly, FIG. 3A illustrates a cross-section along the line A-A' shown in view 320. In some embodiments, device 300 includes features of inductor 150, and/or features of an inductor indicated by one of reference numbers 118, 119 - e.g., wherein some or all structures of device 300 are provided by operations of method 200.In this example, a five-layer substrate is used to fabricate a PTH inductor. The first layer 301 shown is, for example, a second backside of core (2BCO) layer - e.g., wherein the second layer 302 shown is a first backside of core (1BCI) layer, the third layer 303 is the core layer, the fourth layer is the first front side of core (1FCI) layer, and the fifth layer 305 is the second front side of core (2FCO) layer. The general label for conductors or non-magnetic conducting material is 306, the general label for lamination layer is 307, the general label for a dielectric or substrate is 308, the general label for passivation structures is 315, and the general label for magnetic material is 341.In various example embodiments, the conducting material 306 includes one or more of: Cu, Al, Au, Ag, Co, Graphene, or W. Although some embodiments are not limited in this regard, layer 307 is a lamination layer to protect the structural integrity of the PTH inductor and to facilitate conducting material plating on its surface. In various embodiments, layer 307 is a thermoplastic and/or thermosetting polymer. For example, composite epoxies, liquid crystalline polymers, polyimide, mold film, or ABF (Ajinomoto Build-up Film) can be used for layer 307. Other similar lamination materials can be used. Substrate or dielectric 308 can be any material commonly used in an integrated circuit package. For example, organic or inorganic material can be used for substrate 308. Examples of substrate 308 include FR4 (e.g., epoxy based laminate), bismaleimide-triaxine, polyimide, silicon, etc.In some embodiments, PTH vias 309, 319 are formed through substrate 308. Although some embodiments are not limited in this regard, PTH vias 309, 319 in this example are filled with substrate material within respective ones of conductors 306c, 306e. Conductors 306c, 306e extend along the z-axis (which is also the width of the cross-section). The PTH vias 309, 319 are coupled together by conductors 306d which are orthogonal (e.g., perpendicular) to conductors 306c, 306e. Conductors 306d are variously formed each in a respective one of layers 301, 302. The two conductive terminals of the PTH inductor are 306a and 306b, wherein conductors 306c, 306e are coupled in series between terminals 306a, 306b. In some embodiments, conducting terminal 306a is for coupling to one or more transistors (e.g., high-side and low-side switches or bridge). 
In some embodiments, conducting terminal 306b is for coupled to a capacitor (e.g., capacitor for a regulator). The arrows in the conducting layers 306 show the direction of currents, according to one example.In the example embodiment show, an inductor structure 310 of device 300 comprises a first portion of magnetic material 341 and conductor 306c, which extends in a first hole formed at least partially through the first portion of magnetic material 341. Inductor structure 310 further comprises a portion of lamination layer 307 which is disposed between, and adjoins, each of conductor 306c and the first portion of magnetic material 341. In one such embodiment, magnetic material 341, conductor 306c, and lamination layer 307 correspond functionally to - and include features of - body 160, conductor 180, and passivation material 170 (respectively). Lamination layer 307 is formed on a surface of magnetic material 341 (e.g., via an eless deposition process) to facilitate a plating of PTH via 309 to form conductor 306c.Additionally or alternatively, an inductor structure 318 of device 300 comprises a second portion of magnetic material 341 and conductor 306e, which extends in a second hole formed at least partially through the second portion of magnetic material 341. Inductor structure 318 further comprises another portion of lamination layer 307 which is disposed between, and adjoins, each of conductor 306e and the second portion of magnetic material 341. In one such embodiment, magnetic material 341, conductor 306e, and lamination layer 307 correspond functionally to body 160, conductor 180, and passivation material 170 (respectively).FIGs. 4A through 4H illustrate respective cross-sectional side views 400 through 407 each corresponding to a respective stage of a process for fabricating a magnetic material based inductor with a passivation material, in accordance with some embodiments. For example, fabrication processing such as that illustrated by views 400 through 407 is to provide features of inductor 150, device 300, or an inductor indicated by one of reference numbers 118, 119 - e.g., wherein some or all such processing is according to method 200.View 400 illustrates a substrate 410 (e.g., core of a multi-layer package) with conductive material 412 deposited on its top and bottom surfaces. A person skilled in the art would appreciate that many different mechanisms can be used to deposit conductive layers 412 below and above substrate 410. View 401 illustrates the case after drilling forms holes or trenches, into which are deposited respective portions of a high permeability magnetic material 411. Any suitable drilling technique can be used to form such holes or trenches. In this example two holes are formed. However, any number of holes may be drilled according to the number of desired inductor structures.In some embodiments, material 411 comprises a carrier material, and magnetic filler particles which are disposed in said carrier material. In one such embodiment, the carrier material comprises an epoxy, a rubber, a ceramic, a polymer resin and/or any of various other suitable materials. In various embodiments, filler particles of material 411 exhibit magnetic properties and (for example) comprise one of a paramagnet or a ferromagnet. In one such embodiment, the filler particles comprise one of iron, nickel, zinc, or silicon. 
By way of illustration and not limitation, such filler particles comprises any of various Nickel-Zinc (Ni-Zn) alloys, permalloy materials, silicon (Si) steels, ferrites, amorphous alloys, iron (Fe) fillers - including an iron (Fe) alloy - and/or derivatives thereof. In some embodiments, magnetic material 411 comprises one of a paramagnet or a ferromagnet, and includes one or more of: Pt, Pd, W, Ce, Al, Li, Mg, Na, Cr2O3, CoO, Dy, Dy2O, Er, Er2O3, Eu, Eu2O3, Gd, Gd2O3, FeO, Fe2O3, Nd, Nd2O3, KO2, Pr, Sm, Sm2O3, Tb, Tb2O3, Tm, Tm2O3, V, V2O3 or epoxy material with particles of a magnetic alloy. A magnetic alloy can be an alloy formed of one or more of: Pt, Pd, W, Ce, Al, Li, Mg, Na, Cr, Co, Dy, Er, Eu, Gd, Fe, Nd, K, Pr, Sm, Tb, Tm, or V. In some embodiments, material 411 exhibit non-insulating but magnetic properties, and wherein the material includes one or more of: Heusler alloy, Co, Fe, Ni, Gd, B, Ge, Ga, permalloy, or Yttrium Iron Garnet (YIG), and wherein the Heusler alloy is a material which includes one or more of: Cu, Mn, Al, In, Sn, Ni, Sb, Ga, Co, Fe, Si, Pd, Sb, V, Ru, Cu2MnAl, Cu2MnIn, Cu2MnSn, Ni2MnAl, Ni2MnIn, Ni2MnSn, Ni2MnSb, Ni2MnGa Co2MnAl, Co2MnSi, Co2MnGa, Co2MnGe, Pd2MnAl, Pd2MnIn, Pd2MnSn, Pd2MnSb, Co2FeSi, Co2FeAl, Fe2VAl, Mn2VGa, Co2FeGe, MnGa, MnGaRu, or Mn3X, where 'X' is one of Ga or Ge.View 402 illustrates the formation of layers 414 of a passivation material on respective top and bottom surfaces of the plugs of magnetic material 411 - e.g., wherein layers 414 are formed by eless deposition through a patterned mask (not shown). The passivation material of layers 414 facilitates subsequent deposition of one or more conductive materials along a surface of a magnetic structure formed from magnetic material 411 - e.g., wherein said passivation material comprises one of nickel (Ni), tin (Sn), copper (Cu), palladium (Pd), or gold (Au). In one such embodiment, the passivation material further comprises another constituent, such as phosphorous (P), which is provided - e.g., in a suitable proportion relative to nickel (Ni) - to mitigate skin effect loss.For example, view 403 illustrates a stage after drilling or other subtractive processing is performed to provide holes 417 though the layers of passivation material 414, and through the portions of magnetic material 411, thereby forming passivation structures 418, 419 from the layers of passivation material 414, as well as forming magnetic structures 415, 416 from the magnetic material 411. In various embodiments, an amount of inductance to be provided by the resulting device depends on the thickness of the magnetic structures 415, 416 after holes 417 are formed.View 404 illustrates the formation of passivation structures 420, 421 which each adjoin a respective one of magnetic structures 415, 416. In some embodiments, passivation structures 420, 421 are formed by an additional eless (or other) depositing of the passivation material - e.g., through a patterned mask (not shown) - into holes 417 and onto both passivation structures 418, 419 and exposed portions of magnetic structures 415, 416. For example, the passivation material is deposited onto the exposed surfaces of filler particles at the sides of magnetic structures 415, 416 which adjoin holes 417.View 405 illustrates a process stage after passivation structures 420, 421 (and, in some embodiments, portions of magnetic structures 415, 416 which remain exposed in holes 417) are plated with a conductive material 422. 
In view 405, passivation structures 420, 421 are variously sandwiched each between conductive material 422 and a respective one of magnetic structures 415, 416.View 406 illustrates a process stage after a dielectric 424 is deposited, through a patterned mask (not shown), into remaining portions of holes 417. Subsequently, as illustrated in view 407, additional metallization and patterning is performed to provide a PTH conductor 426 which adjoins passivation structure 420. Such metallization and patterning further provides conductors 428 which (for example) are to function as respective terminals for coupling a first inductor structure which includes magnetic structure 415, passivation structure 420, and conductive structure 426. Alternatively or in addition, such metallization and patterning provides a PTH conductor 427 which adjoins passivation structure 421. Such metallization and patterning further provides conductors 429 which (for example) are to function as respective terminals for coupling a second inductor structure which includes magnetic structure 416, passivation structure 421, and conductive structure 427.FIG. 5 illustrates a top view of a package 500 with coaxial magnetic material based inductors 502a according to some embodiments. Coaxial magnetic material based inductors 502a are in an area 502 of package 500 which is shown in comparison to another area 501 which, for example, would alternatively accommodate the illustrative air core inductors (ACIs) 502b shown. In various embodiments, coaxial magnetic material based inductors 502a are much smaller than ACIs 502b. As such, in this example, 10 coaxial magnetic material based inductors can be packed in an area 502 of package 500, as compared to a larger area 501 being able to alternatively accommodate just 8 inductors loops of ACIs 502b. In the example embodiments shown, area 501 is about 4 times larger than area 502. Accordingly, 40 coaxial magnetic material based inductors can be fit into area 501, for example. The coaxial magnetic material based inductors 502a allow for implementing high performance and smaller integrated voltage regulators. In various embodiments, package 500 includes features of device 100 - e.g., wherein one or more of coaxial magnetic material based inductors 502a are provided according to method 200.FIGS. 6A-6F show cross-sectional side views of respective stages 600-605 during an exemplary process for fabricating a planar inductor structure with a passivation material according to another embodiments. For example, fabrication processing such as that comprising stages 600-605 includes features of method 200 - e.g., wherein such processing is to provide one of magnetic inductor 118, or magnetic inductor module 119.In the stage 600 shown in FIG. 6A , a substrate 610 (e.g.., that of interposer 105, or package substrate 104) is received in a partially completed state. In the illustrated embodiment, substrate 610 is received with metallization structures formed in previous operations, which are not shown and are not limiting on some embodiments. By way of illustration and not limitation, a metallization layer of substrate 610 comprises a conductor 612, at least a portion of which is exposed by a recess 611 that is drilled, etched or otherwise formed in a side 613 of substrate 610. 
For simplicity, only one such metallization layer is shown at stage 600, however, it is understood that in some embodiments, substrate 610 further comprises any of various combinations of one or more dielectric layers, one or more other metallization layers, a core and/or other structures which, for example, are adapted from conventional substrate designs.At the stage 601 shown in FIG. 6B , a magnetic material 615 is laminated or otherwise deposited in recess 611 - e.g., wherein magnetic material 615 is formed on conductor 612, and/or wherein a top surface of magnetic material 615 forms at least a portion of side 613. In one such embodiment, magnetic material 615 has features of magnetic structure 160, magnetic material 341, magnetic structure 415, or magnetic structure 416 - e.g., wherein magnetic material 615 comprises a carrier material and filler particles distributed in said carrier material.At the stage 602 shown in FIG. 6C , a passivation material 620 is deposited on magnetic material 615 to facilitate later metal deposition processing - e.g., wherein passivation material 620 corresponds functionally to magnetic structure 160. In one example embodiment, passivation material 620 is deposited by an eless process through a patterned mask 622 - e.g., wherein passivation material 620 is deposited at least on exposed filler particles of magnetic material 615.For example, at the stage 603 shown in FIG. 6D , subsequent patterned metallization results in the formation of a conductor 630 which extends along side 613 and adjoins passivation material 620. In the example embodiment shown, conductor 630 forms one or more bends, curves and/or other serpentine structures along the side 613 (as illustrated in by the top-side view shown in FIG. 6G ). Conductor 630 has a material composition different than that of conductor 612 - e.g., wherein conductor 630 has features of conductor 180, conductor 306c, conductor 306e, conductive structure 426, or conductive structure 427.At the stage 604 shown in FIG. 6E - after formation of conductor 630 on passivation material 620 - an additional body 640 of magnetic material is deposited on conductor 630. Subsequently, at the stage 605 shown in FIG. 6F , another portion 650 of the passivation material is deposited on body 640 - e.g., wherein portion 650 facilitates subsequent metallization processing (not shown) for electrical coupling of the inductor structures shown.FIG. 6G shows a cross-sectional top view of a planar inductor 606 which is formed by the processing illustrated by stages 600-605. As variously shown in FIG. 6F , the conductor 630 of inductor 606 forms serpentine structures which repeatedly intersect the cross-sectional plane represented (for example) in FIG. 6G . In one such embodiment, conductor 630 extends between two terminals 632 which facilitate coupling of the planar inductor 606 to other circuitry - e.g., including circuitry of IVR 120 or other such circuitry of device 100.As shown by FIG. 6G , in the region r1, a first portion of passivation material 620 is between the body of magnetic material 615 and a second portion of conductor 630 - e.g., where, in a region r2, a third portion passivation material 620 is between the body of magnetic material 615 and a fourth portion of conductor 630. However, any portion of conductor 630 is outside of a region r3 between region r1 and region r2. 
Furthermore, in a region r5, a fifth portion of passivation material 620 is between the body of magnetic material 615 and a sixth portion of conductor 630, wherein region r2 is between region r3 and region r5. However, any portion of conductor 630 is outside of a region r4 between region r2 and region r5.FIG. 7 illustrates a computing device 700 in accordance with one embodiment. The computing device 700 houses a board 702. The board 702 may include a number of components, including but not limited to a processor 704 and at least one communication chip 706. The processor 704 is physically and electrically coupled to the board 702. In some implementations the at least one communication chip 706 is also physically and electrically coupled to the board 702. In further implementations, the communication chip 706 is part of the processor 704.Depending on its applications, computing device 700 may include other components that may or may not be physically and electrically coupled to the board 702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).The communication chip 706 enables wireless communications for the transfer of data to and from the computing device 700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 706 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 700 may include a plurality of communication chips 706. For instance, a first communication chip 706 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 706 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.The processor 704 of the computing device 700 includes an integrated circuit die packaged within the processor 704. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. 
The communication chip 706 also includes an integrated circuit die packaged within the communication chip 706.In various implementations, the computing device 700 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 700 may be any other electronic device that processes data.Some embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to an embodiment. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (electrical, optical, acoustical or other form of propagated signals (e.g., infrared signals, digital signals, etc.)), etc.FIG. 8 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 800 within which a set of instructions, for causing the machine to perform any one or more of the methodologies described herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.The exemplary computer system 800 includes a processor 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 818 (e.g., a data storage device), which communicate with each other via a bus 830.Processor 802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. 
More particularly, the processor 802 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 802 is configured to execute the processing logic 826 for performing the operations described herein.The computer system 800 may further include a network interface device 808. The computer system 800 also may include a video display unit 810 (e.g., a liquid crystal display (LCD), a light emitting diode display (LED), or a cathode ray tube (CRT)), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), and a signal generation device 816 (e.g., a speaker).The secondary memory 818 may include a machine-accessible storage medium (or more specifically a computer-readable storage medium) 832 on which is stored one or more sets of instructions (e.g., software 822) embodying any one or more of the methodologies or functions described herein. The software 822 may also reside, completely or at least partially, within the main memory 804 and/or within the processor 802 during execution thereof by the computer system 800, the main memory 804 and the processor 802 also constituting machine-readable storage media. The software 822 may further be transmitted or received over a network 820 via the network interface device 808.While the machine-accessible storage medium 832 is shown in an exemplary embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any of one or more embodiments. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.Techniques and architectures for providing structures of an inductor are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. 
These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the discussion herein, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. In addition, certain embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow. |
Some embodiments include apparatus and methods having a memory device with diodes coupled to memory elements. Each diode may be formed in a recess of the memory device. The recess may have a polygonal sidewall. The diode may include a first material of a first conductivity type (e.g., n-type) and a second material of a second conductive type (e.g., p-type) formed within the recess. |
What is claimed is: 1. A memory device comprising: a recess including a polygonal sidewall; a diode including a first material of a first conductivity type formed within the recess and a second material of a second conductive type formed within the recess; and a memory element coupled to the diode. 2. The memory device of claim 1, wherein one of first and second materials includes a single crystalline silicon. 3. The memory device of claim 1, wherein the memory element includes a material located in the recess. 4. The memory device of claim 1, wherein the memory element includes a chalcogenide material. 5. The memory device of claim 1, wherein the diode and the memory element are coupled in series between a first conductive line and a second conductive line, and wherein the first and second conductive lines are perpendicular to each other. 6. The memory device of claim 1, wherein the polygonal sidewall includes: a first sidewall portion; a second sidewall portion perpendicular to the first sidewall portion; a third sidewall portion perpendicular to the second sidewall portion; and a fourth sidewall portion perpendicular to the third sidewall portion. 7. The memory device of claim 6, wherein the first, second, third, and fourth sidewall portions include the same material. 8. The memory device of claim 6, wherein the first and third sidewall portions include a first insulation material and the second and fourth sidewall portions include a second insulation material. 9. The memory device of claim 8, wherein the first material includes silicon oxide, and the second material includes silicon nitride. 10. An apparatus comprising: diodes arranged in rows and columns, each of the diodes include a first material of a first conductivity type and a second material of a second conductivity type; first trenches filled with a first insulation material, each of the first trenches being located between two of the rows; and second trenches filled with a second insulation material, each of the second trenches being located between two of the columns, wherein the first and second materials of at least one of the diodes contact the first and second insulation materials. 11. The apparatus of claim 10, wherein the diodes include a group of diodes arranged in one of the rows, and wherein each diode in the group of diodes includes a first diode terminal coupled to a first conductive line and a second diode terminal coupled to a second conductive line. 12. The apparatus of claim 11, wherein the first conductive line is perpendicular to the second conductive line. 13. The apparatus of claim 11 further comprising memory elements, each of the memory elements coupled between the first conductive line and one diode of the group of diodes. 14. The apparatus of claim 13, wherein the memory elements include a chalcogenide material. 15. The apparatus of claim 10, wherein a thickness of the first insulation material is greater than a thickness of the second insulation material. 16. The apparatus of claim 10, wherein at least one of the second trenches includes a bottom coupled to a conductive material. 17. The apparatus of claim 16, wherein the conductive material includes a combination of cobalt and silicon. 18. 
A method comprising: applying a signal to a conductive line of a memory device to access a memory element of a memory cell of the memory device, the memory device including: a recess including a polygonal sidewall; and a diode coupled between the conductive line and the memory element, the diode including a first material of a first conductivity type formed within the recess and a second material of a second conductive type formed within the recess. 19. The method of claim 18, wherein the signal is applied during a read operation of the memory device. 20. The method of claim 18, wherein the signal is applied during a write operation of the memory device. 21. A method comprising: forming rows and columns of recesses, each of the recesses having a polygonal opening and being surrounded by at least one insulation material; forming diodes in the recesses; and forming memory elements such that each of the memory elements is coupled to one of the diodes. 22. The method of claim 21, wherein forming the diodes includes growing an epitaxial silicon over a material at a bottom of each of the recesses. 23. The method of claim 22, wherein forming the memory elements includes depositing a chalcogenide material over the diodes. 24. The method of claim 21, wherein forming the diodes includes forming a first material of a first conductivity type and forming a second material of a second conductivity type over the first material. 25. The method of claim 24, wherein forming the second material includes implanting impurities of p-type into the second material. 26. The method of claim 24, wherein forming the diodes includes forming a third material over the second material, the third material has a resistivity lower than a resistivity of the second material. 27. The method of claim 21, wherein each of the recesses includes a sidewall having sidewall portions with different insulation materials. 28. A method comprising: forming device structures over a substrate, the device structures insulated from each other by a first insulation material, each of the device structures including a width and a length, the length extending in a first direction; removing a portion of the device structures to form trenches extending in a second direction perpendicular to the first direction; forming a second insulation material in the trenches; removing a first material from the device structures to expose a second material of the device structures; and forming diodes over the second material. 29. The method of claim 28, wherein forming the device structures includes: forming the second material over the substrate and forming the first material over the second material; forming a first masking structure over the first material and the second material, the first masking structure including first openings, each of the first openings having a width and a length, the length extending in the first direction; and removing a portion of the first material at the first openings and a portion of the second material at the first openings such that an unremoved portion of the first material and an unremoved portion of the second material form at least a part of the device structures. 30. 
The method of claim 29, wherein removing the portion of the device structures to form the trenches includes: forming a second masking structure over the device structures, the second masking structure including second openings, each of the second openings having a width and a length, the length extending in the second direction; and removing the first material at the second openings to form the trenches. 31. The method of claim 28, wherein forming diodes includes: forming an epitaxial silicon over the second material to form a first portion of each of the diodes; and inserting impurities into the epitaxial silicon to form a second portion of each of the diode. 32. The method of claim 28, wherein the first material includes insulation material and the second material includes semiconductor material. 33. The method of claim 28, wherein forming the device structures includes: forming the second material over the substrate, forming a third material over the second material, and forming the first material over the third material;forming a first masking structure over the first material, the first masking structure including first openings, each of the first openings having a width and a length, the length extending in the first direction; and removing a portion of the first material at the first openings, a portion of the second material at the first openings, and a portion of the third material at the first openings such that an unremoved portion of the first material and an unremoved portion of the second material, and an unremoved portion of the third material form at least a part of the device structures. 34. The method of claim 33, wherein removing the portion of the device structures to form the trenches includes: forming a second masking structure over the device structures, the second masking structure including second openings, each of the second openings having a width and a length, the length extending in the second direction; and removing the first material at the second openings to form the trenches and leaving at least a portion of the third material in the trenches. 35. The method of claim 34, wherein the first material includes insulation material, the second material includes semiconductor material, and the third material includes conductive material. 36. The method of claim 35, wherein the third material includes a combination of nickel and silicon. 37. The method of claim 29 further comprising: forming memory elements over the diodes. 38. The method of claim 37, wherein forming memory elements includes depositing chalcogenide material over the diodes. 39. A method comprising: forming device structures over a substrate, the device structures insulated from each other by an insulation material, each of the device structures including a width and a length, the length extending in a first direction; forming a masking structure over the device structures, the masking structure including openings such that a first portion of a first material of the device structures are exposed at the openings and a second portion of the first material is underneath the masking structure, each of the openings having a width and a length, the length extending in a second direction perpendicular to the first direction; removing a first portion of a first material at the openings from the device structures to expose a first portion of a second material of the device structures; and forming diodes over the first portion of the second material. 40. 
The method of claim 39, wherein forming the device structures includes: forming the second material over the substrate and forming the first material over the second material; forming an additional masking structure over the first material and the second material, the additional masking structure including first openings, each of the first openings having a width and a length, the length extending in the first direction; and removing a second portion of the first material at the first openings and a portion of the second material at the first openings such that an unremoved portion of the first material and an unremoved portion of the second material form at least a part of the device structures. 41. The method of claim 39, wherein forming the device structures includes: forming the second material over the substrate, forming the first material over the second material, and forming a third material over the second material; forming an additional masking structure over the first material, the second material, and the third material, the additional masking structureincluding first openings, each of the first openings having a width and a length, the length extending in the second first direction; and removing a second portion of the first material at the first openings, a portion of the second material at the first openings, and a portion of the third material at the first openings such that an unremoved portion of the first material, an unremoved portion of the second material, and an unremoved portion of the third material form at least a part of the device structures, wherein removing the portion of the first material at the openings also removes the portion of the third material to expose the second material. 42. The method of claim 41, wherein the first material includes insulation material, the second material includes semiconductor material, and the third material includes conductive material. 43. The method of claim 39, wherein forming the diodes includes: forming an epitaxial silicon over the second material to form a first portion of each of the diode; and inserting impurities into the epitaxial silicon to form a second portion of each of the diode. 44. The method of claim 39, wherein the insulation material at the openings is unremoved when the portion of the first material is removed. 45. The method of claim 39 further comprising: forming memory elements such that each of the memory elements is coupled in series with one of the diodes between a first conductive line and a second conductive line. 46. The method of claim 39 further comprising: removing the second portion of the first material to expose a second portion of the second material; forming spacers in openings that are created after removal of the second portion of the first material; andforming a third material over the diodes and the second portion of the second material. 47. The method of claim 46, wherein the third material has a resistivity lower than a resistivity of the second material. 48. The apparatus of claim 46, wherein the third material includes a combination of cobalt and silicon. 49. 
A method comprising: forming first trenches in a first insulation material over a substrate, each of the first trenches including a width and a length, the length extending in a first direction; forming an epitaxial silicon in the first trenches; forming a masking structure over the epitaxial silicon, the masking structure including openings, each of the openings having a width and a length greater than the width, the length extending in a second direction perpendicular to the first direction; removing a portion of the epitaxial silicon at the openings leaving a second portion of the epitaxial silicon unremoved; and forming diodes from at least a part of the second portion of the epitaxial silicon. 50. The method of claim 49, wherein forming the diodes includes inserting impurities of a first conductivity type into a portion of the second portion of the epitaxial silicon, and wherein the second portion of the epitaxial silicon includes a second conductivity type. 51. The method of claim 50, wherein forming the diodes further includes forming material having a resistivity lower than a resistivity of the epitaxial silicon. 52. The method of claim 49 further comprising: forming a first material over the substrate before the first trenches are formed and forming a second material over the first material before the first trenches are formed; forming an additional masking structure over the first and second materials, the additional masking structure including first openings, each of the first openings having a width and a length greater than the width, the length extending in the first direction; removing a portion of the first material at the first openings and a portion of the second material at the first openings to form device structures and second trenches between the device structures; filling the second trenches with the first insulation material to insulate the device structures from each other; and removing the second material from the device structures to form the first trenches. 53. The method of claim 49 further comprising: forming phase change memory elements such that each of the phase change memory elements is coupled in series with one of the diodes between a first conductive line and a second conductive line. 54. A method comprising: forming device structures over a substrate, the device structures insulated from each other by a first insulation material, each of the device structures including a width and a length greater than the width, the length extending in a first direction; removing a portion of the device structures to form trenches extending through the device structures such that each of the trenches includes a width and a length greater than the width, the length extending in a second direction perpendicular to the first direction such that each of the device structures includes protrusions, each of the protrusions being between two of the trenches; and forming diodes from at least one material of the protrusions. 55. The method of claim 54, wherein the protrusions includes a material of a first conductivity type, and wherein forming the diodes includes inserting impurities of a second conductivity type into the material of the protrusions. 56. The method of claim 55, wherein forming the diodes further includes forming a material having a resistivity lower than a resistivity of the epitaxial silicon. 57. The method of claim 54 further comprising: forming memory elements coupled to the diodes. 58. 
The method of claim 57, wherein forming the memory elements includes depositing chalcogenide material over the diodes. |
MEMORY DEVICE HAVING SELF-ALIGNED CELL STRUCTURE Related Application This patent application claims priority benefit from U.S. Application No. 12/367,395 filed 6 February 2009 which is incorporated herein by reference. Background Computers and other electronic products usually have a memory device with numerous memory cells to store data and other information. A conventional memory device is normally formed using various fabrication processes or steps. For example, one or more processes may form one part of the device and one or more additional processes may form another part of the device. Further processes may also form features that connect the parts of the device together. If the processes are not carefully planned, device defects or poor device performance may occur. Brief Description of the Drawings FIG. 1 shows a block diagram of a memory device having a memory array with memory cells, according to an embodiment of the invention. FIG. 2 shows a partial schematic diagram of a memory device having a memory array with memory cells having diodes and memory elements, according to an embodiment of the invention. FIG. 3 shows a partial three-dimension (3D) diagram of a memory device, according to an embodiment of the invention. FIG. 4 shows a view of the memory device of FIG. 3 without some of its features. FIG. 5 through FIG. 16 show processes of forming a memory device, according to an embodiment of the invention. FIG. 17 through FIG. 24 show processes of forming a memory device with conductive material formed between diodes of memory cells of the memory device, according to an embodiment of the invention.FIG. 25 through FIG. 29 show processes of forming a memory device with recesses having different sidewall materials, according to an embodiment of the invention. FIG. 30 through FIG. 39 show processes of forming a memory device with epitaxial silicon formed before diode formation, according to an embodiment of the invention. FIG. 40 through FIG. 49 show processes of forming a memory device without forming epitaxial silicon to form diodes of the memory device, according to an embodiment of the invention. FIG. 50 through FIG. 58 show processes of forming a memory device with conductive materials simultaneously formed over diodes and between diodes of the memory device, according to an embodiment of the invention. FIG. 59 shows a partial 3D diagram of a memory device including a memory cell, according to an embodiment of the invention. FIG. 60 shows a partial 3D diagram of another memory device including a memory cell, according to an embodiment of the invention. Detailed Description FIG. 1 shows a block diagram of a memory device 100 having a memory array 102 with memory cells 101, according to an embodiment of the invention. Memory cells 101 may be arranged in rows and columns along with conductive lines 104 (e.g., wordlines having signals WLO through WLm) and conductive lines 106 (e.g., bit lines having signals BLO through BLn). Memory device 100 uses conductive lines 104 and conductive lines 106 to transfer information to and from memory cells 101. Row decoder 107 and column decoder 108 receive address signals AO through AX on lines 109 (e.g., address lines) to determine which memory cells 101 are to be accessed. A sense amplifier circuit 110 operates to determine the value of information read from memory cells 101 and provide the information in the form of signals to conductive lines 106. 
Sense amplifier circuit 110 also uses the signals on conductive lines 106 to determine the value of information to be written to memory cells 101. Memory device 100 includes circuitry 112 to transfer information between memory array 102 and lines (e.g., data lines) 105. Signals DQO through DQN on lines 105 representinformation read from or written into memory cells 101. Lines 105 may include nodes within memory device 100 or pins (or solder balls) on a package where memory device 100 may reside. Other devices external to memory device 100 (e.g., a memory controller or a processor) may communicate with memory device 100 through lines 105, 109, and 120. Memory device 100 performs memory operations such as a read operation to read information from memory cells 101 and a programming operation (sometime referred to as write operation) to program (e.g., write) information into memory cells 101. A memory control unit 118 controls the memory operations based on control signals on lines 120. Examples of the control signals on lines 120 include one or more clock signals and other signals to indicate which operation, (e.g., a programming or read operation) memory device 100 may perform. Other devices external to memory device 100 (e.g., a processor or a memory controller) may control the values of the control signals on lines 120. Specific values of a combination of the signals on lines 120 may produce a command (e.g., programming or read command) that causes memory device 100 to perform a corresponding memory operation (e.g., programming or read operation). Each of memory cells 101 may be programmed to store information representing a value of a single bit or a value of multiple bits such as two, three, four, or another number of bits. For example, each of memory cells 101 may be programmed to store information representing a binary value "0" or "1" of a single bit. In another example, each of memory cells 101 may be programmed to store information representing a value of multiple bits, such as one of four possible values "00", "01", "10", and "11" of two bits, one of eight possible values "000", "001", "010", "011", "100", "101", "110" and "111", or one of other values of another number of multiple bits. Memory device 100 receives a supply voltage, including supply voltage signals Vcc and Vss, on lines 130 and 132, respectively. Supply voltage signal Vss may operate at a ground potential (e.g., having a value of approximately zero volts). Supply voltage signal Vcc may include an external voltage supplied to memory device 100 from an external power source such as a battery or an alternating-current to direct-current (AC-DC) converter circuitry.Circuitry 112 of memory device 100 includes a select circuit 115 and an input/output (I/O) circuit 116. Column decoder 108 selectively activates the SELO through SELn signals based on the AO through AX address signals on lines 109. Select circuit 115 responds to signals SELO through SELn to select the signals on conductive lines 106 and 113 that represent the information read from or programmed into memory cells 101. Select circuit 115 selects the signals on conductive lines 106 and 113 to provide communication between memory array 102 and I/O circuit 116 during read and programming operations. Memory device 100 may include a non-volatile memory device and memory cells 101 may include non-volatile memory cells such that memory cells 101 may retain information stored thereon when power (e.g., Vcc or Vss, or both) is disconnected from memory device 100. 
For example, memory device 100 may include a phase change memory device such that each of memory cells 101 may include a memory element having a material (e.g., chalcogenide material) in which at least a portion (e.g., programmable portion) of the material may be programmed to cause that portion to change between different phases. The phases may include a crystalline phase (which is sometimes referred to as a crystalline state) and an amorphous phase (which is sometimes referred to as an amorphous state). Each of memory cells 101 may have a resistance state corresponding to a resistance value when the memory cell is programmed. Different resistance values may represent different values of information programmed in each of memory cells 101. Memory device 100 performs a programming operation when it receives (e.g. from an external processor or a memory controller) a programming command and value of information to be programmed into one or more of selected memory cells among memory cells 101. Based on the value of the information, memory device 100 programs the selected memory cells to cause them to have appropriate resistance values to represent the values of the information. One skilled in the art may recognize that memory device 100 may include other features that are not shown to help focus on the embodiments described herein.Memory device 100 includes memory devices and memory cells that are similar to or identical to those described below with reference to FIG. 2 through HG. 49. FIG. 2 shows a partial schematic diagram of a memory device 200 having a memory array 202 including memory cells 201, according to an embodiment of the invention. Memory cells 201 may include phase change memory cells. Memory array 202 may correspond to memory array 102 of FIG. 1. As shown in FIG. 2, memory cells 201 are arranged in rows 230, 231, and 232 along with conductive lines 204 having signals WLO, WLl, and WL2, and columns 240, 241, and 242 along with conductive lines 206 having signals BLO, BLl, and BL2. Each memory cell 201 may include a diode 211 and a memory element 299. As shown in FIG. 2, each diode within a group of diodes in the same row (e.g., row 230) includes one diode terminal coupled to the same conductive line (e.g., the same line with the signal WLO) and another diode terminal coupled (through a memory element 299) to a different conductive line among conductive lines with signals BLO, BLl, and BL2. Diodes 211 may turn on (e.g., by using appropriate values of signals WLO, WLl, and WL2) to allow access to memory elements 299 to read information (e.g., measure a resistance value) from memory elements 299 or program information into memory elements 299 (e.g., causing memory elements 299 to have a specific resistance value). For example, a programming operation may apply appropriate values to signals WLO, WLl, and WL2 to selectively turn on diode 211 of a selected memory cell 201 and then apply a current (e.g., programming current) through a selected memory element 299 of the selected memory cell. The current causes at least a portion of the material of the memory element 299 to heat up. After the material heats, the programming operation allows the material to rapidly cool. These heating and cooling actions may change the phase of the material, such as from a crystalline phase before the programming operation to an amorphous phase after the programming operation. The phase change may be reversible (e.g., changing from an amorphous phase to a crystalline phase). 
Different phases of the material may cause selected memory element 299 to have differentresistance states corresponding to different resistance values, which correspond to different values of the information that is being stored in the selected memory element 299. In another example, a read operation may apply appropriate values to signals WLO, WLl, and WL2 to selectively turn on diode 211 of a selected memory cell 201 and then apply a current (e.g., read current) through a selected memory element 299 of the selected memory cell. The read operation may measure the resistance of the memory cell based on a read voltage generated from the read current to determine the corresponding value of information stored therein. For example, in each of memory cells 201, a different resistance value may provide a different value (e.g., current or voltage value) on signals BLO, BLl, and BL2 when the current passes through memory elements 299. Other circuitry of the memory device (e.g., circuit such as I/O circuit 116 of FIG. 1) may use signals BLO, BLl, and BL2 to measure the resistance value of memory elements 299 to determine the value of the information. The current used during a read operation may have a value different from the current used during a programming operation. For example, in a programming operation, the value of the signal (e.g., WLO, WLl, or WL2 in FIG. 2) that creates a current flowing through a selected memory element 299 may be sufficient to cause the material of at least a portion of the selected memory element to change between different phases to alter the resistance value of the selected memory element based on the value of the information to be stored in that selected memory elements. In a read operation, the value of the signal (e.g., WLO, WLl, or WL2 in FIG. 2) that creates a current flowing through a selected memory element 299 may be sufficient to create the current but insufficient to cause any portion of the selected memory element to change between different phases so that the value of the information stored in the selected memory element may remain unchanged in the read operation. Memory cells 201 of FIG. 2 may include memory cells similar to or identical to the memory cells described below with reference to FIG. 3 through FIG. 49. FIG. 3 through FIG. 49 show some features that are the same or similar. Thus, for simplicity, the description for the same or similar features in FIG. 3through FIG. 49 may not be repeated. For example, this description may not repeat the description of the same or similar features among the memory devices shown in FIG. 3 through FIG. 49, such as memory device 300 (FIG. 3 and FIG. 4), memory device 500 (HG. 5 through FIG. 16), memory device 1700 (HG. 17 through FIG. 14), memory device 2500 (FIG. 25 through HG. 29), memory device 3000 (FIG. 30 through FIG. 39), and memory device 4000 (FIG. 40 through FIG. 49). FIG. 3 shows a partial 3D diagram of a memory device 300, according to an embodiment of the invention. Memory device 300 includes a memory array 302, which may correspond to memory array 102 of FIG. 1 and memory array 202 of FIG. 2. FIG. 3 also shows an x-direction, a y-direction perpendicular to the x-direction, and a z-direction perpendicular to both the x-direction and the y- direction. Memory device 300 includes memory cells 301 arranged in rows 330, 331, and 332 in the y-direction and columns 340, 341, and 342 in the x-direction. 
Insulation material 370 is formed between rows 330, 331, and 332 to insulate the memory cells in one row from the memory cells in another row. Insulation material 371 is formed between 340, 341, and 342 to insulate the memory cells in one column from memory cells in another column. Memory cells 301 in rows 330, 331, and 332 may be coupled to conductive lines 304 through conductive contacts 381. Memory cells 301 in columns 340, 341, and 342 may be coupled to conductive lines 306 through conductive contacts 380. Conductive lines 304 and 306 may be arranged over memory cells 301 in the z-direction. Conductive lines 304 and conductive lines 306 may correspond to conductive lines 204 and conductive lines 206, respectively, of FIG. 2. As shown in FIG. 3, memory cells 301 in the same row (e.g., row 332) may include the same material 320 extending in the y-direction and coupled to one of conductive lines 304 through a conductive contact 381. Memory cells 301 in the same column (e.g., column 342) are coupled to one of conductive lines 306 through multiple conductive contacts 380. FIG. 3 show only three memory cells 301 in each row and three memory cells 301 in each column as an example. The number of memory cells 301 in each row and each column may vary.Each memory cell 301 may include different materials 320, 321, 322, 324, and 399 that are arranged as multiple layers with respect to the z-direction over a substrate 310. In each memory cell 301, materials 321, 322, 324, and 399 may form a diode and a memory element of the memory cell. The diodes and memory elements of memory cells 301 may be shown schematically like diodes 211 and memory elements 299 of FIG. 2. In FIG. 3, materials 321 and 322 may form at least a part of a diode of each memory cell 301. For example, material 321 may include one conductivity type (e.g., n-type silicon material) and material 322 may include another conductivity type (e.g., p-type silicon material). The n-type and p-type materials may form at least a part of a diode in each memory cell 301. For example, the n- type and p-type materials may form a p-n junction of a diode in each memory cell 301. Although this description discusses p-n junction diodes, other types of diodes, such as various kinds of metal-insulator- metal diodes or low temperature oxide diodes, may be formed. Material 399 may form a memory element of each memory cell 301. Material 399 may include a chalcogenide material. Examples of chalcogenide materials include materials that have various combinations of one or more of germanium (Ge), antimony (Sb), and telluride (Te), and other similar materials. For example, material 399 may include a compound of germanium (Ge), antimony (Sb), and telluride (Te), such as Ge2SbSTeS. Material 324 may include conductive material with a resistivity lower than the resistivity of materials 321 and 322. Material 324 may also include conductive material with a resistivity lower than the resistivity of materials 399. The relatively lower resistivity of material 324 may reduce contact resistance of the diode that is formed by materials 321 and 322 to improve electrical conductivity through the diode and improve overall electrical conductivity of memory cells 301. An example of material 324 may include cobalt suicide (e.g., CoSi2) or nickel suicide (e.g., NiSi). Other conductive materials with a resistivity lower than the resistivity of materials 321 and 322 may be used. FIG. 4 shows memory device 300 of FIG. 
3 without conductive lines 304 and 306 and conductive contacts 380 and 381 to help discuss details of memory device 300. As shown in FIG. 4, memory device 300 includes trenches 315extending in the y-direction along multiple cells and trenches 351 extending x- direction along multiple cells such that trenches 315 and trenches 351 are perpendicular to each other. Each trench 315 is located between two of the rows 330, 331, and 332 and filled with material 370. Each trench 351 is located between two of the columns 340, 341, and 342 and filled with material 371. As shown in FIG. 4, each trench 315 has a depth 335 in the z-direction. Thus, material 370 filled in each trench 315 may have a thickness 345 that corresponds to depth 335. Each trench 351 has a depth 334 in the z-direction. Thus, material 371 filled in each trench 351 may have a thickness 344 that corresponds to depth 334. As shown in FIG. 4, depth 335 is greater than depth 334 and thickness 345 is greater than thickness 344. Memory device 300 may also include material 317 located at the bottom of each trench 351 and are arranged in the y-direction and coupled to material 321 of each memory 301. In some devices, material 317 may be omitted. However, the inclusion of material 317 in memory device 300 may reduce parasitic effect created from materials of different conductivity types between adjacent diodes in the same row in the y-direction. Material 317 may also create a path to improve heat dissipation from memory cells 301 to conductive lines 304. Further, material 317 may reduce resistance of the connection between memory cells 301 and conductive lines 304 to improve electrical conductivity between them. Various processes described below with reference to FIG. 5 through FIG. 49 may be used to form one or more portion of memory device 300. FIG. 5 through FIG. 16 show processes of forming a memory device 500, according to an embodiment of the invention. FIG. 5 shows memory device 500 having a substrate 505 and multiple materials 520, 530, and 540 formed in or over substrate 505. Substrate 505 may initially include p-type semiconductor (e.g., silicon) material. Forming material 520 may include inserting (e.g., implanting) n-type impurities into a portion (e.g., top portion) of substrate 505. Examples of n-type impurities include an element such as phosphorus (P) or arsenic (As). Thus, material 520 may include an n-type semiconductor material. The remaining portion (e.g., bottom portion) of substrate 505, which includesmaterial 510, that has not been inserted with n-type impurities may remain a p- type semiconductor material. Forming material 530 may include depositing an insulation material, such as a silicon based material (e.g., silicon oxide) over material 520. Forming material 540 may include depositing an insulation material, such as a silicon-based material (e.g., silicon nitride) over material 530. In some cases, material 540 (e.g., silicon nitride) may create undesirable stress to material 520. Therefore, in some cases, forming material 530 between materials 520 and 540 may reduce or prevent stress to material 520 caused by material 540. In some other cases, however, material 530 may be omitted if material 540 is selected such that it may not cause stress to material 520 or such that potential stress may have an insignificant effect to material 520 or memory device 500, or both. Thus, in some cases, material 530 may be omitted and material 540 may be formed directly on material 520. FIG. 
5 also shows an x-direction, a y-direction perpendicular to the x- direction, and a z-direction perpendicular to both the x-direction and the y- direction. As shown in FIG. 5, materials 510, 520, 530, and 540 may form different layers, one layer over (or on) one or more other layers in the z- direction. As used herein, the term "on" used with respect to two or more materials (or layers), one "on" the other, means at least some contact between the materials (or layers), while "over" means the materials (or layers) are in close proximity, but possibly with one or more additional intervening materials (or layers) such that contact is possible but not required. Neither "on" nor "over" implies any directionality as used herein unless stated as such. FIG. 5 also shows a masking structure 550 formed over materials 540, 530, 520, and 510. Masking structure 550 may be used to pattern, e.g., to selectively remove, portions of materials underneath masking structure 550 during some of the processes of forming memory device 500. As shown in FIG. 5, masking structure 550 includes a pattern defined by masking portions 551 and openings 552. Each opening 552 has a width 553 extending in the x-direction and a length 554 extending in the y-direction. Length 554 is substantially greater than width 553. Masking structure 550 may include a photoresist thatmay be used in a photolithography patterning process to pattern materials 540, 530, 520, and 510. FIG. 6 shows memory device 500 after device structures 610 and trenches 615 have been formed and masking structure 550 (FIG. 5) has been removed. A process such as etching (e.g., dry etch) may be used to remove portions of materials 540, 530, 520, and 510 at openings 552 (FIG. 5). The remaining portions of materials 540, 530, 520, and 510 (portions underneath masking portions 551) form device structures 610. Each device structure 610 has a width 611 extending in the x-direction and a length 612 extending in the y- direction. Length 612 is substantially greater than width 611. Each trench 615 may have a bottom on material 510, a width 616 extending in the x-direction, and a length 617 extending in the y-direction. Length 617 is substantially greater than width 616. FIG. 7 shows memory device 500 after a material 710 has been formed, e.g., by deposition, to fill trenches 615 to insulate device structures 610 (FIG. 6) from each other. Material 710 may include insulation material, e.g., silicon oxide or other insulation material. FIG. 8 shows memory device 500 after material 710 has been planarized, e.g., through chemical mechanical polishing (CMP) or etch back to expose a portion, e.g., an upper surface 541, of material 540. As shown in FIG. 8, an upper surface 711 of material 710 and upper surface 541 of material 540 are on the same plane following the planarization or etch back process. FIG. 9 shows memory device 500 after a masking structure 950 has been formed over device structures 610 and material 710. Masking structure 950 may be used to pattern, e.g., to selectively remove, portions of materials underneath masking structure 950 during further processes to form memory device 500. As shown in FIG. 9, masking structure 950 includes a pattern defined by masking portions 951 and openings 952. Each opening 952 has a width 953 extending in the y-direction and a length 954 extending in the x-direction. Length 954 is substantially greater than width 953. 
Masking structure 950 may include a photoresist that may be used in a photolithography patterning process to pattern device structures 610.As shown in FIG. 9 and FIG. 5 masking structures 950 and 550 are positioned such that their patterns are perpendicular to each other. For example, the greater dimension (length 954 in the x-direction) of openings 952 of masking structure 950 of FIG. 9 is perpendicular to the greater dimension (length 554 in the y-direction) of openings 552 of masking structure 550 of FIG. 5. Positioning masking structures 950 and 550 perpendicularly to each other during formation of memory device 500 may allow self-alignment of some features, such as diodes (to be formed in additional processes), of memory device 500 to improve its material quality and functions, as described below. FIG. 10 shows memory device 500 after trenches 1015 have been formed and masking structure 950 of FIG. 9 has been removed. A removal process such as etching (e.g., dry etch or wet etch) may be used to remove portions of materials 540 and 530 of each device structure 610 at openings 952 and portions of material 710 at openings 952 to form trenches 1015. Each trench 1015 may have a bottom on material 520, a width 1006 extending in the y-direction, and a length 1007 extending in the x-direction. Length 1007 is substantially greater than width 1006. As shown in FIG. 10, the greater dimension (length 1007) of each trench 1015 extends in the x-direction through device structures 610 to form protrusions 1040 having a perimeter 1041. Since protrusions 1040 are formed using masking structures 550 (FIG. 5) and 950 (FIG. 9) with patterns that are positioned perpendicularly to each other, perimeter 1041 of FIG. 10 may have a polygonal shape. FIG. 11 shows memory device 500 after a material 1110 has been formed, e.g., by deposition, to fill trenches 1015 to insulate protrusions 1040 from each other in the y-direction. Material 1110 may include insulation material, e.g., silicon oxide or other insulation material. Material 1110 may include the same material composition as that of material 710. For example, both materials 1110 and 710 may include silicon oxide. FIG. 12 shows memory device 500 after material 1110 has been planarized, e.g., through CMP or etch back, to expose material 540 of protrusions 1040. As shown in FIG. 12, protrusions 1040 along the y-direction of the same device structure 610 are insulated from each other by material 1110,and protrusions 1040 along the x-direction between different device structures 610 are insulated from each other by material 710. FIG. 13 shows memory device 500 after recesses 1325 have been formed. A process such as etching (e.g., dry etch or wet etch) may be used to remove material 540 and material 530 from each protrusion 1040 to expose material 520. As described above with reference to FIG. 5, in some cases, material 530 may be omitted and material 540 may be formed directly on material 520. Thus, the process associated with FIG. 13 described here may remove only material 540 (if material 530 is omitted) to expose material 520 when recesses 1325 are formed. As shown in FIG. 13, each recess 1325 includes a bottom on material 520 and an opening that is shaped by perimeter 1041. Since perimeter 1041 may have a polygonal shape, each recess 1325 may also have a polygonal opening and a polygonal sidewall associated with the polygonal opening. The polygonal sidewall of each recess 1325 may be defined by four sidewall portions 1326, 1327, 1328, and 1329. 
As shown in FIG. 13, sidewall portions 1326 and 1328 are opposite from each other and are formed from material 710. Sidewall portions 1327 and 1329 are opposite from each other and are formed from material 1110. Sidewall portion 1326 is perpendicular to sidewall portion 1327, which is perpendicular to sidewall portion 1328. Sidewall portion 1328 is perpendicular to sidewall portion 1329. Since materials 710 and 1110 may include the same material (e.g., silicon oxide) the sidewall of each recess 1325 may also include the same material. Features, such as a diode and a memory element, may be formed in each recess 1325. FIG. 14 shows memory device 500 after materials 1420, 1422, and 1424 have been formed in recesses 1325 (FIG. 13). Material 1420 may include n-type semiconductor material (e.g., n-type silicon). Material 1422 may include p-type semiconductor material (e.g., p-type silicon). Materials 1420 and 1422 may form at least a part of a diode. Material 1424 may include conductive material with a resistivity lower than the resistivity of materials 1420 and 1422. For example, material 1424 may include cobalt suicide or nickel suicide. Forming materials 1420, 1422, and 1424 in recesses 1325 may include growing epitaxial silicon on material 520 to form material 1420. Thus, material1420 may include single crystalline silicon. Impurities of n-type may be inserted (e.g., in situ doped or implanted) into the grown epitaxial silicon, so that material 1420 may include n-type epitaxial silicon. Impurities of p-type impurities may be inserted (e.g., in situ doped or implanted) into material 1420 such that a portion (e.g., top portion) of material may form material 1422. An example of p- type impurities includes an element such as boron (B). After material 1422 is formed, a silicidation process may be performed to form material 1424. As shown in FIG. 14, materials 1420 and 1422 may directly contact materials 710 and 1110 at sidewall portions 1326, 1327, 1328, and 1329. FIG. 15 shows memory device 500 after a material 1599 has been formed in recesses 1325. Material 1599 may directly contact materials 710 and 1110 at sidewall portions 1326, 1327, 1328, and 1329. Material 1599 may form memory elements of memory cells 1501. Forming material 1599 may include depositing a chalcogenide material over material 1424 in recesses 1325. Each memory cell 1501 may include a diode formed by at least materials 1420 and 1422 and a memory element having material 1599. Since materials 1420 and 1422 of the diode in each recess 1325 are formed within the same recess and materials 1420 and 1422 can be self-aligned to material 520 from using masking structures 550 (FIG. 5) and 950 (FIG. 9) that are perpendicular to each other, the diode of each recess 1325 can be considered as self-aligned diode. Since the diodes in memory device 500 can be self- alignment diodes, misalignment of the diodes and other features (e.g., between materials 520 and 1420) in memory device 500 may be substantially reduced or absent. Therefore, in memory device 500, defects associated with the diodes may be reduced or absent. Moreover, some conventional devices may include device features, such as diodes and other features, that may be misaligned. The misalignment may create a constriction in a current path between the misaligned features in the conventional devices. The constriction may generate a phenomenon such as hot spot when the conventional devices operate, leading to poor device performance. 
In memory device 500, however, a reduction in or absence of the misalignment between materials 1420 and 520 may reduce or prevent the occurrence of hot spot. Thus, device performance may be improved.FIG. 16 shows memory device 500 after additional features of memory device 500 have been formed. For example, conductive contacts 1680 and 1681 and conductive lines 1604 and 1606 have been formed. Conductive lines 1604 and conductive lines 1606 may correspond to conductive lines 204 and conductive lines 206, respectively, of FIG. 2. Conductive lines 1604 and conductive lines 1606 of FIG. 16 may also correspond to conductive lines 304 and conductive lines 306, respectively, of FIG. 3. One skilled in the art may readily recognize that additional processes may be performed to form additional features of a memory device, such as memory device 500 described above. Thus, to help focus on the embodiments described herein, FIG. 5 through FIG. 16 described above and FIG. 17 through FIG. 49 described below show only some of the features of a memory device, such as memory device 500 (FIG. 5 through FIG. 16), memory device 1700 (FIG. 17 through FIG. 14), memory device 2500 (FIG. 25 through FIG. 29), memory device 3000 (FIG. 30 through FIG. 39), and memory device 4000 (FIG. 40 through FIG. 49). FIG. 17 through FIG. 24 show processes of forming a memory device 1700 with conductive material 1730, according to an embodiment of the invention. Some of the processes used to form memory device 500 described above with reference to FIG. 5 through FIG. 16 may be used to form memory device 1700 described herein with reference to FIG. 17 through FIG. 24. Thus, for simplicity, similar materials and features between memory device 500 of FIG. 5 through FIG. 16 and memory device 1700 of FIG. 17 through FIG. 24 are given the same reference numbers. The difference between memory device 1700 (FIG. 17) and memory device 500 (FIG. 5) is that material 1730 of memory device 1700 is different from material 530 of memory device 500. For example, material 1730 includes an electrically conductive material such as cobalt suicide or nickel suicide. In contrast, as described above, material 530 (FIG. 5) includes an insulating material such as silicon oxide. Another difference between memory device 1700 (FIG. 24) and memory device 500 (FIG. 16) is that a portion of material 1730 remains in memory device 1700 (FIG. 24) upon completion of the memory device. In contrast, as described above with reference to FIG. 5 through FIG. 16,material 530 between adjacent memory cells 1501 (FIG. 15) of memory device 500 is removed upon completion of the memory device. In FIG. 17, the structure of memory device 1700 up to this point may be formed using processes similar to those described above with reference to FIG. 5 through FIG. 9. However, as shown in FIG. 17, material 1730 (instead of material 530) has been formed between materials 520 and 540. FIG. 18 shows memory device 1700 after trenches 1015 have been formed and masking structure 950 (FIG. 17) has been removed. A removal process such as etching (e.g., dry etch or wet etch) may be used to remove portions of material 540 of each device structure 610 at openings 952 (FIG. 17) and portions of material 710 at openings 952 to form trenches 1015. Material 1730 is not removed. Thus, each trench 1015 has a bottom on material 1730. Each trench 1015 has a width 1006 extending in the y-direction and a length 1007 extending in the x-direction. In FIG. 19 through FIG. 
24, processes similar to those described above with reference to FIG. 11 through FIG. 16 may be used to form other features of memory device 1700. However, as shown FIG. 19 through FIG. 24, only some portions of material 1730 is removed, and some other portions of materials 1730 that are between adjacent memory cells remain in memory device 1700. The presence of material 1730 in memory device 1700 may improve memory device 1700 in ways similarly to material 317 in memory device 300 of FIG. 4. FIG. 25 through FIG. 29 show processes of forming a memory device 2500 with recesses having different sidewall materials, according to an embodiment of the invention. Some of the processes used to form memory device 500 described above with reference to FIG. 5 through FIG. 16 may be used to from memory device 2500 described herein with reference to FIG. 25 through FIG. 29. Thus, for simplicity, similar materials and features between memory device 500 of FIG. 5 through FIG. 16 and memory device 2500 of FIG. 25 through FIG. 29 are given the same reference numbers. In FIG. 25, the structure of memory device 2500 up to this point may be formed using processes similar to those described above with reference to FIG. 5 through FIG. 9. As shown in FIG. 25, masking structure 950 has been formed.Masking structure 950 has a pattern defined by masking portions 951 and openings 952 with width 953 and length 954. FIG. 26 shows memory device 2500 after recesses 2625 have been formed and masking structure 950 (FIG. 25) has been removed. Unlike the process associated with FIG. 9 where both materials 540 and 710 at openings 952 (FIG. 9) would be removed to form trenches 1015 (FIG. 10), the process (e.g., dry etch or wet etch) associated with FIG. 25 removes only material 540 at openings 952 (FIG. 25) to form recesses 2625 of FIG. 26. Material 710 at openings 952 in FIG. 25 may remain in memory device 2500 when material 540 at openings 952 is removed. As shown in FIG. 26, each recess 2625 includes a bottom on material 520 and an opening that is shaped by a perimeter 2641 including edges of materials 710 and 540. Since each recess 2625 is surrounded with materials 710 and 540 that are formed using masking structures (e.g., 550 of FIG. 5 and 950 of FIG. 25) with patterns that are positioned perpendicularly to each other, perimeter 2641 may have a polygonal shape. Thus, the opening (shaped by perimeter 2641) of each recess 2625 may also have a polygonal shape. Since each recess 2625 is surrounded with materials 710 and 540 that are formed using masking structures (e.g., 550 of FIG. 5 and 950 of FIG. 25) with patterns that are positioned perpendicularly to each other, each recess 2625 may also have a polygonal sidewall that may be defined by four sidewall portions 2626, 2627, 2628, and 2629. As shown in FIG. 26, sidewall portions 2626 and 2628 are opposite from each other and are formed form material 710. Sidewall portions 2627 and 2629 are opposite from each other and are formed form material 540. Since material 710 (e.g., silicon oxide) and material 540 (e.g., silicon nitride) may include different materials, the sidewall of each recess 2625 may also include different materials. For example, both sidewall portions 2626 and 2628 may include material 710 (e.g., silicon oxide) and both sidewall portions 2627 and 2629 may include material 540 (e.g., silicon nitride). Features of a memory cell, such as a diode and memory element, may be formed in each recess 2625. HG. 
27 shows memory device 2500 after materials 2720, 2722, and 2724 have been formed in recesses 2625. Materials 2720, 2722, and 2724 may beformed by processes similar to processes used to form materials 1420, 1422, and 1424, respectively, of FIG. 14. Materials 2720 and 2722 may form at least a portion of a diode. Material 2724 may include conductive material such as material 324 of FIG. 3. As shown in FIG. 27, materials 2720 and 2722 may directly contact materials 710 and 540 at sidewall portions 2626, 2627, 2628, and 2629. FIG. 28 shows memory device 2500 after a material 2899 has been formed in recesses 2625. Material 2899 may directly contact materials 710 and 540 at sidewall portions 2626, 2627, 2628, and 2629. Material 2899 may be formed by processes similar to processes used to form material 1599 of FIG. 15. Material 2899 may form memory elements of memory cells 2801. Each memory cell 2801 may include a diode that is formed by at least materials 2720 and 2722, and a memory element that includes material 2899. Since materials 2720 and 2722 in each recess 2625 are formed within the same recess, these materials can be self-aligned to the sidewall of recess 2625 (sidewall defined by sidewall portions 2626, 2627, 2638, and 2629). Thus, in each memory cell 2801, the diode formed by materials 2720 and 2722 can be considered as self-aligned diode. FIG. 29 shows memory device 2500 after additional features of memory device 2500 have been formed. For example, conductive contacts 2980 and 2981 and conductive lines 2904 and 2906 have been formed. Conductive lines 2904 and conductive lines 2906 may correspond to conductive lines 204 and conductive lines 206, respectively, of FIG. 2. Conductive lines 2904 and conductive lines 2906 of FIG. 29 may also correspond to conductive lines 304 and conductive lines 306, respectively, of FIG. 3. FIG. 30 through FIG. 39 show processes of forming a memory device 3000 with epitaxial silicon formed before diode formation, according to an embodiment of the invention. Some of the processes used to form memory device 500 described above with reference to FIG. 5 through FIG. 16 may be used to from memory device 3000 described herein with reference to FIG. 30 through FIG. 39. Thus, for simplicity, similar materials and features between memory device 500 of FIG. 5 through FIG. 16 and memory device 3000 of FIG. 30 through FIG. 39 are given the same reference numbers.In FIG. 30, the structure of memory device 3000 up to this point may be formed using processes similar to those described above with reference to FIG. 5 through FIG. 8. As shown in FIG. 30, device structures 610 insulated by material 710 have been formed. FIG. 31 shows memory device 3000 after materials 540 and 530 (FIG. 30) have been removed by a process such as etching (e.g., dry etch or wet etch). The removal of materials 540 and 530 forms trenches 3135 extending in the y- direction. FIG. 32 shows memory device 3000 after material 3220 has been formed in trenches 3135 (FIG. 31). Material 3220 may include n-type semiconductor material (e.g., n-type silicon). Forming material 3220 may include growing epitaxial silicon on material 520. Thus, material 3220 may include single crystalline silicon. Impurities of n-type may be inserted (e.g., in situ doped or implanted) into the grown epitaxial silicon such that material 3220 may include n-type epitaxial silicon. A process (e.g., CMP) may be performed to planarize material 3220 to achieve the structure shown in FIG. 32. FIG. 
33 shows memory device 3000 after a masking structure 950 has been formed over material 3220 and 710. As shown in FIG. 33, masking structure 950 has a pattern defined by masking portions 951 and openings 952 with width 953 and length 954. FIG. 34 shows memory device 3000 after trenches 3435 have been formed and masking structure 950 of FIG. 33 has been removed. A removal process such as etching (e.g., dry etch) may be used to remove portions of materials 3220 of each device structure 610 at openings 952 and portions of material 710 at openings 952 to form trenches 3435. Each trench 3435 may have a bottom on material 520, a width 3406 extending in the y-direction, and a length 3407 extending in the x-direction. Length 3407 is substantially greater than width 1006. As shown in FIG. 34, the greater dimension (length 3407) of each trench 3435 extends in the x-direction through device structures 610 to form protrusions 3440 having a perimeter 3441. Since protrusions 3440 are formed using masking structures 550 (FIG. 5) and 950 (FIG. 9) with patterns that are positioned perpendicularly to each other, perimeter 3441 of FIG. 34 may have a polygonal shape.FIG. 35 shows memory device 3000 after a material 3510 has been formed, e.g., by deposition, to fill trenches 3435 to insulate protrusions 3440 from each other in the y-direction. Material 3510 may include insulation material, e.g., silicon oxide or other insulation material. Material 3510 may include the same material composition as that of material 710. For example, both materials 3510 and 710 may include silicon oxide. FIG. 36 shows memory device 3000 after material 3510 has been planarized, e.g., through CMP or etch back to expose material 3320 of protrusions 3440. As shown in FIG. 36, protrusions 3440 along the y-direction of the same device structure 610 are insulated from each other by material 3510, and protrusions 3440 along the x-direction between different device structures 610 are insulated from each other by material 710. Each protrusion 3440 includes a bottom on material 520 and an opening that is shaped by perimeter 3441. As shown in FIG. 36, each protrusion 3440 includes perimeter 3441 surrounded by materials 3510 and 710 and includes a bottom on material 520. Since each protrusion 3440 is formed using masking structures (e.g., 550 of FIG. 5 and 950 of FIG. 9) with patterns that are positioned perpendicularly to each other perimeter 3441 may have a polygonal shape. As shown in FIG. 36, each protrusion 3440 also includes a sidewall defined by four sidewall portions 3626, 3627, 3628, and 3629. Since each protrusion 3440 is formed using masking structures (e.g., 550 of FIG. 5 and 950 of FIG. 9) with patterns that are positioned perpendicularly to each other, each protrusion 3440 may also have a polygonal sidewall defined by sidewall portions 3626, 3627, 3628, and 3629, which are surrounded by materials 3510 and 710. Features of a memory cell, such as a diode may be formed in each protrusion 3340. FIG. 37 shows memory device 3000 after materials 3722 and 3724 have been formed in protrusions 3440. Materials 3220 and 3722 may directly contact materials 710 and 3510 at sidewall portions 3636, 3637, 3628, and 3629. Material 3722 may include p-type semiconductor material (e.g., p-type silicon). Materials 3220 and 3722 may form at least a part of a diode. Material 3724 may include conductive material with a resistivity lower than the resistivity ofmaterials 3220 and 3722. Material 3724 may include a material 324 of FIG. 3. 
Forming materials 3722 and 3724 may include inserting (e.g., implanting) p-type impurities into material 3220 to form material 3722 and performing a silicidation process after the p-type impurity is inserted into material 3722 to form material 3724. FIG. 38 shows memory device 3000 after material 3899 has been formed. FIG. 38 shows each material 3899 having a cylindrical structure as an example. Material 3899 may be formed with a different structure. Forming material 3899 may include depositing a chalcogenide material over material 3724 followed by an additional process (e.g., dry etch) to form material 3899. A conductive material may be formed over the chalcogenide material before the additional process that forms material 3899 is performed, so that material 3899 can be protected during the additional process. Alternatively, forming material 3899 may include depositing an insulation material over materials 710, 3510, and 3724, forming vias in the insulation material, and then depositing material 3899 into the vias. For clarity, FIG. 38 omits the insulation material and the vias. Material 3899 may form memory elements of memory cells 3801. FIG. 39 shows memory device 3000 after additional features of memory device 3000 have been formed. For example, conductive contacts 3980 and 3981 and conductive lines 3904 and 3906 have been formed. In some cases, material for at least a portion (e.g., bottom portion) of conductive contacts 3980 may be formed in the same vias (described above with reference to FIG. 38) where material 3899 is formed. In FIG. 39, conductive lines 3904 and lines 3906 may correspond to conductive lines 204 and conductive lines 206, respectively, of FIG. 2. Conductive lines 3904 and conductive lines 3906 of FIG. 39 may also correspond to conductive lines 304 and conductive lines 306, respectively, of FIG. 3. FIG. 40 through FIG. 49 show processes of forming a memory device 4000 without forming epitaxial silicon to form diodes of the memory device, according to an embodiment of the invention. Some of the processes used to form memory device 500 described above with reference to FIG. 5 through FIG. 16 may be used to from memory device 4000 described herein with reference to FIG. 40 through FIG. 49. Thus, for simplicity, similar materials and featuresbetween memory device 500 of FIG. 5 through FIG. 16 and memory device 4000 of FIG. 40 through FIG. 49 are given the same reference numbers. FIG. 40 shows memory device 4000 having a substrate 4005 and multiple materials 4010, 4020, and 4021 formed in or over substrate 4005. Substrate 4005 may initially include p-type semiconductor (e.g., silicon) material. Forming material 4020 and 4021 may include inserting (e.g., implanting) n-type impurities into a portion (e.g., top portion) of substrate 4005. Thus, material 4020 and 4021 may include an n-type semiconductor material. The remaining portion (e.g., bottom portion) of substrate 4005, which includes material 4010, that has not been inserted with n-type impurities may remain a p- type semiconductor material. Different concentration of n-type impurities may be used such that materials 4020 and 4021 may have different impurity implantation (or doping). For example, the concentration of n-type impurities may be controlled such that material 4020 may have a greater impurity concentration than that of material 4021. FIG. 40 also shows an x-direction, a y-direction perpendicular to the x- direction, and a z-direction perpendicular to both the x-direction and the y- direction. 
As shown in FIG. 40, materials 4010, 4020, and 4021 may form different layers, one layer over (or on) one or more other layers with respect to the z-direction. FIG. 40 also shows a masking structure 550 formed over materials 4010, 4020, and 4021. As shown in FIG. 40, masking structure 550 has a pattern defined by masking portions 551 and openings 552 with width 553 and length 554. FIG. 41 shows memory device 4000 after device structures 4110 and trenches 4115 have been formed and masking structure 550 (FIG. 40) has been removed. A process such as etching (e.g., dry etch) may be used to remove portions of materials 4021, 4020, and 4010 at openings 552 (FIG. 40). The remaining portions of materials 4021, 4020, and 4010 (portions underneath masking portions 551) form device structures 4110. Each device structure 4110 has a width 4111 extending in the x-direction and a length 4112 extending in the y-direction. Length 4112 is substantially greater than width 4111. Each trench 994115 may have a bottom on material 4010, a width 4116 extending in the x- direction, and a length 4117 extending in the y-direction. Length 4117 is substantially greater than width 4116. FIG. 42 shows memory device 4000 after a material 4210 has been formed, e.g., by deposition, to fill trenches 4115 to insulate device structures 4110 from each other. Material 4210 may include insulation material, e.g., silicon oxide or other insulation material. A process, such as CMP, may be used to planarize material 4210 after it is formed to obtain the structure of memory device 4000 of FIG. 42. FIG. 43 shows memory device 4000 after a masking structure 950 has been formed over device structures 4110 and material 4210. As shown in FIG. 43, masking structure 950 includes a pattern defined by masking portions 951 and openings 952 with width 953 and length 954. FIG. 44 shows memory device 4000 after trenches 4415 have been formed and masking structure 950 (FIG. 43) has been removed. A removal process such as etching (e.g., dry etch) may be used to remove portions of material 4210 at openings 952 and a portion of material 4021 or portions of each of materials 4021 and 4020 at openings 952 and to form trenches 4415 in FIG. 44. Each trench 4415 may have a bottom on material 4020, a width 4406 extending in the y-direction, and a length 4407 extending in the x-direction. Length 4407 is substantially greater than width 4406. As shown in FIG. 44, the greater dimension (length 4407) of each trench 4415 extends in the x-direction through device structures 4110 to form protrusions 4440 having a perimeter 4441. Since protrusions 4440 are formed using masking structures 550 (FIG. 5) and 950 (FIG. 9) with patterns that are positioned perpendicularly to each other, perimeter 4441 of FIG. 44 may have a polygonal shape. FIG. 45 shows memory device 4000 after a material 4510 has been formed, e.g., by deposition, to fill trenches 4415 to insulate protrusions 4440 from each other in the y-direction. Material 4510 may include insulation material, e.g., silicon oxide or other insulation material. Material 4510 may include the same material composition as that of material 4210. For example, both materials 4510 and 4210 may include silicon oxide.FIG. 46 shows memory device 4000 after material 4510 has been planarized, e.g., through CMP or etch back to expose material 4021 of protrusions 4440. As shown in FIG. 
46, protrusions 4440 along the y-direction of the same device structures 4110 are insulated from each other by material 4510, and protrusions 4140 along the x-direction between different device structures 4110 are insulated from each other by material 4210. Each protrusion 4440 includes a bottom on material 4020 and an opening that is shaped by perimeter 4441. As shown in FIG. 46, each protrusion 4440 includes perimeter 4441 surrounded by materials 4510 and 4210 and includes a bottom on material 4020. Since each protrusion 4440 is formed using masking structures (e.g., 550 of FIG. 40 and 950 of FIG. 43) with patterns that are positioned perpendicularly to each other perimeter 4441 may have a polygonal shape. Each protrusion also includes a sidewall defined by four sidewall portions 4626, 4627, 4628, and 4629. Since each protrusion 4440 is formed using masking structures (e.g., 550 of FIG. 40 and 950 of FIG. 43) with patterns that are positioned perpendicularly to each other, each protrusion 4440 may also have a polygonal sidewall defined by sidewall portions 4626, 4627, 4628, and 4629, which are surrounded by materials 4510 and 4210. Features of a memory cell, such as a diode may be formed in each protrusion 4440. FIG. 47 shows memory device 4000 after materials 4722 and 4724 have been formed in protrusions 4440. Materials 4722 and 4724 may directly contact materials 4210 and 4510 at sidewall portions 4646, 4647, 4628, and 4629. Material 4722 may include p-type semiconductor material (e.g., p-type silicon). Materials 4021 and 4722 may form at least a part of a diode. Material 4724 may include conductive material with a resistivity lower than the resistivity of materials 4021 and 4722. Material 4724 may include conductive material such as material 324 of FIG. 3. Forming materials 4722 and 4724 may include inserting (e.g., implanting) p-type impurities into material 4021 to form material 4722 and performing a silicidation process after the p-type impurity is inserted into material 4021 to form material 4724. FIG. 48 shows memory device 4000 after material 4899 has been formed. FIG. 48 shows each material 4899 having a shape with a cylindrical structure asan example. Material 4899 may be formed with a different structure. Forming material 4899 may include depositing a chalcogenide material over material 4724 followed by an additional process (e.g., dry etch) to form material 4899. A conductive material may be formed over the chalcogenide material before the additional process that forms material 4899 is performed, so that material 4899 can be protected during the additional process. Alternatively, forming material 4899 may include depositing an insulation material over materials 4210, 4510, and 4724, forming vias in the insulation material, and then depositing material 4899 into the via. For clarity, FIG. 48 omits the insulation material and the vias. Material 4899 may form memory elements of memory cells 4801. FIG. 49 shows memory device 4000 after additional features of memory device 4000 have been formed. For example, conductive contacts 4980 and 4981 and conductive lines 4904 and 4906 have been formed. In some cases, material for at least a portion (e.g., bottom portion) of conductive contacts 4980 may be formed in the same vias (described above with reference to FIG. 48) where material 4899 is formed. Conductive lines 4904 and conductive lines 4906 may correspond to conductive lines 204 and conductive lines 206, respectively, of FIG. 2. 
Conductive lines 4904 and conductive lines 4906 of FIG. 49 may also correspond to conductive lines 304 and conductive lines 306, respectively, of FIG. 3. FIG. 50 through FIG. 58 show processes of forming a memory device 5000 with conductive materials simultaneously formed over diodes and between diodes of the memory device, according to an embodiment of the invention. Some of the processes used to form memory device 500 (FIG. 5 through FIG. 16) and memory device 2500 (FIG. 25 and FIG. 26) may be used to form memory device 5000 described herein with reference to FIG. 50 through FIG. 58. Thus, for simplicity, similar materials and features among memory device 500 (FIG. 5 through FIG. 16), memory device 2500 (FIG. 25 and FIG. 26), and memory device 5000 of FIG. 50 through FIG. 58 are given the same reference numbers. In FIG. 50, the structure of memory device 5000 up to this point may be formed using processes similar to those described above with reference to FIG. 25. As shown in FIG. 50, masking structure 950 includes openings 540 suchthat a first portion 5051 of device structures 610 are exposed at openings 540 and a second portion 5052 of device structures 610 is underneath masking structure 950. In FIG. 51, the structure of memory device 5000 up to this point may be formed using processes similar to those described above with reference to FIG. 26. As shown in FIG. 51, recesses 2625 have been formed and masking structure 950 (FIG. 50) has been removed. Each recess 2625 includes a bottom on material 520. FIG. 52 shows memory device 5000 after materials 5220 and 5222 have been formed in recesses 2625. Materials 5220 and 5222 may form at least a portion of a diode. Materials 5220 and 5222 may be formed by processes similar to processes used to form materials 1420 and 1422, respectively, of FIG. 14, such that material 5220 of FIG. 52 may include n-type semiconductor material and material 5222 may include p-type semiconductor material. FIG. 53 shows memory device 5000 after materials 540 and 530 (FIG. 52) have been removed by a process such as etching (e.g., dry etch or wet etch). The removal of materials 540 and 530 creates openings 5325. FIG. 54 shows memory device 5000 after spacers 5454 have been formed in openings 5325. Spacers 5454 may include an insulation material, such as a silicon based material (e.g., silicon oxide). FIG. 55 shows memory device 5000 after conductive materials 5524 and 5530 have been formed. Materials 5524 and 5530 can be the same material and may include a material with a resistivity lower than the resistivity of materials 5220 and 5222. For example, materials 5524 and 5530 may include cobalt suicide or nickel suicide. Since materials 5524 and 5530 can be the same material, they can be formed simultaneously. For example, after spacers 5454 are formed, a silicidation process may be performed to simultaneously form materials 5524 and 5530. Materials 5524 and 5530 may include characteristics similar to those of materials 314 and 317 described above with reference to FIG. 3. FIG. 56 shows memory device 5000 after a material 5610 has been formed, e.g., by deposition, to fill openings 5325. Material 5610 may include insulation material, e.g., silicon oxide or other insulation material. Material5610 may include the same material composition as that of material 710. For example, both materials 5610 and 710 may include silicon oxide. FIG. 
57 shows memory device 5000 after material 5610 has been planarized, e.g., through CMP or etch back, to expose spacers 5424 and material 5524. FIG. 58 shows memory device 5000 including memory cells 5801 after formation of additional features of memory device 5000, such as electrodes 5811 and 5812, memory elements 5899, conductive contacts 5880 and 5881, and conductive lines 5804 and 5806. Conductive lines 5804 and conductive lines 5806 may correspond to conductive lines 204 and conductive lines 206, respectively, of FIG. 2. Conductive lines 5804 and conductive lines 5806 of FIG. 58 may also correspond to conductive lines 304 and conductive lines 306, respectively, of FIG. 3. As shown in FIG. 58, each memory cell 5801 may includes a diode formed by at least materials 5220 and 5222, one of electrodes 5811, one of electrodes 5812, one of memory elements 5899, and one of contacts 5880. Forming electrodes 5811 may include depositing a first insulation material over materials 5424 and 5610, forming vias in the first insulation material, and then depositing a conductive material into the first vias to form electrode 5811. For clarity, FIG. 58 omits the first insulation material and the first vias. Forming memory element 5899 may include depositing a second insulation material over electrodes 5811, forming second vias in the second insulation material, and then depositing a chalcogenide material into the second vias to form memory elements 5899. For clarity, FIG. 58 omits the second insulation material and the second vias. Forming electrodes 5812 may include depositing a conductive material over memory elements 5899. Electrodes 5812 can be formed in the same vias (second via described above) where memory elements 5899 are formed. Alternatively, electrodes 5812 and memory elements 5899 can be formed together. For example, forming electrodes 5812 and memory elements 5899 may include depositing a chalcogenide material (to form memory elements 5899) over electrodes 5811 and depositing a conductive material (to formelectrodes 5812) over the chalcogenide material. Then, an additional process (e.g., dry etch) may be performed to form individual mesas with each mesa including one memory element 5899 and one electrode 5812. The additional process may alternatively form (e.g., etch) the chalcogenide material and the conductive material into lines (instead of individual mesas) parallel to lines 5806, such that each line includes memory elements 5899 and electrodes 5812 of multiple memory cells. Contacts 5880 and 5881 and conductive lines 5880 and 5881 may be formed following the formation of electrodes 5812. FIG. 59 shows a partial 3D diagram of a memory device 5900 including a memory cell 5901, according to an embodiment of the invention. Memory device 5900 includes many memory cells arranged in rows and columns similar to the arrangements of the memory cells of the memory devices described above, such as memory device 300 (FIG. 3); memory device 500 (FIG. 16), memory device 1700 (FIG. 24); memory device 2500 (FIG. 29); memory device 3000 (FIG. 39); and memory device 4000 (FIG. 49). However, to focus on the differences between memory device 5900 of FIG. 59 and the other memory devices described above, FIG. 59 shows only one memory cell 5901 and some features of memory device 5900, such as materials 5920, 5922, and 5924; electrodes 5911 and 5912, memory element 5999; conductive contact 5980; and conductive line 5906. 
Electrodes 5911 and 5912 and memory element 5999 can be formed with processes similar to processes that form electrodes 5811 and 5812 and memory element 5899 of memory device 5000 of HG. 58. Materials 5920 and 5922 may form at least a portion of a diode of memory cell 5901. Materials 5920, 5922, and 5924 of memory device 5900 may correspond to materials 321, 322, and 324, respectively, of memory device 300 of FIG. 3; materials 1420, 1422, and 1424, respectively, of memory device 500 of HG. 16 and memory device 1700 of FIG. 24; materials 2820, 2822, and 2824, respectively, of memory device 2500 of FIG. 29; materials 3220, 3722, and 3724, respectively, of memory device 3000 of FIG. 29; and materials 4021, 4722, and 4724, respectively, of memory device 4000 of FIG. 49. Thus, materials 5920,5922, and 5924 of memory device 5900 of FIG. 59 can be formed with processes similar to processes that form the corresponding materials described above. Some of the features of memory device 5900 shown in FIG. 59 can be substituted for a portion of memory devices 300, 500, 1700, 2500, 3000, and 4000, such that each memory cell in each of memory devices 300, 300, 500, 1700, 2500, 3000, and 4000 described above can have a structure of memory cell 5901 shown in FIG. 59. For example, the features between material 5924 and contact 5980, such as electrodes 5911 and 5912 and memory element 5999 of memory device 5900, can be substituted for memory element 399 of memory device 300 of FIG. 3; memory element 1599 of memory device 500 of FIG. 16 and memory device 1700 of FIG. 24; memory element 2899 of memory device 2500 of HG. 29; memory element 3899 of memory device 3000 of FIG. 39; or memory element 4899 of memory device 4000 of FIG. 49. FIG. 60 shows a partial 3D diagram of a memory device 6000 including a memory cell 6001, according to an embodiment of the invention. Memory device 6000 includes many memory cells arranged in rows and columns similar to the arrangements of the memory cells of the memory devices described above, such as memory device 300 (FIG. 3), memory device 500 (FIG. 16); memory device 1700 (FIG. 24); memory device 2500 (FIG. 29); memory device 3000 (FIG. 39); and memory device 4000 (FIG. 49). However, to focus on the differences between memory device 6000 of FIG. 60 and the other memory devices described above, FIG. 60 shows only one memory cell 6001 and some features of memory device 6000, such as materials 6020, 6022, and 6024, electrodes 6011 and 6012, memory element 6099 (e.g., a chalcogenide material), conductive contact 6080, and conductive line 6006. Materials 6020 and 6022 may form at least a portion of a diode of memory cell 6001. Materials 6020, 6022, and 6024 of memory device 6000 may correspond to materials 321, 322, and 324, respectively, of memory device 300 of FIG. 3; materials 1420, 1422, and 1424, respectively, of memory device 500 of HG. 16 and memory device 1700 of FIG. 24; materials 2820, 2822, and 2824, respectively, of memory device 2500 of FIG. 29; materials 3220, 3722, and 3724, respectively, of memory device 3000 of FIG. 29; and materials 4021, 4722, and 4724, respectively, of memory device 4000 of FIG. 49. Thus, materials 6020,6022, and 6024 of memory device 6000 of FIG. 60 can be formed with processes similar to processes that form the corresponding materials described above. Electrode 6011 of memory device 6000 can be formed following the formation of materials 6020, 6022, 6024. As shown in FIG. 
59, the structure of electrode 6011 is similar to, but having a different material from, the memory element described above, such as memory element 399 of memory device 300 of FIG. 3, memory element 1599 of memory device 500 of FIG. 16 and memory device 1700 of FIG. 24, memory element 2899 of memory device 2500 of HG. 29, memory element 3899 of memory device 3000 of FIG. 39, or memory element 4899 of memory device 4000 of HG. 49. Thus, in FIG. 60, forming electrode 6011 may include depositing a conductive material (instead of a chalcogenide material) over material 6024. Some of the features of memory device 6000 shown in FIG. 60 can be substituted for a portion of memory devices 300, 500, 1700, 2500, 3000, and 4000, such that each memory cell in each of memory devices 300, 300, 500, 1700, 2500, 3000, 4000, and 5000 described above can have a structure of memory cell 6001 shown in FIG. 60. For example, the features between material 6024 and contact 6080, such as electrodes 6011 and 6012 and memory element 6099 of memory device 6000, can be substituted for memory element 399 of memory device 300 of FIG. 3; memory element 1599 of memory device 500 of FIG. 16 and memory device 1700 of FIG. 24; memory element 2899 of memory device 2500 of FIG. 29; memory element 3899 of memory device 3000 of FIG. 39; or memory element 4899 of memory device 4000 of FIG. 49. In another example, electrodes 6011 and 6012 and memory element 6099 of memory device 6000 can also be substituted for electrodes 5811 and 5812 and memory element 5899 of memory device 5000 of FIG. 58. One or more embodiments described herein include apparatus and methods having a memory device with diodes coupled to memory elements. Each diode may be formed in a recess of the memory device. The recess may have a polygonal sidewall. The diode may include a first material of a first conductivity type (e.g., n-type) and a second material of a second conductive type (e.g., p-type) formed within the recess. Other embodiments includingadditional apparatus methods are described above with reference to FIG. 1 through FIG. 16. The illustrations of apparatus such as memory devices 100, 200, 300, 500, 1700, 2500, 3000, 4000, 5000, 5900, and 6000, and memory cells 101, 201, 1501, 2801, 3801, 4801, 5801, 5901, and 6001 are intended to provide a general understanding of the structure of various embodiments and not a complete description of all the elements and features of the apparatus that might make use of the structures described herein. The apparatus of various embodiments may include or be included in electronic circuitry used in high-speed computers, communication and signal processing circuitry, memory modules, portable memory storage devices (e.g., thumb drives), single or multi-processor modules, single or multiple embedded processors, multi-core processors, data switches, and application-specific modules including multilayer, multi-chip modules. Such apparatus may further be included as sub-components within a variety of electronic systems, such as televisions, cellular telephones, personal computers (e.g., laptop computers, desktop computers, handheld computers, tablet computers, etc.), workstations, radios, video players, audio players (e.g., MP3 (Motion Picture Experts Group, Audio Layer 3) players), vehicles, medical devices (e.g., heart monitor, blood pressure monitor, etc.), set top boxes, and others. 
The above description and the drawings illustrate some embodiments of the invention to enable those skilled in the art to practice the embodiments of the invention. Other embodiments may incorporate structural, logical, electrical, process, and other changes. In the drawings, like features or like numerals describe substantially similar features throughout the several views. Portions and features of some embodiments may be included in, or substituted for, those of others. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The Abstract is provided to comply with 37 C.F.R. §1.72(b) requiring an abstract that will allow the reader to quickly ascertain the nature and gist of the technical disclosure. The Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. |
An integrated circuit fabrication process to pattern reduced feature size is disclosed herein. The process includes reducing the width of a patterned area of a patterned photoresist layer provided over a substrate before patterning the substrate. The patterned area is representative of a feature to be formed in the substrate. The width of the feature is reduced by an electron beam mediated heating and flowing of select areas of the patterned photoresist layer. |
What is claimed is: 1. A method of fabricating reduced feature size in an integrated circuit, the integrated circuit including a patterned photoresist layer provided over a substrate, the patterned photoresist layer being patterned with radiation at a lithographic wavelength and in accordance with a pattern on a mask or reticle, the method comprising the steps of:providing an electron beam to at least one area of an aperture included in the patterned photoresist layer, the aperture having sidewalls and a width; transforming the sidewalls in response to the electron beam, thereby creating shrink areas, wherein the electron beam at least partially liquefies or plasticizes molecules on the sidewalls to form the shrink areas; and forming a feature in the substrate in accordance with the transformed sidewalls of the aperture. 2. The method of claim 1, wherein providing an electron beam includes providing the electron beam at a dose in the range of approximately 500-2000 [mu]C/cm<2>.3. The method of claim 1, wherein providing an electron beam includes providing the electron beam at an accelerating voltage in the range of approximately 5-20 keV and a current in the range of approximately 2-5 mA.4. The method of claim 1, wherein the transforming step includes increasing the temperature of the sidewalls.5. The method of claim 1, wherein the transforming step includes at least partially liquefying the sidewalls.6. The method of claim 5, wherein the transforming step includes moving the at least partially liquefied sidewalls.7. The method of claim 1, wherein the transforming step reduces the width of the aperture.8. The method of claim 7, wherein the forming step includes forming the feature having a width smaller than the width of the aperture.9. The method of claim 1, wherein the providing step includes flood exposing the aperture with the electron beam.10. An integrated circuit fabrication process, the process comprising:reducing a dimension associated with a patterned area in a photoresist layer provided over a substrate, the patterned area representative of a feature; and forming the feature in the substrate, the feature having the reduced dimension associated with the patterned area, wherein the reducing step includes having an electron beam incident on at least a part of the patterned area, the electron beam forming shrink areas by bringing molecules on sidewalls of the feature to a higher temperature. 11. The process of claim 10, wherein the reducing step includes at least partially liquefying and moving at least a part of the patterned area.12. The process of claim 11, wherein the at least a part of the patterned area includes sidewalls.13. The process of claim 10, wherein the feature has a dimension smaller than a dimension of one lithographic feature.14. The process of claim 10, wherein the feature is selected from a group including a space, a contact hole, a conductive via, an interconnect, and a trench.15. 
A process of fabricating reduced feature size in an integrated circuit fabrication system, the integrated circuit fabrication system including a mask or reticle including an image of a feature, a photoresist layer provided over a semiconductor substrate and including an aperture representative of the image of the feature, the aperture having a width and sidewalls, and a source of electromagnetic radiation, the process comprising:reducing a dimension associated with the aperture of the photoresist layer; and forming the feature in the semiconductor substrate, the feature having the reduced dimension associated with the aperture, wherein reducing a dimension includes having an electromagnetic beam radiation incident on at least a part of the aperture wherein the reduced dimension is formed by shrink areas, the shrink areas are formed when the electromagnetic beam energy heats molecules on the part of the aperture. 16. The process of claim 15, wherein reducing a dimension includes the electromagnetic radiation being a flood electron beam.17. The process of claim 16, wherein the flood electron beam has a dose in the range of approximately 500-2000 [mu]C/cm<2>.18. The process of claim 16, wherein the flood electron beam is generated at an accelerating voltage in the range of approximately 5-20 keV.19. The process of claim 16, wherein the flood electron beam is generated at a current in the range of approximately 2-5 mA.20. The process of claim 16, wherein reducing a dimension includes the flood electron beam at least partially liquefying the sidewalls.21. The process of claim 20, wherein reducing a dimension includes the sidewalls flowing to form modified sidewalls having a downwardly sloping shape.22. The process of claim 15, wherein forming the feature in the semiconductor substrate includes forming the feature having a width smaller than the width of the aperture. |
CROSS-REFERENCE TO RELATED APPLICATIONSThe present invention is related to U.S. application Ser. No. 09/771,820 by Uzodinma Okoroanyanwu entitled "Process for Reducing the Pitch of Contact Holes, Vias, and Trench Structures in Integrated Circuits" filed on an even date herewith and assigned to the Assignee of the present application.FIELD OF THE INVENTIONThe present invention relates generally to integrated circuit (IC) features. More particularly, the present invention relates to a method and an apparatus for fabricating reduced contact holes, vias, and trench features in integrated circuits.BACKGROUND OF THE INVENTIONThe semiconductor or integrated circuit (IC) industry aims to manufacture integrated circuits (ICs) with higher and higher densities of devices on a smaller chip area to achieve greater functionality and to reduce manufacturing costs. This desire for large scale integration has led to continued shrinking of circuit dimensions and device features. The ability to reduce the size of structures, such as, gate lengths in field-effect transistors and the width of conductive lines, is driven by lithographic performance.Features, such as, contacts and vias, provide a conducting path for electrically connecting one device to another or for electrically connecting circuits on various layers of the chip. As the number of devices per unit area have increased, so has the number of contacts and vias necessary to route signals and power throughout the chip. This, in turn, has required a decrease in feature sizes, including contacts and vias, and in feature pitches.Feature size has been steadily decreasing with the use of shorter lithographic or exposure wavelengths and resolution enhancement techniques, such as, phase shifting masks and off-axis illumination. However, even with such lithographic techniques, the feature size is usually constrained to a dimension approximately equal to the lithographic wavelength divided by two times the numerical aperture (NA) of the lens of the exposure system. For example, for 193 nanometer (nm) lithographic systems with NA=0.63, the minimum feature size is approximately 150 nm.Thus, there is a need for a process of fabricating an integrated circuit having a feature size smaller than the lithographic wavelength associated therewith. There is a further need for a process of fabricating an integrated circuit having reduced dimensions of contacts, vias, lines, spaces, interconnects, gates, doped regions, and/or etched regions than is achievable using conventional lithographic systems. There is still a further need for a process of fabricating an integrated circuit having a reduced feature size that utilizes existing equipment and materials and does not significantly decrease throughout.BRIEF SUMMARY OF THE INVENTIONAn exemplary embodiment relates to a method of fabricating reduced feature size in an integrated circuit. The integrated circuit includes a patterned photoresist layer provided over a substrate. The patterned photoresist layer being patterned with radiation at a lithographic wavelength and in accordance with a pattern on a mask or reticle. The method includes providing an electron beam to at least one area of an aperture included in the patterned photoresist layer. The aperture has sidewalls and a width. 
The method further includes transforming the sidewalls in response to the electron beam, and forming a feature in the substrate in accordance with the transformed sidewalls of the aperture.Another exemplary embodiment relates to an integrated circuit fabrication system. The system includes a mask or reticle including an image of a feature. A photoresist layer provided over a semiconductor substrate including an aperture representative of the image of the feature, the aperture having a width and sidewalls. The system further includes a source of electromagnetic radiation configured to form a reduced width of the aperture by moving the sidewalls.Still another exemplary embodiment relates to an integrated circuit fabrication process. The process includes reducing a width associated with a patterned area in a photoresist layer provided over a substrate. The patterned area is representative of a feature. The process further includes forming the feature in the substrate. The feature has the reduced width associated with the patterned area. The reducing step includes having an electron beam incident on at least a part of the patterned area.BRIEF DESCRIPTION OF THE DRAWINGSThe preferred embodiment will become more fully understood from the following detailed description, taken in conjunction with the accompanying drawings, wherein like reference numerals denote like elements, in which:FIG. 1 is a cross-sectional view of a portion of an integrated circuit in accordance with an exemplary embodiment, showing a reduced feature size;FIG. 2 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 1, showing an exposure step;FIG. 3 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 2, showing a developing step;FIG. 4 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 3, showing an electron beam exposure step;FIG. 5 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 4, showing an etching step;FIG. 6 is a top planar view of an SEM image of nominal contact holes;FIG. 7 is a cross-sectional view of an SEM image of the nominal contact holes of FIG. 6;FIG. 8 is a top planar view of an SEM image of reduced contact holes formed from the electron beam exposure step;FIG. 9 is a cross-sectional view of an SEM image of the reduced contact holes of FIG. 8;FIG. 10 is a top planar view of an SEM image of another reduced contact holes formed from the electron beam exposure step; andFIG. 11 is a cross-sectional view of an SEM image of the another reduced contact holes of FIG. 10.DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTSWith reference to FIGS. 1-5, an exemplary embodiment of an advantageous process for fabricating reduced feature size in an integrated circuit (IC) will be described. The advantageous process is preferably implemented to fabricate reduced features such as, contacts, vias, interconnects, and/or trenches. The advantageous process permits such features to be reduced in size by up to four times smaller than the size of features achievable using conventional lithographic techniques and systems. The advantageous process also provides fabrication of reduced feature pitch in an IC.With reference to FIG. 1, a portion 10 of an integrated circuit (IC) includes etched areas 12 in a substrate 16. Each of etched areas 12 can be a feature, such as, a contact hole, a via, a trench, or a space in portion 10. Each of etched areas 12 is similar to each other and has a width 19. 
The distance from the center of a given etched area 12 to the center of an adjacent etched area 12 is a conventional pitch 18.In FIG. 1, width 19 of each of etched areas 12 is approximately three times smaller than the dimension of an aperture 34 (i.e., a conventional aperture) (FIG. 3). For example, in a 193 nm lithography system, aperture 34 may be approximately 150 nm while each of etched areas or aperture 12 may approach 50 nm. Alternatively, it is contemplated that each of etched areas of aperture 12 may be less than half the dimension of aperture 34. Among others, the feature or structure size, the dimension of conventional pitch 18, the characteristics of the photoresist material used to fabricate etched areas 12 and/or the density of features or structures provided on a mask used to fabricate etched areas 12 determine the possible minimum width 19 for a given IC.The fabrication of etched areas 12 will be described with respect to FIGS. 2-5. With reference to FIG. 2, an exposure step comprising the advantageous process is performed on portion 10 of the IC. Portion 10 includes a photoresist layer 22 provided on substrate 16. Substrate 16 can be an IC wafer, a semiconductive material, an insulative material, a conductive material, layers above any of the listed materials, or a base layer. Substrate 16 can be an industry standard silicon wafer. Substrate 16 can include one or more layers of materials and/or features, such as, lines, interconnects, vias, doped regions, etc., and can further include devices, such as, transistors, microactuators, diodes, etc. Substrate 16 is not described in a limiting fashion.Photoresist layer 22 is selected to have photochemical reactions in response to electromagnetic radiation 24 from a light source (not shown). Photoresist layer 22 is preferably a chemically amplified, positive tone, photoresist designed for 248 nm, 193 nm, 157 nm, or 13.4 nm exposure, applied to substrate 16 by spin coating. Alternatively, photoresist layer 22 may be a negative resist material applied to substrate 16.Radiation 24 is provided by a single light source or multiple light sources at various wavelength ranges depending on the material composition of photoresist layer 22. Radiation 24 can be electromagnetic energy emitted from an excimer laser, an ND:YAG laser, a frequency multiplied ND:YAG laser, a He-Ne scanning laser, or other light source. For example, radiation 24 can be radiation having a wavelength in the ultraviolet (UV), vacuum ultraviolet (VUV), deep ultraviolet (DUV), or extreme ultraviolet (EUV) range.Radiation 24 is provided via a mask or reticle 26 in accordance with a pattern on mask or reticle 26 to photoresist layer 22. Mask or reticle 26 is a conventional mask or reticle including a transparent substrate 28 (e.g., fused silica) and an opaque or absorbing material 30 (e.g., chromium). Mask 26 provides a pattern of desirable features, such as, lines, spaces, contacts, and/or vias, using material 30. Although not shown, other components or equipment can be provided between radiation 24 and portion 10 to transfer the image on mask 26 to photoresist layer 22, such as, an optical system (e.g., one or more lens assemblies).In one embodiment, photoresist layer 22 is a positive resist material. Layer 22 and radiation 24 are selected to transfer the pattern or image provided on mask 26 to layer 22. Areas of layer 22 where radiation 24 is incident thereon (i.e., exposed areas 32) undergo a photochemical reaction and become soluble. In a developing step (FIG. 
3), exposed areas 32 are removed from portion 10, leaving behind apertures 34. The developing step utilizes a wet or dry developer. For example, a solvent developer, such as, tetramethylammonium hydroxide, can be selected to develop the material comprising layer 22.In one embodiment, apertures 34 are circular, square, or rectangularly shaped. Each of apertures 34 is approximately the dimension of one lithographic feature that is consistent with the technology node.After the developing step, an electron-beam exposure step is performed on portion 10 (FIG. 4). A flood electron source, preferably of a cold cathode type, generates electrons from the energetic input of ions to perform the electron-beam exposure. Patterned layer 22 is flood electron beam exposed to create shrink areas 36. The electron beam interacts with the molecules comprising layer 22, in particular, the molecules comprising the sidewalls of apertures 34, to the extent that these molecules are brought to a higher temperature sufficient to at least partially liquefy or plasticize layer 22. The sidewalls of each of apertures 34 in this liquefied or viscous state flow downward, due to gravity, to decrease the width of apertures 34.Through careful selection of electron beam parameters, and additionally through selection of the material comprising layer 22, the melt and flow conditions associated with generation of areas 36 can be controlled. For example, the electron beam typically needs to come in contact with a given molecule in layer 22 to cause the heating of that molecule. Thus, the penetration distance or depth of the electron beam should be considered in generating areas 36.By varying the electron beam energy or the accelerating voltage, beam current, dose, processing gas, and/or substrate temperature, it is possible to control the penetration depth of the electron beam such that a curing depth and response of layer 22 can be controlled. The penetration depth of the electrons, comprising the electron beam, into a target material (i.e., layer 22) is a function of the electron beam energy, and the relationship is approximately given by: where Rg is the penetration depth in microns, Va is the accelerating voltage or energy in keV, and d is the density of the target material in g/cm<3>.Although not shown, formation of areas 36 may further include additional processing steps and/or equipment to ensure that desired areas 36 are formed in layer 22 without otherwise distorting the pattern transferred to layer 22 or substrate 16. For example, the sidewalls of apertures 34 may be cooled once flowing has commenced to specify the shape of areas 36. It is also contemplated that exposures other than electron beam exposure may be used to form areas 36, as long as the etched areas 12 etched from the pattern defined by areas 36 can be achieved, as described herein.Areas 36 represent distortions to the morphology of apertures 34. These distortions, however, advantageously reduce the width of each of apertures 34, especially at the bottoms of apertures 34 (i.e., at the interface of layer 22 and substrate 16). In a cross-sectional view of portion 10, a pair of areas 36 (e.g., a left sidewall 38 and a right sidewall 40) form new sidewalls for each of apertures 34. Because left and right sidewalls 38, 40 both slope downwardly toward the bottom of a given aperture 34, the hole width at the top of walls 38, 40 is larger than the hole width at the bottom of walls 38, 40. 
For example, if the width of a given aperture 34 prior to the electron beam exposure step is 150 nm, then the bottom width of a given aperture 34 with left and right sidewalls 38, 40 becomes 130 nm. Accordingly, after etching, the dimension of each of etched areas 12 is in the range of 120-130 nm and can approach a critical dimension of sub-100 nm.Utilizing the pattern defined by areas 36 of layer 22, an etching step performed on portion 10 forms etched areas 12 (FIG. 5). Areas 36 effectively pattern reduced or shrunken features, such as, etched areas 12 into substrate 16. For example, etched areas 12 can be contact holes, conductive vias, or trench features utilized in ICs or in the manufacture of ICs. The width or dimension of each of etched areas 12 is smaller than the width of any of apertures 34 without areas 36 (i.e., conventional holes patterned using merely a mask or reticle, such as mask 26, and using conventional lithographic techniques). The width or dimension of each of etched areas 12 is determined by the bottom width of aperture 34, as further defined by areas 36 (e.g., left and right sidewalls 38, 40).Shown in FIGS. 6-11 are scanning electron microscope (SEM) images of contact holes formed using the electron beam exposure step. FIG. 6 is a top planar SEM image of nominal contact holes 50, each having a diameter of approximately 150 nm. FIG. 7 is a cross-sectional SEM image of nominal contact holes 50 of FIG. 6. Nominal contact holes 50 were formed without an electron beam treatment. In FIGS. 8-9, there are shown contact holes 52 formed by electron beam irradiation, the electron beam having the following parameters: accelerating voltage=20 keV, beam current=4 mA, and dose=750 [mu]C/cm<2>. Under these conditions, it is possible to shrink the diameter or width of each of contact holes 52 from a nominal value of 150 nm to 130 nm, as shown. In FIGS. 10-11, contact holes 54 are formed by irradiating with an electron beam having the following parameters: accelerating voltage=20 keV, beam current=5 mA, and dose=1000 [mu]C/cm<2>. Under these conditions, the diameter or width of each of contact holes 54 is reducible from a nominal value of 150 nm to approximately 84 nm, as shown.In one embodiment, electron beam parameters suitable to form areas 36, to form the reduced width or critical dimensions associated with areas 12, are: accelerating voltage=approximately 5-20 keV, beam current=approximately 2-5 mA, and dose=approximately 500-2000 [mu]C/cm<2>. Because the dimension of each of areas 12 is dependent on the parameters of the applied electron beam in the electron beam exposure step, it is possible to achieve a wide range of reduced dimensions, as desired, and it is even possible to completely close apertures 34, i.e., prevent formation of areas 12 via apertures 34, with an electron beam having aggressive enough beam parameters.It is understood that while the preferred embodiment and specific examples are given, they are for the purpose of illustration only and are not limited to the precise details described herein. For example, features other than contact holes, or vias, such as, trenches, can benefit from the advantageous process. Various modifications may be made in the details within the scope and range of the equivalence of the claims without departing from what is claimed. |
In one embodiment, a processor mode is provided for guest software. The processor mode enables the guest software to operate at a privilege level intended by the guest software. When the guest software attempts to perform an operation restricted by processor mode, the processor mode is exited to transfer control of the operation to a virtual-machine monitor, which runs outside this processor mode. |
CLAIMSWhat is claimed is: 1. A method comprising: running guest software in a processor mode that enables the guest software to operate at a privilege level intended by the guest software; and responsive to an attempt of the guest software to perform an operation restricted by said processor mode, exiting said processor mode to transfer control over the operation to the VMM running outside said processor mode.2. The method of claim 1 further comprising: responding to the operation; and transferring control over the operation to the guest software by entering said processor mode.3. The method of claim 2 wherein entering said processor mode includes loading processor state expected by the guest software.4. The method of claim 1 wherein exiting said processor mode further comprises: saving processor state used by the guest software; and loading processor state required by the VMM. 5. The method of claim 1 wherein exiting said processor mode further comprises automatically transferring from an address space associated with the guest software to an address space associated with the VMM. 6. The method of claim 1 further comprising maintaining a flag in a processor control register to indicate whether the processor is in said processor mode.7. The method of claim 1 further comprising reporting an ability of a processor to support said processor mode using one of a plurality of reserved feature bits that are returned in a processor register.8. The method of claim 1 wherein exiting said processor mode comprises generating one of a plurality of interrupts and exceptions in response to the attempt of the guest software to perform the operation restricted by said processor mode.9. The method of claim 8 wherein generating one of the plurality of interrupts and exceptions further includes: identifying the attempt of the guest software to perform the operation restricted by said processor mode; and determining that the attempt of the guest software is potentially successful. 10. The method of claim 8 further comprising: maintaining a redirection bitmap for the plurality of the interrupts and exception, the redirection bitmap indicating whether each of the plurality of the interrupts and exceptions is allowed to be handled by the guest software; and consulting the redirection bitmap to determine whether to exit said processor mode.11. The method of claim 8 further comprising: identifying an attempt of the guest software to modify an interrupt flag; and modifying the interrupt flag if the interrupt flag does not control masking of interrupts.12. The method of claim 8 further comprising: identifying an attempt of the guest software to modify an interrupt flag; and preventing the attempt of the guest software to modify the interrupt flag.13. The method of claim 12 wherein preventing the attempt of the guest software to modify the interrupt flag includes providing a shadow interrupt flag for modifications by the guest software. 14. The method of claim 12 wherein preventing the attempt of the guest software to modify the interrupt flag includes generating one of the plurality of interrupts and exceptions in response to the attempt of the guest software to modify the interrupt flag.15. 
A system comprising: a memory; and a processor, coupled to the memory, to run guest software in a processor mode that enables the guest software to operate at a privilege level intended by the guest software, to identify an attempt of the guest software to perform an operation restricted by said processor mode, and to exit said processor mode, in response to the attempt, to transfer control over the operation to a virtual-machine monitor (VMM) running outside said processor mode.16. The system of claim 15 wherein the processor is to re-enter said processor mode after the VMM responds to the operation.17. The system of claim 16 wherein the processor is to load processor state expected by the guest software when re-entering said processor mode.18. The system of claim 15 wherein the processor is to save processor state used by the guest software and to load processor state required by the VMM when exiting said processor mode. 19. The system of claim 15 wherein exiting said processor mode further comprises automatically transferring from an address space associated with the guest software to an address space associated with the VMM.20. The system of claim 15 wherein the processor is to maintain a flag in a processor control register to indicate whether the processor is in said processor mode.21. The system of claim 15 wherein the processor is to reporting an ability to support said processor mode using one of a plurality of reserved feature bits that are returned in a processor register.22. The system of claim 15 wherein the processor is to generate one of a plurality of interrupts and exceptions in response to the attempt of the guest software to perform the operation restricted by said processor mode.23. The system of claim 22 wherein the processor is to generate one of the plurality of interrupts and exceptions upon determining that the attempt of the guest software to perform the operation restricted by said processor mode is potentially successful. 24. The system of claim 22 wherein the processor is to consult a redirection bitmap to determine whether to exit said processor mode, the redirection bitmap indicating whether each of the plurality of the interrupts and exceptions is allowed to be handled by the guest software.25. The system of claim 22 wherein the processor is to identify an attempt of the guest software to modify an interrupt flag and to modify the interrupt flag if the interrupt flag does not control masking of interrupts.26. The system of claim 22 wherein the processor is to identify an attempt of the guest software to modify an interrupt flag and to prevent the attempt of the guest software to modify the interrupt flag.27. The system of claim 26 wherein the processor is to prevent the attempt of the guest software to modify the interrupt flag by providing a shadow interrupt flag for modifications by the guest software.28. A computer readable medium that provides instructions, which when executed on a processor, cause said processor to perform operations comprising: running guest software in a processor mode that enables the guest software to operate at a privilege level intended by the guest software; and responsive to an attempt of the guest software to perform an operation restricted by said processor mode, exiting said processor mode to transfer control over the operation to the VMM running outside said processor mode.29. 
The computer readable medium of claim 28 providing further instructions causing the processor to perform operations comprising: responding to the operation; and transferring control over the operation to the guest software by entering said processor mode.30. The computer readable medium of claim 28 comprising further instructions causing the processor to perform operations comprising: maintaining a redirection bitmap for the plurality of the interrupts and exception, the redirection bitmap indicating whether each of the plurality of the interrupts and exceptions is allowed to be handled by the guest software; and consulting the redirection bitmap to determine whether to exit said processor mode. |
NEW PROCESSOR MODE FOR LIMITING THE OPERATION OF GUESTSOFTWARE RUNNING ON A VIRTUAL MACHINE SUPPORTED BY AVIRTUAL MACHINE MONITORField of the InventionThe present invention relates generally to virtual machines, and more specifically to providing processor support for a virtual-machine monitor. Background of the InventionA conventional virtual-machine monitor (VMM) typically runs on a computer and presents to other software the abstraction of one or more virtual machines. Each virtual machine may function as a self-contained platform, running its own"guest operating system" (i. e., an operating system hosted by theVMM). The guest operating system expects to operate as if it were running on a dedicated computer rather than a virtual machine. That is, the guest operating system expects to control various computer operations and have access to hardware resources during these operations. The hardware resources may include processor-resident resources (e. g., control registers) and resources that reside in memory (e. g., descriptor tables). However, in a virtual-machine environment, the VMM should be able to have ultimate control over these resources to provide proper operation of virtual machines and protection from and between virtual machines. To achieve this, the VMM typically intercepts and arbitrates all accesses made by the guest operating system to the hardware resources. Current implementations of VMMs may be based on software techniques for controlling access to hardware resources by the guest operating system.However, these software techniques may lack the ability to prevent guest software from accessing some fields in the processor's control registers and memory. For instance, the guest operating system may not be prevented from accessing a requestor privilege level (RPL) field in the code segment register ofIA-32 microprocessors. In addition, existing software techniques typically suffer from performance problems. Thus, an alternative mechanism is needed for supporting the operation of the VMM. Brief Description of the DrawingsThe present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:Figure 1 illustrates one embodiment of a virtual-machine environment;Figure 2 illustrates operation of a virtual-machine monitor based on guest deprivileging;Figure 3 is a block diagram of a system for providing processor support to a virtual-machine monitor, according to one embodiment of the present invention;Figure 4 is a flow diagram of a method for providing processor support to a virtual-machine monitor, according to one embodiment of the present invention;Figure 5 is a flow diagram of a method for performing a transition out of V32 mode, according to one embodiment of the present invention;Figure 6 is a flow diagram of a method for generating virtualization traps, according to one embodiment of the present invention;Figure 7 is a flow diagram of a method for maintaining a redirection map, according to one embodiment of the present invention ;Figure 8 is a flow diagram of a method for controlling masking of interrupts, according to one embodiment of the present invention; andFigure 9 is a block diagram of one embodiment of a processing system. Description of EmbodimentsA method and apparatus for providing processor support to a virtual-machine monitor are described. 
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention can be practiced without these specific details. Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as"processing"or"computing" or"calculating"or"determining"or"displaying"or the like, may refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer-system memories or registers or other such information storage, transmission or display devices. The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Instructions are executable using one or more processing devices (e. g., processors, central processing units, etc.). The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose machines may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. 
It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. In the following detailed description of the embodiments, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views.These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. Moreover, it is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described in one embodiment may be included within other embodiments. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled. The method and apparatus of the present invention provide processor support for a virtual-machine monitor (VMM). Figure 1 illustrates one embodiment of a virtual-machine environment 100, in which the present invention may operate. In this embodiment, bare platform hardware 116 comprises a computing platform, which may be capable, for example, of executing a standard operating system (OS) or a virtual-machine monitor (VMM), such as a VMM 112. A VMM, though typically implemented in software, may export a bare machine interface, such as an emulation, to higher level software.Such higher level software may comprise a standard or real-time OS, although the invention is not limited in scope in this respect and, alternatively, for example, aVMM may be run within, or on top of, another VMM. VMMs and their typical features and functionality are well-known by those skilled in the art and may be implemented, for example, in software, firmware or by a combination of various techniques. As described above, a VMM presents to other software (i. e.,"guest" software) the abstraction of one or more virtual machines (VMs). Figure 1 shows two VMs, 102 and 114. The guest software of each VM includes a guest OS such as a guest OS 104 or 106 and various guest software applications 108-110. Each of the guest OSs 104 and 106 expects to control access to physical resources (e. g., processor registers, memory and memory-mapped I/O devices) within the hardware platform on which the guest OS 104 or 106 is running and to perform other functions. However, in a virtual-machine environment, the VMM 112 should be able to have ultimate control over the physical resources to provide proper operation of VMs 102 and 112 and protection from and between VMs 102 and 114. The VMM 112 achieves this goal by intercepting all accesses of the guestOSs 104 and 106 to the computer's physical resources. Various techniques may be used to enable the VMM 112 to intercept the above accesses. One of such techniques is a guest-deprivileging technique which forces all guest software to run at a hardware privilege level that does not allow that software access to certain hardware resources. As a result, whenever the guest OS 104 or 106 attempts to access any of these hardware resources, it"traps"to the VMM 112, i. 
e., the VMM 112 receives control over an operation initiated by the guest OS if this operation involves accessing such hardware resources. Figure 2 illustrates a prior art embodiment of the operation of a VMM that supports guest deprivileging. As described above, guest deprivileging forces a guest OS to execute in a less privileged mode of execution. For IA-32 microprocessors, the nature of page-based protection is such that all guest software runs at the least privileged level (i. e., ring 3). That is, a guest OS 206 and guest applications 204 run at the same privilege level. As a result, the guest OS 206 may not be able to protect itself from the guest applications 206, thereby possibly compromising the integrity of the guest OS 206. This problem is known as ring compression. Guest deprivileging may also cause an address-space compression problem.As described above, certain attempts of guest software to access hardware resources result in traps that transfer control to the VMM 220. In order to enable this transfer of control, a portion of VMM code and/or data structures may be architecturally required to reside in the same virtual-address space as the guestOS 206. For instance, the IA-32 instruction-set architecture (ISA) may require that an interrupt descriptor table (IDT) 212, a global descriptor table (GDT) 210 and trap handling routines reside at the same virtual space as the guest OS 206. The VMM code and data structures 220 that reside in the virtual space 202 must be protected from accesses by guest software (e. g., by running at ring 0).Accordingly, the guest OS 206 does not control the entire address space 202 as the guest OS 206 expects. This causes an address-space compression problem. Another limitation of VMMs that use guest deprivileging pertains to some cases in which the processors fail to prevent guest software from reading privileged hardware resources. For instance, the IA-32 microprocessors allow the guest OS 206 to execute PUSH CS instructions which store a code segment register into memory. One of this register's fields stores information about the current privilege level. Accordingly, the guest OS 206 can become aware that its privilege level is 3, and not 0 as the guest OS 206 expects, by reading the value of the current privilege level from the memory. As a result, the guest OS 206 may be exposed to the fact that it is running on a virtual machine, and the integrity of the guest OS 206 may be compromised. Similarly, in some cases, the processors do not trap an attempt of the guest software to modify privileged software resources. For instance, the IA-32 processors allow the guest OS 206 to issue POPF instructions which attempt to load EFLAGS, and instead of generating a trap, simply ignore all or part of such attempts of the guest OS 206 because the guest OS 206 executes these instructions with insufficient privilege. As a result, the guest OS 206 believes that a corresponding EFLAGS field has been modified but the VMM 220 is not aware of that and cannot properly emulate this modification. Accordingly, the guest OS 206 may be exposed to the fact that it is running on a virtual machine, and the integrity of the guest OS 206 may be compromised. Yet another limitation of VM monitors that use guest deprivileging is caused by excessive trapping. Because the number of hardware resource elements that need to be protected from accesses by guest software is significant and such accesses may be frequent, traps may occur often. 
For instance, the IA-32 microprocessors support CLI instructions. The CLI instructions are issued to modify an interrupt flag, which is an element of the privileged hardware resources and which thus cannot be accessed by unprivileged software. The guestOS 206 commonly issues these instructions during its operation, thereby causing frequent traps to the VMM 220. Frequent trapping negatively affects system performance and reduces the utility of the VMM 220. The present invention addresses the above problems and various other limitations by providing processor support for a VMM. Figure 3 is block diagram of a system for providing processor support to a virtual-machine monitor, according to one embodiment of the present invention. Referring to Figure 3, all guest software runs at a processor mode referred to herein as a virtual 32-bit mode (V32 mode). V32 mode allows the guest software to run at its intended privilege level. For instance, for the IA-32 ISA, the guest OS 308 runs at the most privileged level (i. e., ring 0) and guest applications 306 run at the least privileged level (i. e., ring 3). V32 mode restricts the operation of the guest software by preventing the guest software from performing operations that may result in its access of certain privileged hardware resources. V32 mode is exited when the guest software attempts to perform such an operation. The VMM 320 runs outside V32 mode. When a transition out of V32 mode occurs, the VMM 320 receives control over the operation initiated by the guest OS 308 or guest application 306. The VMM 320 then performs this operation, and transfers control back to the guest software by entering V32 mode, thereby emulating the functionality desired by the guest software. In one embodiment, V32 mode is implemented by maintaining a flag in one of the processor's control registers (e. g., CRO) to indicate whether the processor is inV32 mode or not. In another embodiment, this flag (referred to herein asEFLAGS. V32) is maintained in one of the reserved bits in the upper half ofEFLAGS. The EFLAGS. V32 flag is modified either by a transition out of V32 mode or a transition into V32 mode. In one embodiment, the ability of the processor to support V32 mode are reported using one of the reserved feature bits that are returned in EDX when theCPUID instruction is executed with the value 1 in EAX. It should be noted that a variety of other mechanisms can be used to implement V32 mode and to report the ability of the processor to support V32 mode without loss of generality. In one embodiment, certain exceptions and interrupts cause a transition out ofV32 mode. These exceptions and interrupts include"virtualization traps."A virtualization trap is generated when guest software that runs in V32 mode attempts to perform an operation that may result in its access of certain privileged hardware resources. In one embodiment, when a transition out of V32 mode occurs, the guest address space 304 is automatically changed to the VMM address space 302. In addition, the processor state that was used by guest software is saved and stored in temporary registers, and the processor state required by theVMM 320 is loaded. In one embodiment, when a transition into V32 mode occurs, the processor state that was saved on the transition out of V32 mode (i. e., to the VMM 320) is automatically restored, the VMM address space 302 is changed to the guest address space 304, and control is returned to the guest OS 308. 
In one embodiment, when guest software runs in V32 mode, software interrupts (e. g., interrupts caused by execution of BOUND, INT or INTO instructions) are handled by the guest OS 308 using the guest IDT (i. e., the IDT residing in the guest address space 304). All other interrupts and exceptions including virtualization traps cause a transition out of V32 mode which results in a change of the guest address space 304 to the VMM address space 302. The IDT 316 is then used to point to code that handles a corresponding exception or interrupt. In one embodiment, a new interrupt flag (i. e., a virtual-machine interrupt flag) is maintained for accesses by guest software. Whenever guest software attempts to access the interrupt flag (IF), it will instead access the virtual machine interrupt flag (VMIF). In one embodiment, an attempt of guest software to access VMIF (e. g., using the CLI instruction) does not cause a transition out of V32 mode, except when the guest OS 308 has just set VMIF to 1 (e. g., through the STI instruction) and the VMM 320 wishes to deliver a pending interrupt to the guest OS 308. Such pending interrupts referred to herein as"virtual pending interrupts"generate virtualization traps which allow the VMM 320 to deliver a pending interrupt to the guest software when the guest OS 308 signals that it is ready to process such an interrupt. In one embodiment, one of the reserved bits in the upper half of the EFLAGS register is used to maintain a flag indicating whether guest software has a pending virtual interrupt. The implementation of V32 mode allows resolving all of the problems caused guest deprivileging as described above. In particular, because guest software runs in V32 mode at its intended privilege level, the problem of ring compression is eliminated. In addition, address-space compression is no longer a problem because a virtualization trap automatically causes a switch to the VMM address space 302, and therefore neither the tables controlling such transfers nor the code handling a corresponding virtualization trap is required to reside in the guest address space 304. Furthermore, because V32 mode enables the guest software to run at its intended privilege level, the hardware resources that need to be protected no longer include those elements of hardware resources that control the privilege level. For instance, the PUSH CS instruction described above can no longer reveal to the guest OS 308 that it runs on a virtual machine because the field of the code segment register that stores information about a current privilege level now stores the privilege level intended by the guest OS 308. Similarly, POPF instructions which attempt to load EFLAGS are no longer ignored when executed by the guest OS 308 because the guest OS 206 executes these instructions with sufficient privilege. Accordingly, the number of elements of hardware resources that need to be protected is reduced. If any of them allow non-trapping read or write accesses by guest software, they are specifically architected to cause traps when executed inV32 mode. Thus, the problems caused by non-trapping read and write accesses are eliminated. In addition, because the implementation of V32 mode reduces the number of elements of hardware resources that need to be protected, the number of traps that occur when guest software attempts to access these elements is also reduced. Frequency of traps is further reduced by providing mechanisms for eliminating traps caused by the most frequently used instructions. 
For instance,STI instructions no longer cause traps except when guest software has a pending virtual interrupt. Figure 4 is a flow diagram of a method 400 for providing processor support to a virtual machine monitor, according to one embodiment of the present invention.At processing block 404, guest software is executed in a processor mode (i. e., V32 mode) that allows guest software to operate at a privilege level intended by the guest software. That is, a guest OS may operate at a supervisor privilege level, and guest applications may operate at a user privilege level. At processing block 406, an attempt of the guest software to perform an operation restricted by V32 mode is identified. In response to this attempt, V32 mode is exited to transfer control over the operation initiated by the guest software to the VMM which runs outside V32 mode (processor block 408). In one embodiment, the VMM configures what operations should cause a transition out of V32 mode as will be described in greater detail below in conjunction withFigure 7. In one embodiment, such operations generate virtualization traps that cause a transition out of V32 mode. Alternatively, any other mechanism known in the art can be used to cause a transition out of V32 mode. One embodiment of performing a transition out of V32 mode is described in greater detail below in conjunction with Figure 5. Further, the VMM responds to the operation intended by the guest software (processing block 410). Afterwards, V32 mode is re-entered to transfer control over this operation back to the guest software (processing block 412), and method 400 returns to processing block 404. In one embodiment, when a transition intoV32 mode occurs, the processor state expected by the guest software is automatically restored and the VMM address space is changed to the guest address space. Figure 5 is a flow diagram of a method 500 for performing a transition out ofV32 mode, according to one embodiment of the present invention. Method 500 begins with saving processor state used by guest software (processing block 504).In one embodiment, the saved processor stated is stored in the processor's temporary registers. At processing block 506, the processor state required by theVMM is loaded into processor registers. In one embodiment, loading the processor state affects a change of the guest address space to the VMM address space (e. g., the processor state is loaded by loading the control register CR3). In an alternative embodiment, loading the processor state does not cause a change in the address space. In such an embodiment, at processing block 508, an address space switch is performed to transfer from the guest address space to the VMM address space. Accordingly, when an interrupt or exception causing the transition occurs, the IDT residing in the VMM address space is automatically used to point to the VMM-resident code for handling this interrupt or exception. Figure 6 is a flow diagram of a method 600 for generating virtualization traps, according to one embodiment of the present invention. Method 600 begins with identifying an attempt of guest software to perform an operation that may be restricted by V32 mode (processing block 604). At decision box 606, a determination is made as to whether the attempt of the guest software can potentially succeed. If the determination is positive, a virtualization trap is generated (processing block 608). 
Alternatively, no virtualization trap is generated, and the guest software proceeds with the operation (processing block 610). For instance, according to the IA-32 ISA, the RDMSR instruction can be executed only by software running with supervisor privilege. Consequently, if the guest software OS which runs with supervisor privilege executes this instruction, its attempt may be successful. If a guest application which runs with user privilege executes this instruction, its attempt will not be successful, and a general-protection fault will occur. Accordingly, an attempt of the guest OS to execute the RDMSR instruction will cause a virtualization trap but an attempt of a guest application will be handled by the guest OS. In one embodiment, virtualization traps will be caused by potentially successful attempts of the guest OS to access the processor's control registers (e. g., CRO-CR4). For instance, for IA-32 processors, virtualization traps will be generated in response to an attempt of the guest software to execute MOV CR (except the attempts to store CR2, which do not need to cause virtualization traps), CLTS, LMSW or SMSW instructions, or a task switch. Virtualization traps may be also caused by a potentially successful attempt of the guest software to set an interrupt flag IF (e. g., via STI, POPF or IRET instructions) if guest software has a pending virtual interrupt. For IA-32 ISA, successful attempts to execute such instructions as, for example, HLT, IN, INS/INSB/INSW/INSD, INVD, OUT, OUTS/OUTSB/OUTSW/OUTSD, RDMSR, and WRMSR, will cause virtualization traps. These virtualization traps will prevent guest software from halting the processor and from directly accessing I/O ports, caches or modelspecific registers. In addition, virtualization traps may be caused by attempts to execute CPUID instructions to allow the VMM to present the abstraction of processor features chosen by the VMM, by attempts to execute INVLPG instructions to enable the VMM to properly virtualize address translations, and by attempts to execute IRET instructions (if IRET is used to transition into V32 mode) used by guest software to implement a VMM to allow recursively nested VMMs. Figure 7 is a flow diagram of a method 700 for maintaining a redirection map, according to one embodiment of the present invention. According to this embodiment, the VMM maintains a redirection map to configure which interrupts and exceptions should result in a virtualization trap (processing block 704). At processing block 706, an occurrence of an interrupt or exception is identified. The redirection map is then consulted to find a bit associated with this interrupt or exception in the redirection bitmap (processing block 708). At decision box 710, a determination is made as to whether this interrupt is allowed to be handled by the guest OS. If the determination is positive, the interrupt or exception is delivered to V32 mode and is handled by the guest OS (processing block 714). Alternatively, a virtualization trap is generated, causing a transition out of V32 mode (processing block 712). Figure 8 is a flow diagram of a method 800 for controlling masking of interrupts, according to one embodiment of the present invention. Various embodiments may be used to control the masking of interrupts. In one embodiment, all interrupts are unmasked when guest software is running. In this embodiment, the guest software is permitted to manipulate an interrupt flag (e. g., for IA-32 microprocessors, this flag is identified as EFLAGS. 
IF), but this manipulation will be ignored with respect to the masking of interrupts. In another embodiment, the masking of interrupts is dependent on the interrupt flag. In this embodiment, the guest software is not permitted to manipulate the interrupt flag. In particular, the guest software may be prevented from accessing the interrupt flag by providing a shadow interrupt flag (e. g., EFLAGS. VMIF) for modifications by the guest software, by generating a virtualization trap in response to such an attempt of the guest software, or by using any other technique known in the art. Method 800 begins with identifying an attempt of guest software to modify an interrupt flag that may potentially control masking of interrupts (processing block 804). At decision box 806, a determination is made as to whether the interrupt flag controls the masking of interrupts. If the determination is negative, i. e., all interrupts are unmasked, the guest software is allowed to modify the interrupt flag (processing block 808). As described above, this modification will not have an effect on the masking of the interrupts. Otherwise, if the masking of interrupts is dependent on the interrupt flag, a determination is then made as to whether a shadow interrupt flag exists, i. e., whether the attempt of the guest software to affect the masking of interrupts is affecting the shadow flag (decision box 810). If the determination is negative, i. e., the guest software attempts to modify the actual interrupt flag, a virtualization trap occurs (processing block 812), causing a transition out of V32 mode (processing block 816). Alternatively, if the actual interrupt flag is not accessible to the guest software, the guest software is allowed to modify the shadow interrupt flag (processing block 814). Figure 9 is a block diagram of one embodiment of a processing system.Processing system 900 includes processor 920 and memory 930. Processor 920 can be any type of processor capable of executing software, such as a microprocessor, digital signal processor, microcontroller, or the like. Processing system 900 can be a personal computer (PC), mainframe, handheld device, portable computer, set-top box, or any other system that includes software. Memory 930 can be a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, or any other type of machine medium readable by processor 920. Memory 930 can store instructions for performing the execution of the various method embodiments of the present invention such as methods 400,500,600,700 and 800 (Figures 4-8). It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description.The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. |
Methods, apparatuses and storage medium associated with providing firmware to a device are disclosed herein. In various embodiments, an apparatus may include a device, and a processor to host a computing environment that includes the device and a device driver of the device. Further, the apparatus may include a firmware agent, disposed outside the computing environment, to provide, on behalf of the device driver, firmware to the device on power-on of the device. Other embodiments may be described and claimed. |
Claims What is claimed is: 1. An apparatus, comprising a device; a processor, coupled with the device, to host a computing environment that includes the device and a device driver of the device; and a firmware agent, disposed outside the computing environment and coupled with the device, to provide, on behalf of the device driver, firmware to the device on power-on of the device. 2. The apparatus of claim 1, wherein the device comprises a selected one of an encoder, a decoder, a graphics unit, a transceiver, or a global positioning system. 3. The apparatus of claim 1, wherein the computing environment further includes a power management agent, coupled to the device, to power on or off the device, wherein the firmware agent is configured to provide the firmware to the device in response to the power management agent powering on the device. 4. The apparatus of claim 3, wherein the power management agent is configured to power off the device whenever the apparatus enters a power saving mode that consumes less power than a normal operating mode. 5. The apparatus of claim 4, wherein the power management agent is further configured to power off the device whenever the device has not been used for a period of time while the apparatus is in the normal operating mode. 6. The apparatus of claim 3, wherein the computing environment further includes an operating system that comprises the power management agent. 7. The apparatus of claim 3, wherein the firmware agent is further configured to couple the power management agent to the device, and relay power on or off commands or signals of the power management agent to the device. 8. The apparatus of claim 1, wherein the firmware agent is further configured to obtain the firmware from the device driver during a start-up of the apparatus. 9. The apparatus of claim 8, further comprising secure storage, disposed outside the computing environment and coupled with the firmware agent, wherein the firmware agent is further configured to store the firmware in the secure storage, on obtaining the firmware during a start-up of the apparatus, and retrieve the firmware from the secure storage to provide to the device on power-on of the device. 10. The apparatus of claim 9, wherein the firmware agent is further configured to authenticate the firmware prior to storing the firmware into the secure storage. 11. The apparatus of any one of claims 1 - 9, further comprising a security engine, disposed outside the computing environment, wherein the security engine includes the firmware agent. 12. The apparatus of claim 1, wherein the apparatus is a selected one of a smartphone or a computing tablet. 13. A method comprising: detecting, by a firmware agent of a computing device, for power-on events of a device of the computing device; and providing firmware to the device, by the firmware agent, on behalf of a device driver of the device, in response to a detection of a power-on event of the device, wherein the device and the device driver are part of a computing environment hosted by a processor of the computing device, and the firmware agent is disposed outside of the computing environment and coupled with the device. 14. The method of claim 13, wherein the computing environment further includes a power management agent, coupled to the device, to power on or off the device, wherein providing comprises providing the firmware to the device, the firmware agent, in response to the power management agent powering on the device. 15. 
The method of claim 14, further comprising coupling, the firmware agent, the power management agent to the device, and relaying, by the firmware agent, power on or off commands or signals of the power management agent to the device. 16. The method of claim 13, further comprising obtaining the firmware, by the firmware agent, from the device driver during a start-up of the computing device. 17. The method of claim 16, wherein the computing device further comprises secure storage, disposed outside the host computing environment and coupled with the firmware agent, wherein the method further comprises storing the firmware, by the firmware agent, in the secure storage, on obtaining the firmware during a start-up of the computing device, and retrieving the firmware, by the firmware agent, from the secure storage to provide to the device on power-on of the device. 18. The method of claim 17, further comprising authenticating the firmware, by the firmware agent, prior to storing the firmware into the secure storage. 19. At least one computer-readable storage medium comprising a plurality of instructions, wherein the instructions, in response to execution of the instructions by a security engine of a computing apparatus, cause the computing apparatus to perform any one of the methods of claims 13 - 18. 20. An apparatus, comprising a device; a processor, coupled with the device, to host a computing environment that includes the device and a device driver of the device; and means, disposed outside the computing environment and coupled with the device, for providing, on behalf of the device driver, firmware to the device on power-on of the device. 21. The apparatus of claim 20, wherein the means is further for obtaining the firmware from the device driver during a start-up of the apparatus. 22. The apparatus of claim 21, further comprising secure storage, disposed outside the computing environment and coupled with the firmware agent, wherein the means is further for storing the firmware in the secure storage, on obtaining the firmware during a start-up of the apparatus, and retrieve the firmware from the secure storage to provide to the device on power-on of the device. 23. The apparatus of claim 22, wherein the means is further for authenticating the firmware prior to storing the firmware into the secure storage. 24. The apparatus of any one of claims 20 - 22, further comprising a security engine, disposed outside the computing environment, wherein the security engine includes the means. |
Firmware Agent Cross Reference to Related Application The present application claims priority to U.S. Patent Application No. 13/618,508, filed September 14, 2012, entitled "FIRMWARE AGENT." Technical Field This application relates to the technical field of data processing, more specifically to methods, apparatuses and storage medium associated with providing firmware to a device. Technical Field The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section. Traditionally, firmware of a device is often provided to the device by a device driver of the device. However, before the firmware can be provided to the device, communication must be established between the device and the device driver. The protocol for establishing communication is often non-trivial. In a power consumption sensitive apparatus, e.g., a mobile computing device, a device may be frequently shut down. For example, a decoder in a computing tablet may be shut down hundreds of times in between decoding groups of video frames of a video, in the course of the video being played. Thus, the prior art approach to have the device driver provides the device firmware to the device is relatively inefficient. BRIEF DESCRIPTION OF THE DRAWINGS Embodiments of the present invention will be described by way of exemplary embodiments, but not limitations, illustrated in the accompanying drawings in which like references denote similar elements, and in which: Figure 1 illustrates an overview of an arrangement incorporated with a firmware agent to provide firmware to a device within a computing environment from outside the computing environment; Figure 2 illustrates a process for obtaining device firmware by the firmware agent; Figure 3 illustrates a process for providing firmware to a device by the firmware agent; Figure 4 illustrates an example computing device incorporated with a firmware agent; and Figure 5 illustrates an example non-transitory computer-readable storage medium having instructions configured to practice all or selected aspects of the operational flows of the firmware agent illustrated with references to Figures 2-3; all arranged in accordance with embodiments of the present disclosure. DETAILED DESCRIPTION Methods, apparatuses and storage medium associated with providing firmware to a device are disclosed herein. In various embodiments, an apparatus, e.g., a power consumption sensitive apparatus such as a computing tablet, may include a device, e.g., a decoder, and a processor to host a computing environment that includes the device and a device driver of the device. Further, the apparatus may include a firmware agent, disposed outside the computing environment, to provide, on behalf of the device driver, firmware to the device on power-on of the device. Accordingly, firmware may be provided to the device in a more efficient manner, especially where the device may be power off frequently, in the case of a power consumption sensitive computing apparatus. Other benefits and advantages may also be described and/or apparent to those skilled in art from the description to follow. Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. 
However, it will be apparent to those skilled in the art that alternate embodiments may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that alternate embodiments may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments. Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Further, descriptions of operations as separate operations should not be construed as requiring that the operations be necessarily performed independently and/or by separate entities. Descriptions of entities and/or modules as separate modules should likewise not be construed as requiring that the modules be separate and/or perform separate operations. In various embodiments, illustrated and/or described operations, entities, data, and/or modules may be merged, broken into further sub-parts, and/or omitted. The phrase "in one embodiment" or "in an embodiment" is used repeatedly. The phrase generally does not refer to the same embodiment; however, it may. The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrase "A/B" means "A or B". The phrase "A and/or B" means "(A), (B), or (A and B)". The phrase "at least one of A, B and C" means "(A), (B), (C), (A and B), (A and C), (B and C) or (A, B and C)". Figure 1 illustrates an overview of an arrangement incorporated with a firmware agent to provide firmware to a device within a computing environment from outside the computing environment, in accordance with various embodiments. As illustrated, arrangement 100, such as a computing device, may include computing environment 102 hosted by one or more processors 1 12, and security engine 108, separately disposed outside computing environment 102 and coupled with computing environment 102 as shown. Computing environment 102, in addition to processor(s) 1 12, may further include storage 1 14, operating system (OS) 1 16, and one or more devices 118. Further, OS 116 may include one or more device drivers 122 of the one or more devices 1 18 and a power management agent 124. And storage 114 may include firmware 126 of the one or more devices 1 18. Security engine 108 may include firmware agent 104 and storage 106, having also firmware 126 of the one or more devices 1 18, obtained from storage 114. In embodiments, computing environment 102 and security engine 108 may be coupled with each other via one or more buses, e.g. an I 2C bus or a peripheral component interconnect (PCI) bus, and so forth. Processor(s) 112 may be any one of a number of processors or processor cores known in the art, e.g., Intel® Architecture processors available from Intel Corporation of Santa Clara, CA. Storage 1 14 may be any one of a number magnetic, optical, or solid state storage known in the art. OS 1 16, likewise, may be any one of a number OS known in the art, e.g., one of the Window® family's OS available from Microsoft Corporation of Redmond, WA. 
Examples of devices 1 18 may include, but are not limited to, an encoder, a decoder, a graphics unit, a transceiver, a global position system, and other devices of the like. Security engine 108, as described earlier, may include firmware agent 104 and secure storage 106 coupled with each other. Firmware agent 104 may be coupled with storage 1 14, OS 116 and devices 118, as shown. Security engine 108 may be any one of a number of trusted computing environments or hardened embedded computing environment, separate and independent of computing environment 102. As will be described in more detail below, firmware agent 104 may be configured to provide devices 118 with their firmware 126, on detection of power-on events of devices 1 18. In some embodiments, firmware agent 104 may intercept the power-on/off signals from power management agent 124, and relay them to devices 1 18. In other embodiments (not shown), firmware agent 104 may be coupled to the signal path between power management agent 124 and devices 1 18 to detect the power-on/off events. As described earlier, provision of firmware to devices 118 by firmware agent 104 may be more efficient than the traditional approach (i.e., provision by device drivers 122), especially for frequently shut down devices 1 18 in a power consumption sensitive computing environment. Similar to storage 1 14, storage 106 may be any one of a number of magnetic, optical or solid state storage devices known in the art. In embodiments, computing arrangement 100 may be a power consumption sensitive device, such as, but not limited to, a smartphone, a personal digital assistant (PDA), a computing tablet, an ultrabook, an e-reader, a game console, a set-top box, and so forth. In particular, power management agent 124 may be configured to power off one or more devices 1 18, e.g., whenever they are not in use, and power on the one or more devices 1 18, e.g., only when they are needed. Before further describing firmware agent 104, it should be noted that while for ease of understanding, firmware agent 104 is being described as part of security engine 108, in alternate less security sensitive environment, firmware agent 104 may be disposed e.g., in a conventional unhardened embedded controller. Further, arrangement 100 is intended to represent a broad range of computing devices known in the art. Examples of arrangement 100 will be further described later with references to Figure 4. Figure 2 illustrates a process for obtaining device firmware by the firmware agent, in accordance with embodiments of the present disclosure. As illustrated, for the embodiments, process 200 may start at block 202. At block 202, computing arrangement 100 may be powered on. From block 202, process 200 may proceed to block 204. At block 204, firmware agent 104 may determine whether computing arrangement 100 has new devices 118 or devices 1 18 with updated device firmware. Firmware agent 104 may determine the presence of new devices 118 via a number of known techniques, e.g., by enumerating devices attached to the various buses (not shown) in computing environment 102. Firmware agent 104 may also determine whether certain device firmware has been updated by checking with OS 116 and/or device drivers 122, or checking data structures maintained by OS 1 16 and/or device drivers 122. On determining either the presence of a new device or at least one updated device firmware, process 200 may proceed to block 206. 
At block 206, firmware agent 104 may obtain the new or updated firmware from device driver(s) 122 or storage 114. For the latter case, firmware agent 104 may be provided with the location(s) of the new or updated firmware by device driver(s) 122. From block 206, process 200 may optionally proceed to block 208, or proceed to block 210 directly, without performing the operations of block 208. At block 208, for more security sensitive embodiments, firmware agent 104 may authenticate the firmware provided. Authentication may be performed using any one of a number of authentication techniques known in the art. At block 210, on successful authentication or without authentication, depending on implementation, firmware agent 104 may store the provided firmware in storage 106 (which in some embodiments, as described earlier, may be secured storage). From block 210, process 200 may return to block 204 to determine if there are additional new or updated device firmware to be obtained, and repeat the operations of blocks 206-210 if necessary. On determination that there is no (additional) new or updated device firmware, process 200 may proceed to block 212, where the process may end. In alternate embodiments, instead of having firmware agent 104 determining whether firmware of one or more devices have been updated, and proceed to obtain the updated version, with or without authentication, the update of firmware 126 stored in storage 106 may be triggered by the corresponding device driver 122, on receipt of updates to firmware 126 stored in storage 1 14. Figure 3 illustrates a process for providing firmware to a device by the firmware agent, in accordance with various embodiments of the present disclosure. Process 300 may start at block 302. At block 302, firmware agent 104 may monitor for a power on event of devices 118. When no device power on event is detected, process 300 may stay at block 302, and loop until such an event is detected. On detection of a power on event of a device 118, process 300 may proceed from block 302 to block 304. At block 304, firmware agent 104 may retrieve firmware 126 of the device from storage 106, and provide the firmware 126 to device 1 18. By doing so, firmware 126 may be provided to device 118 more efficiently. In embodiments, at block 304, prior to retrieving and provide firmware 126 to device 1 18, firmware agent 104 may further relay power-on signals from power management agent 124 to devices 118. In embodiments, firmware agent 104 may also relay power-off signals from power management agent 124 to devices 118 (not shown). Figure 4 illustrates an example computing device incorporated with a firmware agent, in accordance with various embodiments of the present disclosure. As shown, computing device 400 may include a number of processors or processor cores 402, coprocessors) 414, and system memory 404. For the purpose of this application, including the claims, the terms "processor" and "processor cores" may be considered synonymous, unless the context clearly requires otherwise. Additionally, computing device 400 may include mass storage devices 406 (such as diskette, hard drive, compact disc read only memory (CD-ROM) and so forth), input/output devices 408 (such as display, keyboard, cursor control and so forth), communication interfaces 410 (such as network interface cards, modems and so forth) and security engine 416 (with firmware agent and storage as earlier described). 
The elements may be coupled to each other via system bus 412, which may represent one or more buses. In the case of multiple buses, they may be bridged by one or more bus bridges (not shown). Except for element 416, the constitution of these elements 402-414 are known, and accordingly will not be further described. Security engine 416 may be a trusted execution environment or hardened embedded controller with its own processor or processor(s). Firmware agent in security engine 416 may be implemented in assembler instructions supported by processor(s) of security engine 416 or high-level languages, such as, for example, C, that can be compiled into such instructions. The programming instructions may be placed into security engine 416 in the factory, or in the field, through, for example, a distribution medium (not shown), such as a compact disc (CD), or through communication interface 410 (from a distribution server (not shown)). That is, one or more distribution media having an implementation of the firmware agent may be employed to facilitate its distribution. Figure 5 illustrates an example non-transitory computer-readable storage medium having instructions configured to practice all or selected aspects of the processes of Figure 2-3; in accordance with various embodiments of the present disclosure. As illustrated, non-transitory computer-readable storage medium 502 may include a number of programming instructions 504. Programming instructions 504 may be configured to enable a device, e.g., computing device 400, in response to execution of the programming instructions, to perform various operations of the various flows of Figures 2-3. In alternate embodiments, programming instructions 504 may be disposed on multiple non- transitory computer-readable storage media 502 instead. Thus, embodiments disclosed include an apparatus having a device, and a processor, coupled with the device, to host a computing environment that includes the device and a device driver of the device. The apparatus may further includes a firmware agent, disposed outside the computing environment and coupled with the device, to provide, on behalf of the device driver, firmware to the device on power-on of the device. In embodiments, the device may be a selected one of an encoder, a decoder, a graphics unit, a transceiver, or a global positioning system. The computing environment may further include a power management agent, coupled to the device, to power on or off the device. The firmware agent may be configured to provide the firmware to the device in response to the power management agent powering on the device. The power management agent may power off the device whenever the apparatus enters a power saving mode that consumes less power than a normal operating mode. The power management agent may further power off the device whenever the device has not been used for a period of time while the apparatus is in the normal operating mode. In embodiments, the computing environment may further include an operating system that comprises the power management agent. The firmware agent may further configured to couple the power management agent to the device, and relay power on or off commands or signals of the power management agent to the device. In embodiments, the firmware agent may be further configured to obtain the firmware from the device driver during a start-up of the apparatus. The apparatus may further include secure storage, disposed outside the computing environment and coupled with the firmware agent. 
And the firmware agent may be further configured to store the firmware in the secure storage, on obtaining the firmware during a start-up of the apparatus, and retrieve the firmware from the secure storage to provide to the device on power-on of the device. The firmware agent may be further configured to authenticate the firmware prior to storing the firmware into the secure storage. The apparatus may further include a security engine, disposed outside the computing environment, wherein the security engine includes the firmware agent. The apparatus may be a selected one of a smartphone or a computing tablet. Embodiments disclosed also include at least one non-transitory computer-readable storage medium comprising a plurality of instructions, wherein the instructions, in response to execution of the instructions by a security engine of a computing apparatus, implement a firmware agent for the computing apparatus to provide firmware to a device of the computing apparatus, on behalf of a device driver of the device, on power-on of the device, wherein the device and the device driver are part of a computing environment hosted by a processor of the computing apparatus, and the security engine, including the firmware agent, is disposed outside of the computing environment and coupled to the device. In embodiments, the computing environment may further include a power management agent, coupled to the device, to power on or off the device. The firmware agent may be further configured to provide the firmware to the device in response to the power management agent powering on the device. In embodiments, the firmware agent may be further configured to couple the power management agent to the device, and relay power on or off commands or signals of the power management agent to the device. In embodiments, the firmware agent may be further configured to obtain the firmware from the device driver during a start-up of the computing apparatus. The computing apparatus may further include secure storage, disposed outside the computing environment and coupled with the firmware agent. The firmware agent may be further configured to store the firmware in the secure storage, on obtaining the firmware during a start-up of the computing apparatus, and retrieve the firmware from the secure storage to provide to the device on power-on of the device. The firmware agent may be further configured to authenticate the firmware prior to storing the firmware into the secure storage. Embodiments disclosed also include a method for providing firmware. The method may include detecting, by a firmware agent of a computing device, for power-on events of a device of the computing device; and providing firmware to the device, by the firmware agent, on behalf of a device driver of the device, in response to a detection of a power-on event of the device. The device and the device driver may be part of a computing environment hosted by a processor of the computing device, and the firmware agent may be disposed outside of the computing environment and coupled with the device. In embodiments, the computing environment may further include a power management agent, coupled to the device, to power on or off the device. Providing may include providing the firmware to the device, the firmware agent, in response to the power management agent powering on the device. 
The method may further include coupling, the firmware agent, the power management agent to the device, and relaying, by the firmware agent, power on or off commands or signals of the power management agent to the device. The method may further include obtaining the firmware, by the firmware agent, from the device driver during a start-up of the computing device. In embodiments, the computing device may further include secure storage, disposed outside the host computing environment and coupled with the firmware agent. The method may further include storing the firmware, by the firmware agent, in the secure storage, on obtaining the firmware during a start-up of the computing device, and retrieving the firmware, by the firmware agent, from the secure storage to provide to the device on power-on of the device. The method may further include authenticating the firmware, by the firmware agent, prior to storing the firmware into the secure storage. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described, without departing from the scope of the embodiments of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments of the present disclosure be limited only by the claims. |
The application discloses a predictor for hard-to-predict branches. A processor, including: an execution unit including branching circuitry; a branch predictor, including a hard-to-predict (HTP) branch filter to identify an HTP branch; and a special branch predictor to receive identification of an HTP branch from the HTP branch filter, the special branch predictor including a convolutional neuralnetwork (CNN) branch predictor to predict a branching action for the HTP branch. |
1.A processor comprising:An execution unit, the execution unit including a branch circuit system;a branch predictor, the branch predictor including an HTP branch filter for identifying a hard-to-predict HTP branch;A special branch predictor for receiving an identification of an HTP branch from the HTP branch filter, the special branch predictor comprising a convolutional neural network CNN branch predictor for predicting a branch action for the HTP branch.2.The processor of claim 1 wherein said special branch predictor comprises a coprocessor or a field programmable gate array.3.The processor of claim 1 wherein said special branch predictor is an on-die circuit block.4.The processor of claim 1 wherein said special branch predictor is for employing a simplified one-hot binary circuit system.5.The processor of claim 1 wherein said special branch predictor comprises a dual layer CNN.6.The processor of claim 5 wherein said special branch predictor comprises a binary one-dimensional convolution layer and a fully connected binary layer.7.The processor according to claim 6, wherein said one-dimensional convolution layer is configured to: receive an incoming (program counter PC, direction) pair, mask the incoming pair, and mask The bits are used as an index into the filter response table and return an L-bit vector as a response.8.The processor of claim 7 wherein said one-dimensional convolutional layer is further for: pushing said response into an N x L bit FIFO buffer.9.The processor of claim 8 wherein said fully connected binary layer is operative to XOR the contents of said FIFO buffer with a binary linear layer weight and to generate the number of ones The count is the total number of integers.10.The processor of claim 9, wherein the fully connected binary layer is further for comparing the total number of integers to generate a selection or not to select a branch prediction.11.The processor according to any one of claims 1 to 8, wherein the special branch predictor is configured to: receive metadata from the trained CNN.12.The processor of any of claims 1 to 8, wherein the special branch predictor further comprises a CNN assisted predictor.13.An on-chip system comprising:Input-output circuitry;a memory for accommodating a program, the program including a branch circuit system;a processor, the processor comprising:An execution unit, the execution unit including a branch circuit system;a branch predictor, the branch predictor including an HTP branch filter for identifying a hard-to-predict HTP branch;A special branch predictor for receiving an identification of an HTP branch from the HTP branch filter, the special branch predictor comprising a convolutional neural network CNN branch predictor for predicting a branch action for the HTP branch.14.The system on a chip of claim 13 wherein said special branch predictor comprises a coprocessor or a field programmable gate array.15.The system on a chip of claim 13 wherein said special branch predictor is an on-die circuit block.16.The system-on-a-chip of claim 13 wherein said special branch predictor is for employing a simplified one-hot binary circuit system.17.The system on a chip of claim 13 wherein said special branch predictor comprises a dual layer CNN.18.The system-on-a-chip of claim 17, wherein the special branch predictor comprises a binary 1-dimensional convolution layer and a fully connected binary layer.19.The system-on-chip according to claim 18, wherein said one-dimensional convolution layer is configured to: receive an incoming (program 
counter PC, direction) pair, mask the incoming pair, and mask the The bits of the code are used as an index into the filter response table and return an L-bit vector as a response.20.The system-on-chip of claim 19 wherein said one-dimensional convolution layer is further for: pushing said response into an N x L bit FIFO buffer.21.The system-on-chip according to claim 20, wherein said fully connected binary layer is configured to: XOR the content of said FIFO buffer with a binary linear layer weight, and to generate the generated 1 The quantity count is the total number of integers.22.The system-on-a-chip of claim 21, wherein the fully connected binary layer is further for comparing the total number of integers to a threshold to generate a selection or not to select a branch prediction.23.The system on a chip according to any one of claims 13 to 22, wherein the special branch predictor is configured to: receive metadata from the trained CNN.24.The system on a chip according to any one of claims 13 to 22, wherein the special branch predictor further comprises a CNN assisted predictor.25.A computer implemented method of performing a hard to predict HTP branch prediction, the method comprising:Applying a branch filter to the branch circuitry to identify the HTP branch;The branching action for the HTP branch is predicted according to a convolutional neural network CNN algorithm. |
Predictor for difficult to predict branchesTechnical fieldThe present disclosure relates generally to the field of semiconductor devices, and more particularly, but not exclusively, to a system and method for predicting hard to predict branches.Background techniqueMultiprocessor systems are becoming more and more common. In the modern world, computing resources play an increasingly integrated role in human life. As computers become more ubiquitous, controlling everything from the grid to large industrial machines to personal computers to light bulbs, the need for more powerful processors increases.DRAWINGSThe disclosure will be best understood from the following detailed description of the invention. It should be emphasized that, depending on standard practice in the industry, the various features are not necessarily to scale, and are for illustrative purposes only. When the ratio is shown explicitly or implicitly, it merely provides an illustrative example. In other embodiments, the dimensions of the various features may be arbitrarily expanded or reduced for clarity of discussion.1 is a block diagram of selected components of a branch predictor in accordance with one or more examples of the present specification.2 is a mathematical flow diagram showing a two-layer convolutional neural network (CNN) in accordance with one or more examples of the present specification.3 is a block diagram showing the application of CNN to a branch prediction problem in accordance with one or more examples of the present specification.4 is a block diagram illustration of a training set in accordance with one or more examples of the present specification.5 is a block diagram of a branch predictor model in accordance with one or more examples of the present specification.6 and 7 are block diagrams of CNN branch predictors in accordance with one or more examples of the present specification.8 is a block diagram of a special branch prediction apparatus and method in accordance with one or more examples of the present specification.Figures 9a and 9b are block diagrams showing a generic vector friendly instruction format and its instruction templates in accordance with one or more examples of the present specification.Figures 10a through 10d are block diagrams showing example specific vector friendly instruction formats in accordance with one or more examples of the present specification.11 is a block diagram of a register architecture in accordance with one or more examples of the present specification.Figure 12a is a block diagram showing both an example in-order pipeline and an example register renaming out-of-order issue/execution pipeline in accordance with one or more examples of the present specification.Figure 12b is a block diagram showing both an example of an in-order architecture core to be included in a processor and an example register renaming out-of-order issue/execution architecture core in accordance with one or more examples of the present specification.13a and 13b show block diagrams of a more specific ordered core architecture in accordance with one or more examples of the specification, which will be a plurality of logical blocks in a chip (including other types of the same type and/or different types) Nuclear) One.14 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics, in accordance with one or more examples of the present specification.15 through 18 are block diagrams of computer architectures in 
accordance with one or more examples of the present specification.19 is a block diagram of the conversion of binary instructions in a source instruction set to binary instructions in a target instruction set using a software instruction converter in accordance with one or more examples of the present specification.Detailed waysThe following disclosure provides many different embodiments or examples for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the disclosure. Of course, these are merely examples and are not intended to be limiting. Further, the present disclosure may repeat reference numerals and/or letters in various examples. This repetition is for the sake of simplicity and clarity and does not in itself specify the relationship between the various embodiments and/or configurations discussed. Different embodiments may have different advantages and do not necessarily require any embodiment to have particular advantages.Branch prediction is a key contributor to the performance of contemporary microprocessors. In the case of branch mispredictions, even very fast microprocessors with high capacity pipelines and large caches are likely to approach a pause. Branch mispredictions can disrupt program flow, causing the pipeline to be reset, which may result in having to repopulate the cache from slow main memory and may have other performance impacts.For many types of conditional branches, existing hardware branch predictors achieve high accuracy. This accuracy can be about 98% to 99% or better. However, the pattern recognition mechanism of the traditional branch predictor does not perform well for some subset of the hard-to-predict (HTP) branches. These HTP branches may be caused by, for example, a highly variable program structure that causes historical data for branch prediction. These HTP branches are difficult for traditional branch predictors such as partial pattern matching branch predictors because those branch predictors may be based on a positive sequence that is also known as a perceptron based on identifying the location correlation of the capture.Since even 1% to 2% of branch mispredictions can cause severe performance loss in the microprocessor, it is advantageous to provide supplemental branch prediction circuitry such as special branch predictors, which provide complementary branch prediction circuitry An algorithm that focuses on certain types of HTP branch prediction. The special branch predictor can be directly placed in the processor hardware, in the microcode, can be implemented in supplemental software, or can be encoded in, for example, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or a coprocessor. Inside the hardware accelerator.In some embodiments, the HTP branch filter can be used to filter the branch sequence to determine if the branch should be predicted by a mainline branch predictor that might use conventional methods such as local pattern matching (PPM) or should be sent to A special branch predictor that may use a more complex branch prediction algorithm. Examples of this specification include special branch predictors that use Convolutional Neural Networks (CNN) to perform better branch prediction on HTP branches.In general, the branch predictor works by performing pattern recognition on the branch history data and adjusting the likelihood of selecting a branch based on the observed program state. 
Embodiments of such a branch predictor can include both a data model for learning to train runtime statistics and an inference to generate new predictions based on the model. A successful branch predictor can balance the accuracy of both functions with respect to data, storage, and computational constraints that operate at the front end of the pipeline.A height-adjusted and optimized main line branch predictor such as PPM is capable of predicting branches of approximately 98% to 99% or better. However, the remaining 1% to 2% branch mispredictions can cause significant performance impacts as the entire execution pipeline may need to be dumped and the losses increase in proportion to machine width and mispredicted costs.Therefore, the special branch predictor described herein can provide ancillary functions that can use CNN to improve the accuracy of HTP branches. CNN can be used to capture patterns from noisy, highly variable data. The CNN hierarchically combines the position in the sensitive pattern matching at the lower layer with the position-specific matching at the upper level to improve the tolerance for data changes such as mode shifting. A conditional statement inside a variable iteration count loop or other program structure such as a switch statement may cause this change in the historical data and thus produce an HTP branch. Therefore, some structures in this structure can be modeled more closely using CNN and then PPM.The special branch predictor of this specification is configured to enhance the main line or baseline predictor for high performance use. This is particularly relevant in applications where high-performance computing (HPC) is performed thousands of times across thousands of machines. This is also useful in the case of widely distributed software that can run multiple times on a large number of heterogeneous computing devices. Embodiments of the present specification identify HTP branches in runtime data, stream their historical data to special branch predictors that may be embodied in coprocessors or FPGAs in some cases, and train special branch predictors CNN. The special branch predictor can then compute the auxiliary predictor metadata from the trained network and cache and reuse the results to achieve a dedicated performance boost.Certain embodiments of the special branch predictor of the present invention may require as few as seven least significant bits of the program counter (PC) value from the path history, thereby causing the application to be loaded there for execution. The base virtual address is unknown. In addition, the prediction gain can be maintained for the trajectory of one billion instructions, thereby demonstrating that the CNN-based special branch predictor extracts a stable prediction mode.The training module can be offline to the branch predictor and train the CNN every difficult predictive branch, and then allocate metadata that accommodates the pre-computed network response to a special branch predictor on the chip such as a coprocessor or FPGA. The training module can be used for situations where stable application behavior can be learned offline and used to improve large-scale distributed binaries, thereby amortizing training costs over time and across many different systems. As described above, when the PC address is masked as few as six or seven least significant bits during training, the CNN of this specification may be resilient to aliasing, which allows this approach to tolerate the basis between application execution. 
The virtual address changes without retraining. With the programmer modifying the source code and releasing new binaries, the network can be retrained and metadata can be updated to improve application performance. In some cases, this process can be automated without the need for specialized program analysis knowledge, and this process can be provided by, for example, a microprocessor vendor.Multi-layer CNNs can implement pattern matching in branch history data in a flexible manner. The CNN applies a small set of learned filters (i.e., convolution) in a number of locations to detect critical patterns that are subject to distortion such as position shifts. In contrast, the perceptron can learn the simpler position-related correlations in the existing history of the branch. These perceptrons are less tolerant of non-linearly separable data changes. Therefore, in the case where the branch depends on the program structure that the perceptron and the PPM predictor cannot predict well, as in the front of the branch, its iteration count changes throughout the execution to shift the prediction mode in the global history data. The CNN branch predictor is especially useful when looping.The branch predictor of this specification uses a multi-layer CNN that is optimized to make on-chip push feasible without the need for heavy front end calculations at predicted times. In particular, when network topology and weight precision are limited during training, the convolution filter response can be pre-computed and pipelined to simplify later on-chip predictions into a single binary inner product.An embodiment of a 1-bit CNN predictor can be trained offline using full-precision backward propagation along with binary constraints, such as following the four-step procedure:1.The candidate HTP branch is identified under the baseline predictor in the client workload.2.A historical data training set is established for each HTP branch.3.A 1-bit CNN predictor is trained via backward propagation on a dedicated platform.4.The network response is extracted and uploaded as metadata to the on-chip special branch predictor.Metadata that carries pre-computed convolution filter responses and network parameters can first be assigned to the client and installed in an on-chip special branch predictor dedicated to the HTP branch, providing a dedicated performance boost. This training and distribution process can be automated and can be provided as a service to clients that execute performance-sensitive binary on a large scale.The CNN of this specification uses a learned filter to implement multi-layer convolution mode matching to identify patterns that are subject to distortion and positional variations within noisy data. This situation often occurs in historical data for a large portion of the branches of traditional PPMs, perceptrons, and domain-specific predictors that perform poorly.However, the computational complexity of both CNN training and inference may be an obstacle to implementing a complete CNN as an auxiliary predictor on an on-chip or FPGA. Thus, embodiments of the present disclosure may be directed to the case where CNN predictors can be trained offline for individual hard-to-predict branches, and can be amortized by continuous performance improvement over large-scale distributed applications over time. Associated costs. 
Examples include binding branch prediction metadata to binary to enable private IPC promotion, or providing cloud-based optimization services to customers who deploy performance-sensitive barriers to many machines in the data center.To address the complexity of CNN inference when performing on-chip prediction, embodiments of the present specification provide optimizations resulting from specific choices of data encoding, network topology, and weight constraints imposed during network training. Using these, network parameters and pre-computed filter responses can be extracted from the trained CNN and installed on a single on-chip special branch predictor. Special branch predictors can be invoked only for HTP branches in a particular application, and special branch predictors can use a small number of logic and integer operations to generate predictions that are algebraically equivalent to the feedforward CNN inference.This is beneficial because it has been found that the accuracy of CNN in visual and audio classification tasks is often only slightly reduced when the accuracy of its parameters is severely limited. Accordingly, embodiments of the present specification provide a CNN-based branch predictor that requires 4,000 bits of on-chip memory per HTP branch and only requires parallel exclusive OR (XOR), accumulation, shift, integer Multiplication and subtraction to generate predictions.When training based on the same branch history data, the CNN can perform highly flexible pattern matching.The additional perceptron predictor multiplies the end dimension vector of the global history bit (e.g., the direction representing the existing end branch) by the n x 1 weight vector, and thresholds the result for prediction. The weight vector can be learned for each branch being predicted, and the weight vector captures the statistical correlation between the global history of the branch and the bits in each of its orientations.In contrast, the special branch predictor of this specification uses convolution to perform pattern matching that is intentionally not sensitive to positional shifts in historical data. This is because the normal program structure naturally shifts the pattern in the global history, for example, when a varying iteration loop may cause two related branches to be separated by an unpredictable number of temporary bits in the global history.Systems and methods for predicting difficult to predict branches will now be described with more specific reference to the drawings. It should be noted that some of the reference numerals may be repeated throughout the drawings to indicate that a particular device or block is fully or substantially identical within the drawings. However, this is not intended to suggest any particular relationship between the various embodiments disclosed. In some examples, certain types of elements may be referenced by a particular reference mark ("small part 10"), and individual categories or examples in the class may pass a hyphenated mark ("first specific widget" 10-1" and "second specific widget 10-2") reference.Some of the figures in the following figures detail example architectures and systems for implementing embodiments of the above. In some embodiments, one or more of the hardware components and/or instructions described above are emulated or implemented as software modules as detailed below.In some examples, the (multiple) instructions may be embodied as a "general vector friendly instruction format" as detailed below. 
In other embodiments, another instruction format is used. The following description of write mask registers, various data transformations (mixing, broadcasting, etc.), addressing, etc., is generally applicable to the description of embodiments of the above (multiple) instructions. In addition, the example systems, architecture, and pipeline are detailed below. Embodiments of the above (multiple) instructions may be executed on these systems, architectures, and pipelines, but are not limited to the systems, architectures, and pipelines detailed.The instruction set can include one or more instruction formats. A given instruction format may define various fields (eg, the number of bits, the location of the bits) to specify the operation to be performed (eg, an opcode) and the operand(s) on which the operation will be performed and/or (multiple) other data fields (eg, masks), and so on. Further decompose some of the instruction formats by defining instruction templates (or subformats). For example, an instruction template for a given instruction format can be defined as a different subset of fields with an instruction format (the included fields are usually in the same order, but at least some have different bit positions because fewer fields are included) And/or defined as a given field with a different interpretation. Thus, each instruction of the ISA is represented using a given instruction format (and, if defined, a given instruction template in an instruction template in this instruction format) and includes fields for specifying operations and operands.In one embodiment, the example ADD instruction has a particular opcode and instruction format, the instruction format including an opcode field for specifying the opcode and an operand field for selecting the operand (source 1 / destination and source) 2); and the occurrence of this ADD instruction in the instruction stream will have specific content in the operand field of the selected particular operand.A set of SIMD extensions called Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extension (VEX) coding scheme have been released and/or published (see, for example, theand IA-32 Architecture Software Developer's Manual). (64and IA-32 Architectures Software Developer's Manual), September 10014; and see "Advanced Vector Extensions Programming Reference" (October 10014).Sample instruction formatEmbodiments of the (multiple) instructions described herein can be embodied in different formats. In addition, the example systems, architecture, and pipeline are detailed below. Embodiments of the (multiple) instructions may be executed on such systems, architectures, and pipelines, but are not limited to the systems, architectures, and pipelines detailed.Universal vector friendly instruction formatThe vector friendly instruction format is an instruction format suitable for vector instructions (e.g., there are certain fields that are specific to vector operations). Although embodiments are described that support both vector operations and scalar operations through a vector friendly instruction format, alternative embodiments use vector operations only through vector friendly instruction formats.1 is a block diagram of selected components of a branch predictor 100 in accordance with one or more examples of the present specification. In the illustration, branch predictor 100 includes an HTP branch filter 104. The HTP branch filter 104 checks the incoming branch to determine if the branch should be classified as an HTP branch. 
If the branch is not an HTP branch, the branch can be predicted from the main line branch predictor 112 according to a conventional method such as PPM or Perceptron.However, if the branch is determined to be an HTP branch, it can be sent to the special branch predictor 116. In some embodiments, special branch predictor 116 may be a coprocessor or FPGA or on-die circuit that provides special branch prediction in accordance with the methods described herein. In particular, the special branch predictor 116 can employ the two-layer CNN method described herein.2 is a mathematical flow diagram showing a two-layer CNN in accordance with one or more examples of the present specification. In this example, input history data 204 is provided to a layer 1 convolution 208, which ultimately provides its results to a layer 2 binary classifier 212.The CNN 200 of Figure 2 maintains a plurality of one-bit precision weight vectors, each referred to as a binary filter, and uses a binary inner product to match the vector to each location in the global history. The binary filter of CNN 200 is formulated as a mode detector with a position-agnostic position, as compared to a perceptron formula in which the weight value represents a position-specific correlation between the branches. In this model, the detection results are fed to a second CNN layer, specifically layer 2, for binary classification of capture location specific patterns.In this example, the input history data 204 includes P, which refers to the m x n one-hot (1-hot) matrix of historical data.In layer 1 208, each filter is convolutionally applied to all end positions in the history. In the second layer 212, prediction is performed according to the predicted mode.1The bit CNN 200 can utilize the input's unique thermal encoding along with convolution and 1-bit weighting constraints to alleviate the large storage space that the PPM predictor may require, which may grow exponentially with the space that may be input. The CNN 200 maps the history of the length of the (PC, direction) pair to the index of the m×n binary matrix, where 1 is located in the (i, j)th position if the flag i appears in the history j, otherwise, zero Located in the location (ie, a matrix with a unique heat column). Since the inner product with the heat vector produces a single non-zero value, all of the first layer convolution for the 1 binary filter can be performed by looking up this value in the m*L*1 bit table. For perceptrons with b-bit integer weights, the storage device therefore scales O(m*L*1) instead of O(n*m*b), where L is much smaller than n and 1 is much smaller than b.This simplification is specific to the combination of one-hot coding, convolution, and 1-bit weight constraints that exist in CNN 200, and makes it possible to accelerate predictions using the calculations discussed below along with reasonable on-chip storage requirements. Specifically, in order to perform pattern matching on the same history of (PC, directional) pairs, the difference is that the CNN can use 4,000 bits of the storage device for a history of length 155, using the least significant bits (LSBs) of the PC per location. And one direction bit, relative to 952, 320 bits for a conventional perceptron with 6-bit integer weights.In CNN 200, the result of layer 1 convolution 208 is fed to a second layer sigmoid or softmax predictor that is constrained to have a binary input weight in layer 2 classifier 212. 
Layer 2 212 captures position-specific relationships among Layer 1 filter responses and can utilize fast binary inner product calculations. As described below, since the table lookup for the layer 1 filter response can be pipelined as the data arrives, the prediction may eventually only require parallel XOR, accumulation, shift, integer, multiply, and subtraction to calculate the second layer. Respond and generate new forecasts. This procedure is significantly simpler and more accurate than the speculative accumulation that may be required to streamline an integer inner product in a path-driven perceptron.Most branch mispredictions appear systematically. For example, the following code snippet shows two HTP branches:Although HTP 1 is data-dependent, HTP 2 is accurately correlated with the results of HTP 1. Both are biased at 33% of the time and are separated by a loop with a variable number of iterations. Although HTP 1 ensures that the global history contains the prediction mode of HTP 2, the uncorrelated branches inserted between these correlation branches by the loop cause the relative position in the historical data to change each time a prediction of HTP 2 is required. This is an example of a shift change. Ideally, without additional information about the data values, HTP 1 should be accurately predicted at least 66% of the time and HTP 2 should be predicted 100% accurately.However, traditional branch predictors may not be able to satisfy these ideals. Although the global history predictor stores HTP 2 statistics in each of its history tables to capture a sequence of increasing length, all but 35 predictions within 10,000 function calls to the randomized data The predictions are all derived from the estimated offset of the branch. In some cases, up to ten uncorrelated branches separating the HTP result in an explosion of a unique historical pattern that must be memorized by the predictor.The variable iteration loop in this code sample also limits the effectiveness of the perceptron predictor. Changes such as mode shifts can naturally be caused by common program structures, and these can corrupt exact match and position-specific data models. In the case of PPM, the number of modes that may occur exponentially grows with the length of the history in the worst case, thereby reducing the likelihood that the stored pattern will accumulate confidence statistics and be invoked to generate predictions. Depending on the table allocation strategy, this data can also store a large number of non-prediction patterns in the global table. For position-specific predictors such as perceptrons, shift change blocking weights always filter out noise and preserve predictive correlation.As noted above, the CNN-based special branch predictor of this specification provides a solution for providing better branch prediction in this case.The basic unit of CNN is a neuron that computes a function f based on a linear combination of a real-valued input vector xi of length N and a weight vector (Wi, b):A common choice for f is sigmoid, tanh, or a rectified linear unit, and f can be selected per application. Once trained, the weight vector is often referred to as a feature or filter because the weight vector takes a value corresponding to the useful mode learned from the data.Compared to a perceptron branch predictor that includes only a single neuron, the CNN obtains its predictive power from a neuron layer stacked on top of each other. 
At the lower level, the neuron weights are trained to produce a small set of filters that can detect the outstanding patterns in any position. The filter has a width of l < < N, which corresponds to the size of the pattern detected by this neuron. Starting at each location in the input data, each filter is convolutionally matched to a set of l input values. This is shown in Figure 4.Pooling operations and nonlinear selection of f are often applied to lower convolution filter responses to propagate only strong responses to higher-level neurons, thereby increasing the tolerance of higher-level neurons to shift changes and confusing information.For example, in the previous code sample, HTP 1 and HTP 2 were separated by a varying number of conditional branches due to the variable iteration count cycle. This program structure poses a challenge to the PPM predictor due to the large number of possible sequences that must be stacked. The single perceptron predictor also competes because the positional changes of these HTPs prevent position-specific weights from being adjusted to correctly capture the predicted signal.However, when matching the LSB of the PC of HTP 1 and the direction of the HTP, the CNN special branch predictor of the present specification can learn a convolution filter that generates a large number of integrals according to Equation 1. Therefore, the convolutional layer of CNN can correctly identify the prediction mode regardless of where the prediction mode appears in the global history, and can only propagate this information to a higher level.The CNN filter can be trained by adjusting weights and network parameters based on the example historical data set and the observed branch direction. In an example, there may be recorded multiple batches of branch history data, and a back propagation algorithm may be used to adjust the weights. The network can be instantiated per HTP by first selecting the number of layers, the filter size, and the type of neuron. Then, an embodiment may randomly initialize the weights and run an implementation of stochastic gradient descent, i.e., backward propagation, to iteratively update the parameter values until the prediction accuracy of the top layer converges. This is shown in more detail in conjunction with Figures 5 and 6.3 is a block diagram showing the application of CNN to a branch prediction problem in accordance with one or more examples of the present specification.Even when the neuron weight is constrained to only one bit with a value of +1 or -1, CNN can provide excellent pattern recognition. By using logical operations instead of floating-point arithmetic, the results can greatly simplify the inference for trained CNNs while sacrificing only moderate accuracy degradation. The binary inner product between the {-1, +1}N vectors can be calculated by XORing its bits, calculating the fill count, level shifting, and integer subtraction.During training, binary constraints can be imposed by maintaining a full-precision network, but algebraically ensuring that it will produce the same prediction when the weights are quantized. During the forward pass of the training, the network error is calculated as if the weight is binary; then, the weight can be adjusted according to this error during the backward pass.Because backward propagation uses small steps to adjust the weight value toward the convergence point, a high-precision version of the network can be used during training. 
Therefore, the embodiment of the present specification assumes that the binary CNN is trained offline from the baseline predictor unit in which high-precision calculation can be performed as shown in FIG. Once trained, the network can be simplified to perform fast inferences within the branch predictor unit (BPU).Training 1-bit CNN predictorThe CNN predictor can be trained per HTP branch, and in some embodiments, the CNN predictor employs full precision backward propagation. Training can be performed offline to the branch predictor unit and the results can be uploaded to the on-chip special branch predictor. Embodiments of the training process can include the following four operations:1.Identify candidate hard-to-predict branches.2.Establish a training data set for backward propagation.3.The CNN predictor is trained using backward propagation along with binary weight constraints.4.The network response is extracted and uploaded to the on-chip special branch predictor.The operation is described below by way of example in each of these four operations operating its own subtitle.Identify candidate hard-to-predict branchesIn one embodiment, the HTP branch is defined as a branch that produces more than 1,000 mispredicted branches per 30 million instructions, or a branch that is predicted with less than 99% accuracy under the baseline predictor.Screening of these branches can be done by using additional instruments on the client or offline by playing the binary on the emulator or virtual machine.Candidate HTP branches can also be filtered to ensure that a training set of at least 15,000 branch executions is required. This is a conservative estimate of the amount of data required to converge during a backward propagation of a 1-bit CNT predictor with eight binary filters, and in some embodiments this is empirically established.Establish training data sets for backward propagationBackpropagation uses a branch history training set along with branch results. In one example, a sequence of (PC, direction) pairs is recorded for each branch that leads to the HTP branch under study. Each sequence can have a parameterized length N, for example, 155. The direction of the HTP branch is also recorded. In order to encode the historical data into an input suitable for the CNN, the training module can map the input values to a heat vector.Each value in this history can be represented by a vector whose size is proportional to the number of possible unique input values. The vector contains 1 in the position indexed to the corresponding input value, and otherwise contains zero.4 is a block diagram illustration of a training set in accordance with one or more examples of the present specification. In the example of Figure 4, a historical sequence of length 5 is shown, including some of the least significant bits of the PC and the markers for taking or not taking. These inputs are quantified for the 23 entry table. The unique heat algebra representation of the quantized input is then recorded.During encoding, the training module masks the historical values to control the maximum size of the heat vector, and ultimately the storage required to maintain the pre-calculated values on the chip. This masking procedure also provides tolerance to the underlying virtual address changes of the program between executions without the need for retraining. For each (PC, direction) pair, the trainer links the (b-1) least significant bits of the PC to the associated 1-bit direction (0: no selection; 1: selection). 
Each value in the input history data is thus encoded as a 2b x 1 vector, where 1 is in the (PC and (2b-1) + direction) position and, otherwise, zero is in the position.The input history sequence of length 155 leading up to the HTP branch is thus represented as a (2bx155) dimensional matrix of 1-bit values. This program guarantees that all tuples in the historical data can be mapped to one of the 2b entries in the final lookup table.Train the CNN predictor using backward propagation along with binary weight constraintsFor each HTP branch, the trainer can pass its training data set to a platform dedicated to CNN predictor training. By way of non-limiting example, this platform can be a coprocessor on a client or a dedicated server in a cloud environment.On the training platform, the trainer uses a stochastic gradient descent (SGD) along with network weights and activations to perform standard back propagation using additional constraints of 1-bit precision. In one embodiment, training can be implemented using open source tools for GPU-accelerated backward propagation along with binary constraints.In some embodiments, the trainer can constrain the network topology to allow only binary N-D convolutions as the lowest network layer because it enables inferred computations to be pipelined. The linear layer does not implement this pipelining and is therefore only used for the upper layers of the network. The final layer classifier can be implemented as a standard full precision classifier (eg, sigmoid or softmax) during training. Since the value of the flow classifier is guaranteed to be an integer, an integer operation can be used on the chip to approximate the classification calculation. One embodiment implements threshold setting, batch normalization, and quantization units between each layer to maintain the versatility between the full precision network for training and the final 1-bit CNN for inference. An example two-layer network with four convolution filters can be used.Extract network responses and pass metadata to on-chip special predictorsOnce the network has been trained, data coding can be used to pre-calculate the values of the convolutional layer and the parameters required for the final layer classification to make predictions.By way of non-limiting example, metadata extracted and uploaded to a special predictor on the chip may include:An m×L table indexed into m (bits) pairs of bits, where each entry contains L 1-bit convolution filter responses· Two L×n bit Layer 2 binary filters for historical length nTwo integer constants used in the second layer binary inner product• Two scaling constants used to calculate the prediction based on the Layer 2 filter response.Although all filters in the network formula are algebraically represented by a value of -1/+1, the filter can be stored on the chip as a separate bit with a value of 0 or 1, and the appropriate algebraic adjustment can be included in the inner product. 
Calculated.The pre-calculated 1st layer filter table can be filled according to the following formula (based on x_bar=γ1*(x–μ1)/Σ12+β1 using the learning parameters of the normalized unit after the first layer)Bool(fj(i)+cj>=thresh1)For j=1...L;i=1...2mAmong them, the learned bias constant c and the threshold 1 = upper limit (Σ1*(-β1)/γ1)+μ1The second layer constant used to normalize, threshold set, and collapse the binary inner product as little as possible is given by:Pred add selection = round(-(μ selection×σtaken)+β selectionFinally, given a learned layer 2 filter h selection and h not selection and bias constant c selection and cnottaken, the scaling constant is:5 is a block diagram of a branch predictor model in accordance with one or more examples of the present specification. In this embodiment, the branch predictor model includes a coprocessor 504 and a branch prediction unit (BPU) 502. In the model of Figure 5, it can be assumed that the HTP is identified from the runtime data because the historical data is streamed to the coprocessor 504 for training. By way of example, this model can be used to train a single CNN per HTP and cache the results. The network parameters can then be loaded into the BPU 502 to provide a dedicated boost along with a baseline predictor such as the mainline branch predictor 112. Accordingly, it should be understood that in some embodiments, the BPU 502 of FIG. 5 may be an embodiment of the special branch predictor 116 of FIG.In one non-limiting example, the first 100 million instructions of the software package or benchmark are filtered to identify the HTP. HTP can be found at any point in the workload. However, in this embodiment, the screening range is limited to maximize the amount of evaluation data available in the fixed length trajectory.For each HTP identified in the first 100 million instructions, historical data is collected from the entire workload, including the direction of the existing 200 conditional branches and the PC value.For the unique heat history code, each input sample for training begins as a sequence of raw global path history data that has been directed to the HTP fetch instruction. Each of the 200-composed sequences contains a (PC, direction) pair that can be converted into a vector to be fed to the CNN. Since the PC is discrete and may take a large number of possible values, each value in the history can be mapped to a fixed-size unique heat vector. For example, according to the 2b=1024 setting size, the direction bits can be connected to the b-1 LSBs of the PC, and 1 can be placed at the position of the 2b×1 dimensional vector (PC<<1)+Dir∧(b- 1), and otherwise, place zero in the position.By arranging these column vectors into a matrix, a history of length 200 can be converted into a 2b x 200 matrix representing a single training sample. Although the matrix size is relatively large, the temporary data representation can be optimized during the inference.Referring to Figure 5, the HTP tracking and data collection operations described in the previous paragraph are embodied in block 508. These operations may be provided to the coprocessor 504 as a training data set 520. 
As described in the previous paragraph, network training block 524 can perform training on the training data set.In block 528, binarization of the training data is performed and a pre-calculation is performed.In block 532, a special branch predictor metadata cache is created and provided to a configurable special predictor 516.Baseline predictor 512 can perform real-time branch prediction using configurable special predictor 516.6 and 7 are block diagrams of CNN branch predictors in accordance with one or more examples of the present specification.Figure 6 shows the so-called full precision CNN implementation. While a full precision CNN implementation provides the highest possible prediction accuracy, in some embodiments, implementing a full precision CNN predictor in a real system may not be feasible. Accordingly, a simplified branch predictor CNN in accordance with one or more examples of the present specification is disclosed in FIG. Although the simplified branch predictor of Figure 7 may have a lower overall accuracy than the full implementation of Figure 6, near the same branch prediction accuracy can still be achieved.The full-precision CNN of Figure 6 has 32-bit floating point weights and is configured according to the layout shown in this figure. This includes two convolutional layers 604 each having 32 filters. The first convolution has a filter length of 1, and the second layer has a filter length of 3.The pooled layer then includes a pair maximum of 608. The largest pooling layer takes the largest filter response in adjacent locations in the historical data.This is followed by a linear layer of 16 neurons 612, each of which is capable of latching into a different mode in the underlying filter response.The final layer is a binary filter, which in this example is a sigmoid classifier with one neuron sigmoid 616. This uses the network response to calculate the value between 0 and 1, where all values above 0.5 correspond to the "select" prediction. In this embodiment, the tanh activation function for all neurons is used in the network in addition to the classification layer.Figure 7 illustrates a simplified CNN branch predictor that may be more practical for implementing a processor, coprocessor, FPGA, or other special branch predictor in some embodiments. This embodiment is characterized by a single convolutional layer having filter length 1 and binary weight as shown in block 704. This can include between 8 and 32 filters that do not have an offset term. Next is a normalization layer and a binarization layer with a block 708 for scaling and quantizing the response into one bit.The binary linear layer includes a single neuron 710 that does not have an offset term, followed by a normalization block 712, where the results are fed directly into a binary classifier layer with a single neuron sigmoid 716.In the embodiment of Figure 7, by way of non-limiting example, the offset terms in the convolutional layer and the linear layer are disabled. Because the input vector is also binary, this network is very much like the XOR network. Network weights can be trained with full precision and quantized after training for inference.8 is a block diagram of a special branch prediction apparatus and method in accordance with one or more examples of the present specification.The advantage of the special branch predictor of Figure 8 is that once trained, its inferred computation can be simplified to fit the constraints of the on-chip BPU. 
Note that the inner product between the {-1, +1}N vectors can be implemented to apply to the XOR, padding count, shift, and subtraction operations represented by the corresponding {0, 1}N. Thus, by way of a non-limiting example, this design employs three optimizations:1.When the heat vector is multiplied by the filter, the result is always the filter coefficient corresponding to the position of the non-zero value. Since the data can be encoded by indexing from (PC, direction) values to a single heat input vector, the matrix representation can replace the table lookup on the chip. Using this approach, the first layer of the network can be implemented by indexing directly from historical data to a convolution filter weight table. In addition, subsequent normalization and binarization operations produce a single bit for each possible filter weight, whereby the results can be pre-computed for those layers in advance when populating the lookup table. For m filters of length 2b denoted wj where j = 1...m, and the learned parameters μ1, σ1, γ1, β1 of the normalized layer of data transformed according to the following equation:Fill the 2b×m bit table T with:For i=1...b,j=1...mThe content of this table is the first part of the metadata that will be cached in the BPU.b. When applying a convolution of length 1, the filter response for each position in the input sequence is independent of its neighbor. Therefore, this allows the branch predictor to calculate the underlying response (values after convolution, normalization, and binarization) long before the HTP is fetched. When a conditional instruction is executed, the corresponding lower layer response can be retrieved from the lookup table and pushed into the first in, first out (FIFO) buffer. The FIFO buffer contains a response to a global history of, for example, 200 branches at any given time. When the HTP is taken out and prediction is needed, the buffer content can be fed directly to the higher network layer to calculate the prediction.c. To generate predictions, the branch predictor can evaluate binary linearity, normalization, and sigmoid classifier layers. Separately, this may require an inner product between the binary content of the FIFO buffer and the weight of the binary linear layer, scaling and shifting the resulting integer value according to the learned normalization parameters, and ultimately the result with 0.5 A comparison is made to determine if the branch will be "selected" or "not selected." However, by folding the final shift and subtraction of the binary inner product into a normalization formula and solving the intersection of the sigmoid thresholds, the branch predictor can calculate a single integer threshold instead of these operations. Therefore, the prediction operation is reduced to the first two operations on the binary inner product: parallel XOR, padding count, and integer comparison. Given a learned normalized parameter, a FIFO buffer of length 200x m, and in the case where it is noted that for input 0, the sigmoid function intersects 0.5, the threshold t can be calculated by solving the following equation:Figure 8 shows the on-chip inference corresponding to the BP-CNN auxiliary predictor. 
This demonstrates a four-stage process of performing branch predictions.At operation 1, data including a global history of (PC, direction) pairs reaches the lower layer response table 804.At operation 2, the result is pushed into the FIFO buffer 812 where the convolution result will be maintained.At operation 3, the buffer contents are XORed with the 1-bit binary linear layer weight 808 when the HTP is fetched. The number of generated ones is then counted.At operation 4, the sum 816 of one is compared to a threshold 820. This comparison with the threshold produces a prediction that the branch is selected or not selected.Embodiments of the design of the on-chip CNN branch predictor may include storage for four components:1.A 2b x m bit table used to maintain filter response.2.Historical length x m-bit FIFO buffer for maintaining convolution results.3.Historical length × m bit buffer for maintaining binary linear layer weights.4.A buffer used to hold a pre-computed integer threshold.Therefore, the storage device can be driven by the size of the input value map 2b, the number of convolution filters m in the network, and the history length. For example, a CNN where b = 8, m = 32 and a history length of 200 requires 20,992 bits. When m is reduced to 24, the storage device is 15,744 bits. For m = 12, the storage device is 7,872 bits.Further, it has been found through analysis that HTP often occurs in different workload stages. This provides an opportunity to reuse CNN storage over time. For example, a particular workload might have four HTPs, of which only two are always executing in the same workload phase. This allows the branch predictor to split the amount of storage required on the chip into two halves.Figures 9a-9b are block diagrams showing a generic vector friendly instruction format and its instruction templates in accordance with various embodiments of the present specification. Figure 9a is a block diagram showing a generic vector friendly instruction format and its class A instruction template in accordance with an embodiment of the present specification; and Figure 9b is a diagram showing a generic vector friendly instruction format and its class B instruction template in accordance with an embodiment of the present specification; Block diagram. In particular, Class A and Class B instruction templates are defined for the generic vector friendly instruction format 900, both of which include a memoryless access 905 instruction template and a memory access 920 instruction template. 
The term "universal" in the context of a vector friendly instruction format refers to an instruction format that is not tied to any particular instruction set.An embodiment of the present specification in which the vector friendly instruction format supports the following cases will be described, namely: 64-byte vector operand length (or size) and 32-bit (4 bytes) or 64-bit (8-byte) data element width ( Or size) (and thus, a 64-byte vector consists of 16 double-word sized elements or alternatively 8 quad-sized elements); 64-byte vector operand length (or size) and 16-bit (2 words) Section) or 8-bit (1 byte) data element width (or size); 32-byte vector operand length (or size) and 32-bit (4 bytes), 64-bit (8-byte), 16-bit (2 Byte), or 8-bit (1 byte) data element width (or size); and 16-byte vector operand length (or size) and 32-bit (4 bytes), 64-bit (8-byte), 16 Bit (2 bytes), or 8 bits (1 byte) data element width (or size); alternative embodiments may support larger, smaller, and/or different vector operand sizes (eg, 256-byte vector) Operands) are larger, smaller, or different data element widths (for example, 128-bit (16-byte) data element width).The class A instruction template of Figure 9a includes: 1) an instruction template for a full round control type operation 910 without memory access and an instruction for a data conversion type operation 915 without memory access in an instruction template without memory access 905. Templates; and 2) within the instruction template of memory access 920, an instruction template of 925 showing the timeliness of memory access and an instruction template of non-time-sensitive 930 of memory access. The Class B instruction template in Figure 9b includes: 1) an instruction template for partial rounding control type operation 912 showing write mask control without memory access and an instruction mask without memory access in an instruction template without memory access 905 The instruction template of the code controlled VSIZE type operation 917; and 2) within the instruction template of the memory access 920, the instruction template of the write mask control 927 of the memory access.The generic vector friendly instruction format 900 includes the following fields listed below in the order illustrated in Figures 9a-9b.Format field 940 - a particular value (instruction format identifier value) in the field uniquely identifies the vector friendly instruction format and thereby identifies that the instruction appears in the vector friendly instruction format in the instruction stream. Thus, the field is not required for an instruction set having only a generic vector friendly instruction format, in the sense that the field is optional.Base operation field 942 - its content distinguishes between different base operations.Register Index Field 944 - its contents specify the source and destination operands in the register or in memory, either directly or through address generation. These fields include a sufficient number of bits to select N registers from PxQ (eg, 32x512, 16x128, 32x1024, 64x1024) register files. 
Although N may be up to three sources and one destination register in one embodiment, alternative embodiments may support more or fewer source and destination registers (eg, up to two sources may be supported, where A source is also used as a destination to support up to three sources, one of which is also used as a destination, or can support up to two sources and one destination).Modifier field 946 - its content distinguishes instructions that appear in the general vector instruction format for memory accesses from instructions that appear in the generic vector instruction format that do not specify memory access; that is, instruction templates that have no memory access 905 A distinction is made between the instruction templates of the memory access 920. Memory access operations read and/or write to the memory hierarchy (in some cases, values in registers are used to specify source and/or destination addresses), while non-memory access operations do not (eg, source and/or destination) The ground is the register). Although in one embodiment, the field is selected between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.The augmentation operation field 950 - its content distinguishes which of a variety of different operations to perform in addition to the base operation. This field is for the context. In one embodiment of the present specification, this field is divided into a class field 968, an alpha field 952, and a beta field 954. The augmentation operation field 950 allows multiple sets of common operations to be performed in a single instruction instead of 2, 3, or 4 instructions.Scale field 960 - its content allows the content of the index field for memory address generation (eg, for address generation using 2 scale * index + base address) to be scaled.Displacement field 962A - its content is used as part of the memory address generation (e.g., for address generation using 2 ratio * index + base + displacement).Displacement factor field 962B (note that the displacement field 962A directly on the displacement factor field 962B indicates the use of one or the other) - its content is used as part of the address generation, which specifies the size (N) scaled by the memory access. The displacement factor, where N is the number of bytes in the memory access (eg, for address generation using a 2 scale * index + base address + scaled displacement). The redundant low order bits are ignored, and thus the content of the displacement factor field is multiplied by the total size (N) of the memory operand to generate the final displacement used in calculating the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 974 (described later herein) and the data manipulation field 954C. Displacement field 962A and displacement factor field 962B are not used for no memory access 905 instruction templates, and/or different embodiments may implement only one or both, in this sense, displacement field 962A and displacement factor field 962B It is optional.Data element width field 964 - its content distinguishes which of a plurality of data element widths will be used (in some embodiments, for all instructions; in other embodiments, for only some of the instructions). 
If only one data element width is supported and/or an aspect of the opcode is used to support the data element width, then the field is not required, in the sense that the field is optional.Write mask field 970 - its content controls whether the location of the data element in the destination vector operand reflects the result of the underlying operation and the augmentation operation on a per data element location basis. Class A instruction templates support merge-write mask operations, while class B instruction templates support both merge-write mask operations and zero-write mask operations. When merging, the vector mask allows any set of elements in the destination to be protected from updates (specified by the base operation and the augmentation operation) during any operation - in one embodiment, maintaining the corresponding mask bit with 0 The old value of each element of the destination. Conversely, when zeroing, the vector mask allows any set of elements in the destination to be zeroed (specified by the underlying operations and the augmentation operation) during any operation, in one embodiment, the elements of the destination are in the corresponding mask. Bits with a value of 0 are set to 0. A subset of this function is the ability to control the length of the vector of the operation being performed (i.e., the span of the element to be modified from the first to the last), however, the modified elements do not have to be contiguous. As such, write mask field 970 allows for partial vector operations, including loading, storing, arithmetic, logic, and the like. Although it is described that the content of the write mask field 970 selects one of the plurality of writemask registers containing the write mask to be used (and thus, the contents of the write mask field 970 indirectly identify The various embodiments of the present specification of the performed masking operation, but alternatively or additionally, the alternative embodiment allows the content of the masked write field 970 to directly specify the masking operation to be performed.Immediate field 972 - its content allows the specification of an immediate. This field is optional in the sense that it does not exist in a general vector friendly format that implements an immediate number and does not exist in instructions that do not use an immediate.Class field 968 - its content distinguishes between instructions of different classes. Referring to Figures 9a-9b, the contents of the field are selected between Class A and Class B instructions. In Figures 9a-9b, rounded squares are used to indicate the presence of a dedicated value in the field (e.g., in Figures 9a-9b, Class A 968A and Class B 968B for Class Field 968, respectively).Class A instruction templateIn the case of an instruction template of class A non-memory access 905, the alpha field 952 is interpreted as an RS field 952A whose content distinguishes which of the different types of augmentation operations will be performed (eg, rounding without memory access, respectively) Type operation 910 and data conversion type operation 915 without memory access instruction template specifies rounding 952A.1 and data transformation 952A.2), while beta field 954 distinguishes which of the specified types of operations will be performed. 
In the instruction template without memory access 905, scale field 960, displacement field 962A, and shift ratio field 962B are not present.Instruction template without memory access - fully rounded control operationIn the fully rounded control type operation 910 instruction template without memory access, the beta field 954 is interpreted as a rounding control field 954A whose content provides a static rounding operation. Although in the described embodiment of the present specification, the rounding control field 954A includes a suppress all floating point exception (SAE) field 956 and a rounding operation control field 958, alternative embodiments may encode the two concepts into the same A field, or only one or the other of these concepts/fields (eg, may have only rounding operation control field 958).SAE field 956 - its content distinguishes whether to disable the exception event report; when the content of SAE field 956 indicates that suppression is enabled, the given instruction does not report any kind of floating point exception flag and does not invoke any floating point exception handler.Rounding operation control field 958 - its content distinguishes which of a set of rounding operations to perform (eg, round up, round down, round to zero, and round to nearest). As such, the rounding operation control field 958 allows the rounding mode to be changed instruction by instruction. In one embodiment of the present specification in which the processor includes a control register for specifying a rounding mode, the content of the rounding operation control field 950 takes precedence over the register value.Instruction template without memory access - data transformation operationIn the data transformation type operation 915 instruction template without memory access, the beta field 954 is interpreted as a data transformation field 954B whose content distinguishes which of a number of data transformations will be performed (e.g., no data transformation, mixing, broadcast).In the case of a class A memory access 920 instruction template, the alpha field 952 is interpreted as an eviction hint field 952B whose content distinguishes which of the eviction hints to use (in Figure 9a, the instruction template for memory access aging 925) And the instruction template of the memory access non-aging 930 specifies the time-sensitive 952B.1 and the non-time-sensitive 952B.2), respectively, and the β field 954 is interpreted as the data manipulation field 954C, the content of which is to perform multiple data manipulation operations. Which of the (also known as primitives) (eg, no manipulation, broadcast, source up conversion, and destination down conversion). The memory access 920 instruction template includes a scale field 960, and optionally includes a displacement field 962A or a displacement scale field 962B.Vector memory instructions use translation support to perform vector loading from memory and store the vector to memory. As with ordinary vector instructions, vector memory instructions transfer data back and forth to the memory in a data element manner, where the elements actually transmitted are specified by the contents of the vector mask selected as the write mask.Memory access instruction template - timelinessTime-sensitive data is data that may be reused quickly enough to benefit from the cache. 
However, this is a hint and different processors can implement it in different ways, including completely ignoring the prompt.Memory access instruction template - non-timelinessNon-time-sensitive data is data that is unlikely to be reusable fast enough to benefit from the cache operations in the level 1 high cache and should be given priority for eviction. However, this is a hint and different processors can implement it in different ways, including completely ignoring the prompt.Class B instruction templateIn the case of a Class B instruction template, the alpha field 952 is interpreted as a write mask control (Z) field 952C whose content distinguishes whether the write mask operation controlled by the write mask field 970 should be merged or zeroed.In the case of a Class B non-memory access 905 instruction template, the portion of the beta field 954 is interpreted as an RL field 957A whose content distinguishes which of the different types of extended operations will be performed (eg, writes without memory access, respectively) The mask control portion rounding control type operation 912 instruction template and write mask control without memory access VSIZE type operation 917 instruction template specifies rounding 957A.1 and vector length (VSIZE) 957A.2), while β field 954 The rest distinguishes which of the specified types of operations will be performed. In the instruction template without memory access 905, the scale field 960, the displacement field 962A, and the displacement scale field 962B do not exist.In the write mask control portion rounding control type operation 910 instruction template without memory access, the remainder of the beta field 954 is interpreted as the rounding operation field 959A, and the exception event report is disabled (given instructions do not report any kind) The floating point exception flag does not raise any floating point exception handlers).Rounding operation control field 959A - as rounded operation control field 958, whose content distinguishes which of a set of rounding operations to perform (eg rounding up, rounding down, rounding to zero, and rounding nearest) ). Thus, the rounding operation control field 959A allows the rounding mode to be changed on a per instruction basis. In one embodiment of the present specification in which the processor includes a control register for specifying a rounding mode, the content of the rounding operation control field 950 takes precedence over the register value.In the write mask control VSIZE type operation 917 instruction template without memory access, the remainder of the beta field 954 is interpreted as a vector length field 959B whose content distinguishes which of a number of data vector lengths to execute (eg, 128, 256 or 512 bytes).In the case of a Class B memory access 920 instruction template, the portion of the beta field 954 is interpreted as a broadcast field 957B whose content distinguishes whether a broadcast type data manipulation operation will be performed, while the remainder of the beta field 954 is interpreted by the vector length field 959B. The memory access 920 instruction template includes a scale field 960 and optionally includes a displacement field 962A or a displacement scale field 962B.In the case of the generic vector friendly instruction format 900, the full opcode field 974 is shown to include a format field 940, a base operation field 942, and a data element width field 964. 
Although one embodiment is shown in which the full opcode field 974 includes all of these fields, in embodiments that do not support all of these fields, the full opcode field 974 includes less than all of these fields. The full opcode field 974 provides an opcode.The augmentation operation field 950, the data element width field 964, and the write mask field 970 allow these features to be specified instruction by instruction in a generic vector friendly instruction format.The combination of the write mask field and the data element width field creates various types of instructions because these instructions allow the mask to be applied based on different data element widths.The various instruction templates that appear within Class A and Class B are beneficial in different situations. In some embodiments of this specification, different processors or different cores within a processor may support only Class A, Class B only, or both. For example, high-performance general-purpose out-of-order cores intended for general-purpose computing can only support Class B, and cores intended primarily for graphics and/or scientific (throughput) computing can only support Class A and are intended to be used. Both of them support both (of course, cores with some mix of templates and instructions from both classes, but not all templates and instructions from both categories are within the scope of this specification). Similarly, a single processor can include multiple cores, all cores support the same class or different cores support different classes. For example, in a processor with separate graphics and a generic core, a core in the graphics core intended primarily for graphics and/or scientific computing may only support class A, while one or more of the generic cores It can be a high performance general purpose core with only B-class out-of-order execution and register renaming intended for general purpose computing. Another processor that does not have a separate graphics core may include one or more general purpose or out-of-order cores that support both Class A and Class B. Of course, in various embodiments of the present specification, features from one class may also be implemented in other classes. Programs written in high-level languages can be (eg, compiled in time or statically compiled) in a variety of different executable forms, including: 1) forms of instructions that have only classes or classes supported by the target processor for execution Or 2) an alternate routine written with different combinations of instructions of all classes and having the form of control flow code selected to be executed based on instructions supported by the processor currently executing the code.Sample specific vector friendly instruction formatFigures 10a-10d are block diagrams showing example specific vector friendly instruction formats in accordance with one or more examples of the present specification. Figure 10a illustrates a dedicated vector friendly instruction format 1000 that specifies the position, size, interpretation, and order of the fields, as well as the values of some of those fields, in the sense that the dedicated vector friendly instruction format 1000 is dedicated. The dedicated vector friendly instruction format 1000 can be used to extend the x86 instruction set, and thus some of the fields are similar to or identical to those used in existing x86 instruction sets and their extensions (e.g., AVX). 
The format remains consistent with the prefix encoding field, the real opcode byte field, the MOD R/M field, the SIB field, the displacement field, and the immediate field with the extended existing x86 instruction set. The fields from Figures 9a and 9b are shown, the fields from Figures 10a-10d being mapped to the fields from Figures 9a and 9b.It should be understood that although in the context of the general vector friendly instruction format 900 for illustrative purposes, embodiments of the present specification have been described with reference to the dedicated vector friendly instruction format 1000, the specification is not limited to the dedicated vector friendly instruction format 1000, declarative Except for places. For example, the generic vector friendly instruction format 900 contemplates various possible sizes of various fields, while the dedicated vector friendly instruction format 1000 is shown as having fields of a particular size. As a specific example, although the data element width field 964 is shown as one bit field in the dedicated vector friendly instruction format 1000, the present description is not limited thereto (that is, the general vector friendly instruction format 900 contemplates the data element width field 964 other than size).The generic vector friendly instruction format 900 includes the fields listed below in the order shown in Figure 10a.EVEX prefix (bytes 0-3) 1002 - encoded in four bytes.Format field 940 (EVEX byte 0, bit [7:0]) - the first byte (EVEX byte 0) is format field 940 and it contains 0x62 (in one embodiment for distinguishing vector friendly instruction formats) Unique value).The second-fourth byte (EVEX bytes 1-3) includes a plurality of bit fields that provide dedicated capabilities.REX field 1005 (EVEX byte 1, bit [7-5]) - by EVEX.R bit field (EVEX byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [ 6]–X) and (957BEX byte 1, bit [5]–B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit field and are encoded in a 1's complement form, ie ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. The other fields of these instructions encode the lower three bits (rrr, xxx, and bbb) of the register index as known in the art, thereby increasing EVEX.R, EVEX.X, and EVEX.B. Rrrr, Xxxx, and Bbbb are formed.REX' field 910 - this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX word) used to encode the upper 16 or lower 16 registers of the extended 32 register sets. Section 1, bit [4]–R'). In one embodiment, this bit is stored in a bit-reversed format along with the other bits indicated below (in the well-known x86 32-bit mode) and the BOUND instruction with a real opcode byte of 62, but in MOD The value 11 in the MOD field is not accepted in the R/M field (described below); other embodiments do not store the indicated bit and other indicated bits in an inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and other RRRs from other fields.Opcode mapping field 1015 (EVEX byte 1, bit [3:0] - mmmm) - its content encodes the implied leading opcode byte (0F, 0F 38, or 0F 3).Data element width field 964 (EVEX byte 2, bit [7] - W) - is represented by the token EVEX.W. 
EVEX.W is used to define the granularity (size) of a data type (32-bit data element or 64-bit data element).EVEX.vvvv 1020 (EVEX byte 2, bit [6:3]-vvvv) - The role of EVEX.vvvv can be as follows: 1) EVEX.vvvv encodes the first source register operand and has two or two pairs The above source operand instruction is valid, the first source register operand is specified in the form of inversion (1's complement); 2) EVEX.vvvv encodes the destination register operand, and the destination register operand is offset for the specific vector by 1 The form of the complement is specified; or 3) EVEX.vvvv does not encode any operands, retains this field, and should contain 1111b. Thus, the EVEX.vvvv field 1020 encodes the 4 low order bits of the first source register specifier stored in inverted (1's complement) form. Depending on the instruction, an additional different EVEX bit field is used to extend the specifier size to 32 registers.EVEX.U 968 class field (EVEX byte 2, bit [2]-U) - if EVEX.U=0, it indicates class A or EVEX.U0; if EVEX.U=1, it indicates class B Or EVEX.U1.The prefix encoding field 1025 (EVEX byte 2, bits [1:0]-pp) - provides additional bits for the base operation field. In addition to providing support for legacy SSE instructions in the EVEX prefix format, this also has the benefit of compressing the SIMD prefix (the EVEX prefix requires only 2 bits instead of requiring bytes to express the SIMD prefix). In one embodiment, these legacy SIMD prefixes are encoded into SIMD prefix encoding fields in order to support legacy SSE instructions using the SIMD prefix (66H, F2H, F3H) in the legacy format and in the EVEX prefix format; and are provided at runtime The PLA to the decoder was previously extended to the traditional SIMD prefix (so the PLA can execute these traditional instructions in the legacy and EVEX formats without modification). While newer instructions may extend the content of the EVEX prefix encoding field directly as an opcode, for consistency, particular embodiments extend in a similar manner, but allow different meanings to be specified by these legacy SIMD prefixes. Alternate embodiments may redesign the PLA to support 2-bit SIMD prefix encoding and thus do not require extension.Alpha field 952 (EVEX byte 3, bit [7] - EH, also known as EVEX.eh, EVEX.rs, EVEX.rl, EVEX. write mask control, and EVEX.n; also shown as a) - - As mentioned previously, this field is context specific.字段 field 954 (EVEX byte 3, bit [6:4]-SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB, also shown as βββ ) - As mentioned earlier, this field is context specific.REX' field 910 - this is the remainder of the REX' field and is an EVEX.V' bit field (EVEX byte) that can be used to encode the upper 16 or lower 16 registers of the extended 32 register sets. 3, bit [3]–V'). This bit is stored in a bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V', EVEX.vvvv.Write mask field 970 (EVEX byte 3, bits [2:0] - kkk) - its contents specify the register index in the write mask register as previously described. In one embodiment, the specific value EVEX.kkk=000 has a special behavior that implies no write mask for a particular instruction (this can be implemented in various ways, including using hardwired to all 1 write masks or bypasses) Mask hardware hardware to implement).The real opcode field 1030 (byte 4) is also referred to as an opcode byte. 
A portion of the opcode is specified in this field.The MOD R/M field 1040 (byte 5) includes a MOD field 1042, a Reg field 1044, and an R/M field 1046. As previously described, the contents of the MOD field 1042 distinguish between memory access and non-memory access operations. The role of the Reg field 1044 can be summarized into two situations: encoding the destination register operand or the source register operand; or being considered an opcode extension and not used to encode any instruction operand. The role of the R/M field 1046 can include the following: encoding an instruction operand that references a memory address; or encoding a destination register operand or a source register operand.Proportional, Index, Base Address (SIB) Byte (Byte 6) - As previously described, the contents of the Scale field 950 are used for memory address generation. SIB.xxx 1054 and SIB.bbb 1056 - the contents of these fields have been previously mentioned for register references Xxxx and Bbbb.Displacement field 962A (Bytes 7-10) - When MOD field 1042 contains 10, byte 7-10 is displacement field 962A, and it works the same as a traditional 32-bit displacement (disp32) and works in byte granularity .Displacement Factor Field 962B (Byte 7) - When MOD field 1042 contains 01, byte 7 is the displacement factor field 962B. This field is located at the same position as the traditional x86 instruction set 8-bit shift (disp8), which operates at byte granularity. Since disp8 is sign-extended, it can only be addressed between -128 and 127-byte offsets; in terms of 64-byte cache lines, disp8 can be set to only four really useful values - 128 bits of 128, -64, 0, and 64; disp32 is used because a larger range is often required; however, disp32 requires 4 bytes.In contrast to disp8 and disp32, the displacement factor field 962B is a reinterpretation of disp8; when the displacement factor field 962B is used, the actual displacement is determined by multiplying the content of the displacement factor field by the size (N) of the memory operand access.This type of displacement is called disp8*N. This reduces the average instruction length (a single byte is used for displacement, but has a much larger range). This compression displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and thus the redundant low order bits of the address offset need not be encoded. In other words, the displacement factor field 962B replaces the 8-bit shift of the conventional x86 instruction set.Thus, the displacement factor field 962B is encoded in the same manner as the 8-bit shift of the x86 instruction set (and therefore unchanged in the ModRM/SIB encoding rules), the only difference being that disp8 is overloaded to disp8*N.In other words, there is no change in the encoding rule or encoding length, but only in the interpretation of the displacement value by hardware (this requires scaling the displacement by the size of the memory operand to obtain the byte address offset). ).The immediate field 972 operates as previously described.Full opcode fieldFigure 10b is a block diagram showing the fields in the dedicated vector friendly instruction format 1000 that make up the full opcode field 974, in accordance with one embodiment. Specifically, the full opcode field 974 includes a format field 940, a base operation field 942, and a data element width (W) field 964. 
The base operation field 942 includes a prefix code field 1025, an opcode map field 1015, and a real action code field 1030.Register index fieldFigure 10c is a block diagram showing fields in a dedicated vector friendly instruction format 1000 that constitutes a register index field 944, in accordance with one embodiment. In particular, register index field 944 includes REX field 1005, REX' field 1010, MODR/M.reg field 1044, MODR/M.r/m field 1046, VVVV field 1020, xxx field 1054, and bbb field 1056.Extended operation fieldFigure 10d is a block diagram showing the fields in the Dedicated Vector Friendly Instruction Format 1000 that make up the augmentation operation field 950, in accordance with one embodiment. When class (U) field 968 contains 0, it indicates EVEX.U0 (class A 968A); when it contains 1, it indicates EVEX.U1 (class B 968B). When U = 0 and the MOD field 1042 contains 11 (indicating no memory access operation), the alpha field 952 (EVEX byte 3, bits [7] - EH) is interpreted as the rs field 952A. When rs field 952A contains 1 (rounded 952A.1), β field 954 (EVEX byte 3, bit [6:4] - SSS) is interpreted as rounding control field 954A. Rounding control field 954A includes a one-bit SAE field 956 and two rounding operation fields 958. When the rs field 952A contains 0 (data transform 952A.2), the beta field 954 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 954B. When U=0 and the MOD field 1042 contains 00, 01 or 10 (indicating a memory access operation), the alpha field 952 (EVEX byte 3, bits [7] - EH) is interpreted as the eviction hint (EH) field 952B and β Field 954 (EVEX byte 3, bit [6:4] - SSS) is interpreted as a three bit data manipulation field 954C.When U = 1, the alpha field 952 (EVEX byte 3, bits [7] - EH) is interpreted as a write mask control (Z) field 952C. When U=1 and the MOD field 1042 contains 11 (indicating no memory access operation), a portion of the beta field 954 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 957A; when it contains 1 ( When entering 957A.1), the remainder of the beta field 954 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as rounding operation field 959A, and when RL field 957A contains 0 (VSIZE957.A2) When the rest of the β field 954 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as a vector length field 959B (EVEX byte 3, bits [6-5] - L1-0) . When U=1 and the MOD field 1042 contains 00, 01 or 10 (indicating a memory access operation), the β field 954 (EVEX byte 3, bits [6:4]–SSS) is interpreted as a vector length field 959B (EVEX word) Section 3, bits [6-5]–L1-0) and broadcast field 957B (EVEX byte 3, bits [4]–B).Sample register architectureFIG. 11 is a block diagram of a register architecture 1100, in accordance with one embodiment. In the illustrated embodiment, there are 32 512-bit wide vector registers 1110; these registers are referenced as zmm0 to zmm31.The lower order 256 bits of the lower 16zmm register are overlaid on register ymm0-16. The lower order 128 bits of the lower 16zmm register (lower order 128 bits of the ymm register) are overlaid on register xmm0-15.The dedicated vector friendly instruction format 1000 operates on these overridden register registers, as shown in the following table.In other words, vector length field 959B selects between a maximum length and one or more other shorter lengths, wherein each such shorter length is half of the previous length and does not have an instruction template for vector length field 959B Operates on the maximum vector length. 
Moreover, in one embodiment, the Class B instruction templates of the Dedicated Vector Friendly Instruction Format 1000 operate on packed or scalar single/double precision floating point data as well as packed or scalar integer data. The scalar operation is the operation performed on the lowest order data element position in the zmm/ymm/xmm register; depending on the embodiment, the higher order data element position remains the same or zeroed before the instruction.Write Mask Register 1115 - In the illustrated embodiment, there are 8 write mask registers (k0 through k7), each of which is 64 bits in size. In an alternate embodiment, the size of the write mask register 1115 is 16 bits. As previously stated, in one embodiment, the vector mask register k0 cannot be used as a write mask; when the code indicating the normal indication k0 is used as a write mask, it selects the hard-wired write mask 0xFFFF, which is effective The write mask operation of this instruction is disabled.General Purpose Register 1125 - In the illustrated embodiment, there are sixteen 64-bit general purpose registers that are used with existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.A scalar floating point stack register file (x87 stack) 1145 over which the MMX packed integer flat register file 1150 is overlaid - in the illustrated embodiment, the x87 stack is used to extend the 32/64 using the x87 instruction set. The /80-bit floating-point data performs an eight-element stack of scalar floating-point operations; the MMX registers are used to perform operations on 64-bit packed integer data, and the operands are saved for certain operations performed between the MMX and XMM registers.Other embodiments may use wider or narrower registers. In addition, other embodiments may use more, fewer, or different register files and registers.Example core architecture, processor, and computer architectureThe processor cores can be implemented in different ways, for different purposes, in different processors. For example, implementations of such cores may include: 1) a generic ordered core intended for general purpose computing; 2) a high performance general purpose out-of-order core intended for general purpose computing; 3) intended primarily for graphics and/or Or a dedicated core for scientific (throughput) calculations. Implementations of different processors may include: 1) including one or more general purpose ordered cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2) Includes coprocessors that are intended for one or more dedicated cores primarily for graphics and/or scientific throughput. Such different processors result in different computer system architectures, which may include: 1) a coprocessor on a chip separate from the CPU; 2) a coprocessor on a separate die in the same package as the CPU; 3) Coprocessors on the same die as the CPU (in this case, such coprocessors are sometimes referred to as dedicated logic such as integrated graphics and/or science (throughput) logic, or as dedicated cores And 4) the described CPU (sometimes referred to as an application core or application processor), the coprocessor described above, and additional functions may be included on a system on a chip. 
The example core architecture is described next, followed by an example processor and computer architecture.Example core architectureOrdered and out of order nuclear block diagramFigure 12a is a block diagram showing an out-of-order issue/execution pipeline of an example in-order pipeline and an example register renaming. Figure 12b is a block diagram showing an embodiment of an in-order architecture core to be included in a processor and an out-of-order issue/execution architecture core of an example register renaming. The solid lined boxes in Figures 12a-12b show the in-order pipeline and the in-order core, while the optional added dashed box shows the register renaming, out-of-order issue/execution pipeline and core. Given that the ordered aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.In FIG. 12a, processor pipeline 1200 includes fetch stage 1202, length decode stage 1204, decode stage 1206, allocation stage 1208, rename stage 1210, scheduling (also known as dispatch or issue) stage 1212, register read/memory read. Stage 1214, execution stage 1216, write back/memory write stage 1218, exception handling stage 1222, and commit stage 1224.Figure 12b shows a processor core 1290 including a front end unit 1230 coupled to an execution engine unit 1250, and both an execution engine unit and a front end unit are coupled to the memory unit 1270. Core 1290 can be a Reduced Instruction Set Computing (RISC) core, a Complex Instruction Set Computing (CISC) core, a Very Long Instruction Word (VLIW) core, or a hybrid or other core type. As a further option, core 1290 can be a dedicated core such as, for example, a network or communication core, a compression engine, a coprocessor core, a general purpose computing graphics processing unit (GPGPU) core, or a graphics core, and the like.The front end unit 1230 includes a branch prediction unit 1232 coupled to an instruction cache unit 1234 that is coupled to an instruction conversion lookaside buffer (TLB) 1236 that is coupled to the instruction fetch unit 1238, the instruction The fetch unit 1238 is coupled to the decode unit 1240. Decoding unit 1240 (or decoder) may decode the instructions and generate one or more micro-ops, micro-code entry points, micro-instructions that are decoded from, or otherwise reflect from, the original instructions. , other commands, or other control signals are output. Decoding unit 1240 can be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, lookup tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), and the like. In one embodiment, core 1290 includes microcode ROM or other medium (e.g., in decoding unit 1240 or otherwise within front end unit 1230) for storing microcode for certain macro instructions. Decoding unit 1240 is coupled to rename/allocator unit 1252 in execution engine unit 1250.Execution engine unit 1250 includes a rename/allocator unit 1252 coupled to retirement unit 1254 and a set 1256 of one or more scheduler units. Scheduler unit 1256 represents any number of different schedulers, including reserved stations, central command windows, and the like. Scheduler unit 1256 is coupled to physical register file unit 1258. 
Each physical register file unit 1258 represents one or more physical register files, where different physical register files store one or more different data types, such as scalar integers, scalar floating points, packed integers, packed floating point, vector integers. , vector floating point, state (for example, an instruction pointer as the address of the next instruction to be executed), and so on. In one embodiment, physical register file unit 1258 includes a vector register unit, a write mask register unit, and a scalar register unit. These register units can provide architectural vector registers, vector mask registers, and general purpose registers. Physical register file unit 1258 is overridden by retirement unit 1254 to illustrate various ways in which register renaming and out-of-order execution can be implemented (such as using reorder buffers and retiring register files, using future files, history) Buffers, retiring register files, using register maps and register pools, etc.). The retirement unit 1254 and physical register file unit 1258 are coupled to the execution cluster 1260. Execution cluster 1260 includes a collection of one or more execution units 1262 and a collection of one or more memory access units 1264. Execution unit 1262 can perform a variety of operations (eg, shifting, addition, subtraction, multiplication) and can execute on a variety of data types (eg, scalar floating point, compact integer, compact floating point, vector integer, vector floating point) . Although some embodiments may include multiple execution units dedicated to a particular function or set of functions, other embodiments may include only one execution unit or multiple execution units that perform all of the functions. Scheduler unit 1256, physical register file unit 1258, and execution cluster 1260 are shown as potentially multiple, as some embodiments create separate pipelines for certain types of data/operations (eg, scalar integer pipelines, scalar floating points) / Compact integer/tight floating point/vector integer/vector floating point pipeline, and/or each with its own scheduler unit, physical register file unit and/or memory access pipeline executing clusters - and in separate memory In the case of an access pipeline, some embodiments in which only the execution cluster of the pipeline has a memory access unit 1264 are implemented. It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issued/executed, and the remaining pipelines may be ordered for publication/execution.A set 1249 of memory access units is coupled to a memory unit 1270 that includes a data TLB unit 1272 that is coupled to a data cache unit 1274 that is coupled to a second level (L2) cache unit 1276. . In one embodiment, memory access unit 1264 can include a load unit, a memory address unit, and a store data unit, each of which is coupled to data TLB unit 1272 in memory unit 1270. Instruction cache unit 1234 is also coupled to a level 2 (L2) cache unit 1276 in memory unit 1270. 
L2 cache unit 1276 is coupled to one or more other levels of cache and is ultimately coupled to main memory.As an example, a register-renamed, out-of-order issue/execute core architecture may implement pipeline 1200 as follows: 1) instruction fetch 1238 performs fetch and length decode stages 1202 and 1204; 2) decode unit 1240 performs decode stage 1206; 3) rename The allocator unit 1252 performs the allocation stage 1208 and the rename stage 1210; 4) the scheduler unit 1256 executes the scheduling stage 1212; 5) the physical register file unit 1258 and the memory unit 1270 perform the register read/memory read stage 1214; 1260 executes execution stage 1216; 6) memory unit 1270 and physical register file unit 1258 execute write-back/memory write stage 1218; 7) each unit may involve exception handling stage 1222; and 8) retirement unit 1254 and physical register file unit 1258 executes the commit stage 1224.Core 1290 can support one or more instruction sets (for example, the x86 instruction set (with some extensions added with newer versions); MIPS instruction set from MIPS Technologies, Sunnyvale, California; San Francisco, California The ARM instruction set of ARM Holdings (with optional additional extensions such as NEON) from Neville, including the instructions described in this article. In one embodiment, core 1290 includes logic for supporting compact data command set extensions (e.g., AVX1, AVX2), thereby allowing operations used by many multimedia applications to be performed using compacted data.It should be understood that the core can support multi-threading (performing two or more parallel operations or a collection of threads) and can be done in a variety of ways, including time division multithreading, synchronization. Multi-threading (where a single physical core provides a logical core for each of the threads that the physical core is synchronizing with multi-threading), or a combination thereof (eg, time-division fetch and decode and thereafter, such as withhyper-threading technology Synchronous multithreading).Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming can be used in an ordered architecture. Although the illustrated embodiment of the processor also includes separate instruction and data cache units 1234/1274 and shared L2 cache unit 1276, alternative embodiments may have a single internal cache for both instructions and data, Such as, for example, a level one (L1) internal cache or multiple levels of internal cache. In some embodiments, the system can include a combination of an internal cache and an external cache external to the core and/or processor. Alternatively, all caches can be external to the core and/or processor. Example ordered core architectureFigures 13a-13b show block diagrams of a more specific example ordered core architecture that will be one of a plurality of logical blocks in a chip (including other cores of the same type and/or different types). Depending on the application, these logic blocks communicate with some fixed functional logic, memory IO interfaces, and other necessary IO logic through a high bandwidth interconnected network (e.g., a ring network).Figure 13a is a block diagram of a single processor core and its connection to the on-die interconnect network 1302 and its local subset 1304 of a level 2 (L2) cache, in accordance with one or more embodiments. 
In one embodiment, the instruction decoder 1300 supports an x86 instruction set with a compact data instruction set extension. The L1 cache 1306 allows for low latency access to the cache memory in the scalar and vector units. Although in one embodiment (to simplify the design), scalar unit 1308 and vector unit 1310 use separate sets of registers (scalar register 1312 and vector register 1314, respectively), and data transferred between these registers is written to memory. And then read back from the level one (L1) cache 1306, but other embodiments may use different methods (eg, using a single set of registers or including allowing data to be transferred between the two register files without being written and read back) Communication path).The local subset 1304 of the L2 cache is part of the global L2 cache, which is divided into a plurality of separate local subsets, one local subset per processor core. Each processor core has a direct access path to a local subset 1304 of its own L2 cache. The data read by the processor core is stored in its L2 cache subset 1304 and can be quickly accessed in parallel with other processor cores accessing their own local L2 cache subset. The data written by the processor core is stored in its own L2 cache subset 1304 and flushed from other subset dumps if necessary. The ring network ensures the consistency of shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each circular data path is 1012 bits wide in each direction.Figure 13b is an expanded view of a portion of the processor core of Figure 13a, in accordance with an embodiment of the present specification. Figure 13b includes the L1 data cache 1306A portion of L1 cache 1304, as well as more details regarding vector unit 1310 and vector register 1314. Specifically, vector unit 1310 is a 16 wide vector processing unit (VPU) (see 16 wide ALU 1328) that performs one or more of integer, single precision floating point, and double precision floating point instructions. The VPU supports mixing of register inputs by mixing unit 1320, numerical conversion by numeric conversion units 1322A-B, and replication of memory inputs by copy unit 1324. Write Mask Register 1326 allows assertion of the resulting vector write.14 is a block diagram of a processor 1400 that may have more than one core, may have an integrated memory controller, and may have integrated graphics devices, in accordance with an embodiment of the present specification. The solid lined box in Figure 14 shows a processor 1400 having a single core 1402A, a system agent 1410, one or more sets of bus controller units 1416, and an optional addition of a dashed box with a plurality of cores 1402A- N. A set of one or more integrated memory controller units 1414 in system agent unit 1410 and an alternate processor 1400 of dedicated logic 1408.Thus, different implementations of processor 1400 can include: 1) a CPU, where dedicated logic 1408 is integrated graphics and/or scientific (throughput) logic (which can include one or more cores), and cores 1402A-N are one or Multiple general purpose cores (eg, a generic ordered core, a generic out-of-order core, a combination of the two); 2) a coprocessor, where the core 1402A-N is intended primarily for graphics and/or scientific throughput A large number of dedicated cores; and 3) coprocessors, where the core 1402A-N is a large number of general purpose ordered cores. 
Thus, processor 1400 can be a general purpose processor, coprocessor, or special purpose processor such as, for example, a network or communications processor, a compression engine, a graphics processor, a GPGPU (general graphics processing unit), a high throughput integrated core ( MIC) coprocessor (including 30 or more cores), or embedded processor. The processor can be implemented on one or more chips. Processor 1400 can be part of one or more substrates and/or can be implemented on one or more substrates using any of a variety of process technologies, such as BiCMOS, CMOS, or NMOS.The memory hierarchy includes one or more levels of cache within the core, a set or one or more shared cache units 1406, and an external memory (not shown) coupled to the set of integrated memory controller units 1414. The set of shared cache units 1406 may include one or more intermediate caches, such as a second level (L2), a third level (L3), a fourth level (L4), or other level of cache, a last level cache. (LLC) and/or a combination of the above. Although in one embodiment, the ring-based interconnect unit 1412 interconnects the integrated graphics logic 1408, the set of shared cache units 1406, and the system proxy unit 1410/integrated memory controller unit 1414, alternative embodiments may use any number. Known techniques are used to interconnect such units. In one embodiment, coherency between one or more cache units 1406 and cores 1402A-N may be maintained.In some embodiments, one or more of the cores 1402A-N can implement multi-threading. System agent 1410 includes those components that coordinate and operate cores 1402A-N. System agent unit 1410 can include, for example, a power control unit (PCU) and a display unit. The PCU may be the logic and components required to adjust the power states of cores 1402A-N and integrated graphics logic 1408, or may include such logic and components. The display unit is used to drive one or more externally connected displays.Cores 1402A-N may be isomorphic or heterogeneous in terms of architectural instruction sets; that is, two or more of these cores 1402A-N may be capable of executing the same set of instructions, while other cores may be able to perform the Only a subset of the instruction set or a different instruction set.Sample computer architecture15-18 are block diagrams of example computer architectures. Known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, Other system designs and configurations for video game devices, set top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, multiple systems and electronic devices capable of containing the processors and/or other execution logic disclosed herein are generally suitable.Referring now to Figure 15, shown is a block diagram of a system 1500 in accordance with one embodiment. System 1500 can include one or more processors 1510, 1515 that are coupled to controller hub 1520. In one embodiment, controller hub 1520 includes a graphics memory controller hub (GMCH) 1590 and an input/output hub (IOH) 1550 (which may be on separate chips); GMCH 1590 includes a memory and graphics controller, memory 1540 And coprocessor 1545 is coupled to the memory and graphics controller; IOH 1550 couples input/output (IO) device 1560 to GMCH 1590. 
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), with the memory 1540 and the coprocessor 1545 coupled directly to the processor 1510, and the controller hub 1520 in a single chip with the IOH 1550.

The optional nature of the additional processor 1515 is indicated by dashed lines in Figure 15. Each processor 1510, 1515 can include one or more of the processing cores described herein, and can be some version of the processor 1400.

The memory 1540 can be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1520 communicates with the processor(s) 1510, 1515 via a multi-drop bus such as a front side bus (FSB), a point-to-point interface such as an UltraPath Interconnect (UPI), or a similar connection 1595.

In one embodiment, the coprocessor 1545 is a dedicated processor such as, for example, a high throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1520 can include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1510, 1515 in terms of a spectrum of quality metrics including architecture, microarchitecture, thermal, power consumption characteristics, and so forth.

In one embodiment, the processor 1510 executes instructions that control data processing operations of a general type. Coprocessor instructions can be embedded within these instructions. The processor 1510 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1545. Accordingly, the processor 1510 issues these coprocessor instructions (or control signals representing coprocessor instructions) to the coprocessor 1545 on a coprocessor bus or other interconnect. The coprocessor(s) 1545 accept and execute the received coprocessor instructions.

Referring now to Figure 16, shown is a block diagram of a first more specific example system 1600. As shown in Figure 16, the multiprocessor system 1600 is a point-to-point interconnect system and includes a first processor 1670 and a second processor 1680 coupled via a point-to-point interconnect 1650. Each of the processors 1670 and 1680 can be some version of the processor 1400. In one embodiment, the processors 1670 and 1680 are the processors 1510 and 1515, respectively, and the coprocessor 1638 is the coprocessor 1545. In another embodiment, the processors 1670 and 1680 are the processor 1510 and the coprocessor 1545, respectively.

The processors 1670 and 1680 are shown including integrated memory controller (IMC) units 1672 and 1682, respectively. The processor 1670 also includes point-to-point (P-P) interfaces 1676 and 1678 as part of its bus controller units; similarly, the second processor 1680 includes P-P interfaces 1686 and 1688. The processors 1670, 1680 can exchange information via the P-P interface 1650 using point-to-point (P-P) interface circuits 1678, 1688. As shown in Figure 16, the IMCs 1672 and 1682 couple the processors to respective memories, namely a memory 1632 and a memory 1634, which may be portions of main memory locally attached to the respective processors.

The processors 1670, 1680 can each exchange information with a chipset 1690 via individual P-P interfaces 1652, 1654 using point-to-point interface circuits 1676, 1694, 1686, 1698. The chipset 1690 can optionally exchange information with the coprocessor 1638 via a high performance interface 1639.
In one embodiment, the coprocessor 1638 is a dedicated processor such as, for example, a high throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected to them via a P-P interconnect, such that if a processor is placed into a low power mode, local cache information of either or both processors may be stored in the shared cache.

The chipset 1690 can be coupled to a first bus 1616 via an interface 1696. In one embodiment, the first bus 1616 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation IO interconnect bus, as a non-limiting example.

As shown in Figure 16, various IO devices 1614 can be coupled to the first bus 1616, along with a bus bridge 1618 that couples the first bus 1616 to a second bus 1620. In one embodiment, one or more additional processors 1615, such as coprocessors, high throughput MIC processors, GPGPUs, accelerators (such as, for example, graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 1616. In one embodiment, the second bus 1620 can be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1620 including, for example, a keyboard and/or mouse 1622, communication devices 1627, and a storage unit 1628, such as a disk drive or other mass storage device that may include instructions or code and data 1630, in one embodiment. Additionally, an audio IO 1624 can be coupled to the second bus 1620. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 16, a system can implement a multi-drop bus or other such architecture.

Referring now to Figure 17, shown is a block diagram of a second more specific example system 1700. Figures 16 and 17 bear like reference numerals, and certain aspects of Figure 16 have been omitted from Figure 17 in order to avoid obscuring other aspects of Figure 17.

Figure 17 shows that the processors 1670, 1680 can include integrated memory and IO control logic ("CL") 1672 and 1682, respectively. Thus, the CL 1672, 1682 include integrated memory controller units and include IO control logic. Figure 17 shows that not only are the memories 1632, 1634 coupled to the CL 1672, 1682, but also that IO devices 1714 are coupled to the control logic 1672, 1682. Legacy IO devices 1715 are coupled to the chipset 1690.

Referring now to Figure 18, shown is a block diagram of a SoC 1800 in accordance with an embodiment. Similar elements in Figure 14 bear like reference numerals. Additionally, dashed boxes are optional features on more advanced SoCs. In Figure 18, an interconnect unit 1802 is coupled to: an application processor 1810 that includes a set of one or more cores 1402A-N and shared cache unit(s) 1406; a system agent unit 1410; bus controller unit(s) 1416; integrated memory controller unit(s) 1414; a set of one or more coprocessors 1820 that may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1830; a direct memory access (DMA) unit 1832; and a display unit 1840 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1820 include a dedicated processor such as, for example, a network or communication processor, a compression engine, a GPGPU, a high throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementations. Some embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and nonvolatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1630 shown in Figure 16, can be applied to input instructions to perform the functions described herein and to generate output information. The output information can be applied to one or more output devices in known fashion. For purposes of this application, a processing system includes any system that has a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code can be implemented in a high level procedural or object oriented programming language to communicate with the processing system. The program code can also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language can be a compiled or interpreted language.

One or more aspects of at least one embodiment can be implemented by representative instructions stored on a machine readable medium, the instructions representing various logic within the processor that, when read by a machine, cause the machine to fabricate logic for performing the techniques described herein. Such representations, known as "IP cores", can be stored on a tangible machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read only memory (CD-ROM), compact disk rewritable (CD-RW), and magneto-optical disks; semiconductor devices such as read only memory (ROM), random access memory (RAM) such as dynamic random access memory (DRAM) and static random access memory (SRAM), erasable programmable read only memory (EPROM), flash memory, electrically erasable programmable read only memory (EEPROM), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, some embodiments also include non-transitory tangible machine readable media containing instructions or containing design data, such as hardware description language (HDL), that defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments are also referred to as program products.

Simulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter can be used to convert an instruction from a source instruction set to a target instruction set.
For example, the instruction converter can translate (e.g., using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter can be implemented in software, hardware, firmware, or a combination thereof. The instruction converter can be on processor, off processor, or part on and part off processor.

Figure 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter can be implemented in software, firmware, hardware, or various combinations thereof. Figure 19 shows that a program in a high level language 1902 can be compiled using an x86 compiler 1904 to generate x86 binary code 1906 that can be natively executed by a processor 1916 with at least one x86 instruction set core. The processor 1916 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as a processor with at least one x86 instruction set core by compatibly executing or otherwise processing: (1) a substantial portion of the instruction set of the x86 instruction set core, or (2) object code versions of applications or other software targeted to run on a processor with at least one x86 instruction set core, in order to achieve substantially the same result as a processor with at least one x86 instruction set core. The x86 compiler 1904 represents a compiler operable to generate x86 binary code 1906 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1916 with at least one x86 instruction set core. Similarly, Figure 19 shows that the program in the high level language 1902 can be compiled using an alternative instruction set compiler 1908 to generate alternative instruction set binary code 1910 that can be natively executed by a processor 1914 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif. and/or the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). The instruction converter 1912 is used to convert the x86 binary code 1906 into code that can be natively executed by the processor 1914 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1910, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set.
Thus, the instruction converter 1912 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device without an x86 instruction set processor or core to execute the x86 binary code 1906.

The foregoing outlines features of several embodiments so that those skilled in the art may better understand various aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments described herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.

All or part of any hardware component disclosed herein can readily be provided in a system on a chip (SoC), including a central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. SoCs can contain digital, analog, mixed-signal, and radio frequency functions, all of which can be provided on a single chip substrate. Other embodiments may include a multichip module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and other semiconductor chips.

As used throughout this specification, the term "processor" or "microprocessor" should be understood to include not only a traditional microprocessor (such as industry-leading x86 and x64 architectures), but also any ASIC, FPGA, microcontroller, digital signal processor (DSP), programmable logic device, programmable logic array (PLA), microcode, instruction set, emulated or virtual machine processor, or any similar "Turing-complete" device, combination of devices, or logic elements (hardware or software) that permit the execution of instructions.

It should also be noted that, in certain embodiments, some of the components may be omitted or consolidated. In a general sense, the arrangements depicted in the figures should be understood as logical divisions, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. It should be noted that countless possible design configurations can be used to achieve the operational objectives outlined herein. Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, and equipment options.

In a general sense, any suitably configured processor can execute instructions associated with data or microcode to achieve the operations detailed herein. Any processor disclosed herein could transform an element or an article (e.g., data) from one state or thing to another state or thing.
In another example, some activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software and/or computer instructions executed by a processor), and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine readable media suitable for storing electronic instructions, or any suitable combination thereof.

In operation, a storage device may store information in any suitable type of tangible, non-transitory storage medium (e.g., random access memory (RAM), read only memory (ROM), field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), or microcode), software, hardware (e.g., processor instructions or microcode), or in any other suitable component, device, element, or object, where appropriate and based on particular needs. Furthermore, the information being tracked, sent, received, or stored in a processor could be provided in any database, register, table, cache, queue, control list, or storage structure, based on particular needs and implementations, all of which could be referenced in any suitable timeframe. Any of the memory or storage elements disclosed herein should be construed as being encompassed within the broad terms 'memory' and 'storage', as appropriate. A non-transitory storage medium herein is expressly intended to include any non-transitory special-purpose or programmable hardware configured to provide the disclosed operations, or to cause a processor to perform the disclosed operations. A non-transitory storage medium also expressly includes a processor having stored thereon hardware-coded instructions, and optionally microcode instructions or sequences encoded in hardware, firmware, or software.

Computer program logic implementing all or part of the functionality described herein is embodied in various forms, including, but in no way limited to, hardware description language, source code form, computer executable form, machine instructions or microcode, programmable hardware, and various intermediate forms (for example, forms generated by an HDL processor, assembler, compiler, linker, or locator). In an example, source code includes a series of computer program instructions implemented in various programming languages, such as object code, assembly language, or high-level languages such as OpenCL, FORTRAN, C, C++, JAVA, or HTML for use with various operating systems or operating environments, or in hardware description languages such as Spice, Verilog, and VHDL. The source code may define and use various data structures and communication messages. The source code may be in computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into computer executable form, or converted to an intermediate form such as byte code.
Where appropriate, any of the foregoing may be used to build or describe appropriate discrete or integrated circuits, whether sequential, combinatorial, state machine, or otherwise.

In one example, any number of the circuits of the figures can be implemented on a board of an associated electronic device. The board can be a general purpose circuit board that holds various components of the internal electronic system of the electronic device and further provides connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processor and memory can be suitably coupled to the board based on particular configuration needs, processing demands, and computing designs. Other components, such as external storage, additional sensors, controllers for audio/video display, and peripheral devices, may be attached to the board as plug-in cards via cables, or integrated into the board itself. In another example, the circuits of the figures can be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application-specific hardware of electronic devices.

Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated or reconfigured in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the figures may be combined in various possible configurations, all of which are within the broad scope of this disclosure. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by referencing only a limited number of electrical elements. It should be appreciated that the circuits of the figures and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the circuits as potentially applied to a myriad of other architectures.

Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by those skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. To assist the U.S.
Patent and Trademark Office (USPTO), and additionally any readers of any patent issued on this application, in interpreting the claims appended hereto, the Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph (f) of 35 U.S.C. Section 112 as it exists on the date of the filing hereof unless the words "means for" or "steps for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any manner that is not otherwise reflected in the appended claims.

Example embodiments

In one example, a processor is disclosed, comprising: an execution unit including branching circuitry; a branch predictor including an HTP branch filter for identifying a hard-to-predict (HTP) branch; and a special branch predictor for receiving an identification of the HTP branch from the HTP branch filter, the special branch predictor including a convolutional neural network (CNN) branch predictor for predicting a branch action for the HTP branch.

Further disclosed is an example of a processor, wherein the special branch predictor comprises a coprocessor or a field programmable gate array.

Further disclosed is an example of a processor, wherein the special branch predictor is an on-die circuit block.

Further disclosed is an example of a processor, wherein the special branch predictor employs simplified one-hot binary circuitry.

Further disclosed is an example of a processor, wherein the special branch predictor comprises a two-layer CNN.

Further disclosed is an example of a processor, wherein the special branch predictor comprises a binary one-dimensional (1D) convolution layer and a fully connected binary layer.

Further disclosed is an example of a processor, wherein the 1D convolution layer is configured to receive an incoming (program counter (PC), direction) pair, mask the incoming pair, use the masked bits as an index into a filter response table, and return an L-bit vector as a response.

Further disclosed is an example of a processor, wherein the 1D convolution layer is further to push the response into an N x L-bit first in, first out (FIFO) buffer.

Further disclosed is an example of a processor, wherein the fully connected binary layer is to XOR the contents of the FIFO buffer with binary linear layer weights and count the number of resulting 1s as an integer total.

Further disclosed is an example of a processor, wherein the fully connected binary layer is further to compare the integer total to a threshold to generate a taken or not-taken branch prediction.

Further disclosed is an example of a processor, wherein the special branch predictor is to receive metadata from a trained CNN.

Further disclosed is an example of a processor, wherein the special branch predictor further includes a CNN-assisted predictor.

Also disclosed is an example of a system on a chip, comprising: input-output circuitry; a memory for housing a program, the program comprising a branch; and a processor comprising: an execution unit including branching circuitry; a branch predictor including an HTP branch filter for identifying a hard-to-predict (HTP) branch; and a special branch predictor for receiving an identification of the HTP branch from the HTP branch filter, the special branch predictor including a convolutional neural network (CNN) branch predictor for predicting a branch action for the HTP branch.
Further disclosed is an example of a system on a chip, wherein the special branch predictor comprises a coprocessor or a field programmable gate array.

Further disclosed is an example of a system on a chip, wherein the special branch predictor is an on-die circuit block.

Further disclosed is an example of a system on a chip, wherein the special branch predictor employs simplified one-hot binary circuitry.

Further disclosed is an example of a system on a chip, wherein the special branch predictor comprises a two-layer CNN.

Further disclosed is an example of a system on a chip, wherein the special branch predictor comprises a binary 1D convolution layer and a fully connected binary layer.

Further disclosed is an example of a system on a chip, wherein the 1D convolution layer is configured to receive an incoming (program counter (PC), direction) pair, mask the incoming pair, use the masked bits as an index into a filter response table, and return an L-bit vector as a response.

Further disclosed is an example of a system on a chip, wherein the 1D convolution layer is further to push the response into an N x L-bit first in, first out (FIFO) buffer.

Further disclosed is an example of a system on a chip, wherein the fully connected binary layer is to XOR the contents of the FIFO buffer with binary linear layer weights and count the number of resulting 1s as an integer total.

Further disclosed is an example of a system on a chip, wherein the fully connected binary layer is further to compare the integer total to a threshold to generate a taken or not-taken branch prediction.

Further disclosed is an example of a system on a chip, wherein the special branch predictor is to receive metadata from a trained CNN.

Further disclosed is an example of a system on a chip, wherein the special branch predictor further includes a CNN-assisted predictor.

Also disclosed is an example of a computer-implemented method of performing hard-to-predict (HTP) branch prediction, the method comprising: applying a branch filter to branching circuitry to identify an HTP branch; and predicting a branch action for the HTP branch according to a convolutional neural network (CNN) algorithm.

Further disclosed is an example of a computer-implemented method, wherein the CNN algorithm includes simplified one-hot binary circuitry.

Further disclosed is an example of a computer-implemented method, wherein the CNN algorithm is a two-layer CNN algorithm.

Further disclosed is an example of a computer-implemented method, wherein the two-layer CNN algorithm includes a binary 1D convolution layer and a fully connected binary layer.

Further disclosed is an example of a computer-implemented method, wherein the 1D convolution layer is to receive an incoming (program counter (PC), direction) pair, mask the incoming pair, use the masked bits as an index into a filter response table, and return an L-bit vector as a response.

Further disclosed is an example of a computer-implemented method, wherein the 1D convolution layer is further to push the response into an N x L-bit first in, first out (FIFO) buffer.

Further disclosed is an example of a computer-implemented method, wherein the fully connected binary layer is to XOR the contents of the FIFO buffer with binary linear layer weights and count the number of resulting 1s as an integer total.
Further disclosed is an example of a computer-implemented method, the method further comprising comparing the integer total to a threshold to generate a taken or not-taken branch prediction.

Further disclosed is an example of a computer-implemented method, the method further comprising training the CNN algorithm based on metadata from a trained CNN.

Further disclosed is an example of an apparatus comprising means for performing the method as described in the preceding examples.

Further disclosed is an example of an apparatus, wherein the means comprise a microprocessor, the microprocessor including a special branch predictor.

Further disclosed is an example of an apparatus, wherein the special branch predictor includes an on-die circuit block.

Further disclosed is an example of an apparatus, wherein the special branch predictor comprises a coprocessor or a field programmable gate array.

Further disclosed is an example of a system on a chip comprising the apparatus as described in the preceding examples.

Further disclosed is an example of an apparatus, the apparatus further including a CNN-assisted predictor.

Also disclosed is an example of a method of performing branch prediction, the method comprising: identifying a hard-to-predict (HTP) branch of a program; and accessing a convolutional neural network (CNN) branch predictor to predict a branch action for the HTP branch.

Further disclosed is an example of a method, wherein accessing the CNN branch predictor includes employing simplified one-hot binary circuitry.

Further disclosed is an example of a method, wherein the CNN branch predictor comprises a two-layer CNN.

Further disclosed is an example of a method, wherein the CNN branch predictor comprises a binary 1D convolution layer and a fully connected binary layer.

Further disclosed is an example of a method, wherein the 1D convolution layer is configured to receive an incoming (program counter (PC), direction) pair, mask the incoming pair, use the masked bits as an index into a filter response table, and return an L-bit vector as a response.

Further disclosed is an example of a method, wherein the 1D convolution layer is further to push the response into an N x L-bit first in, first out (FIFO) buffer.

Further disclosed is an example of a method, wherein the fully connected binary layer is to XOR the contents of the FIFO buffer with binary linear layer weights and count the number of resulting 1s as an integer total.

Further disclosed is an example of a method, wherein the fully connected binary layer is further to compare the integer total to a threshold to generate a taken or not-taken branch prediction.

Further disclosed is an example of a method, the method further comprising receiving metadata from a trained CNN.

Further disclosed is an example of a method, wherein the CNN branch predictor further comprises a CNN-assisted predictor.

Further disclosed is an example of an apparatus comprising means for performing the method as described in the preceding examples.

Further disclosed is an example of an apparatus, wherein the means for performing the method comprise a processor, the processor including a branch predictor and a special branch predictor, the special branch predictor including the CNN branch predictor.

Further disclosed is an example of an apparatus, wherein the special branch predictor is a coprocessor.

Further disclosed is an example of an apparatus, wherein the special branch predictor is a hardware accelerator.

Further disclosed is an example of an apparatus, wherein the apparatus is a computing system.

Further disclosed is an example of at least one computer readable medium comprising instructions that, when
executed, implement a method as described in the preceding examples or realize an apparatus as described in the preceding examples. |
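To make the binary CNN pipeline of the preceding examples concrete, the following Python sketch traces the described flow: mask an incoming (PC, direction) pair, index a filter response table, push the L-bit response into an N x L-bit FIFO, XOR the FIFO contents against the binary linear-layer weights, count the resulting 1s, and compare the total to a threshold. The class name, table contents, widths, mask, and threshold are illustrative assumptions, not values taken from the disclosure.

```python
from collections import deque

class BinaryCnnBranchPredictor:
    """Illustrative two-layer binary CNN predictor following the prose above."""

    def __init__(self, filter_table, weights, n=32, l=8, threshold=128, mask=0xFFF):
        self.filter_table = filter_table  # masked (PC, direction) bits -> L-bit response
        self.weights = weights            # N*L binary weights of the fully connected layer
        self.n, self.l = n, l
        self.threshold = threshold
        self.mask = mask
        self.fifo = deque([0] * n, maxlen=n)  # N entries of L-bit responses

    def record(self, pc, taken):
        # Binary 1D convolution layer: mask the incoming (PC, direction)
        # pair and use the masked bits to index the filter response table.
        index = ((pc << 1) | int(taken)) & self.mask
        response = self.filter_table[index] & ((1 << self.l) - 1)
        self.fifo.append(response)  # push into the N x L-bit FIFO

    def predict(self):
        # Fully connected binary layer: XOR the FIFO contents with the
        # binary linear-layer weights and count the resulting 1 bits.
        history = 0
        for response in self.fifo:
            history = (history << self.l) | response
        total = bin(history ^ self.weights).count("1")
        # Compare the integer total to a threshold: taken or not taken.
        return total >= self.threshold

# Placeholder instantiation; a real predictor would load trained metadata.
predictor = BinaryCnnBranchPredictor(filter_table=[0] * (1 << 12), weights=0)
predictor.record(pc=0x4010A0, taken=True)
print(predictor.predict())
```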
Particular embodiments described herein can offer a method that includes receiving a signal indicating whether at least one device is in a low power mode, determining that the at least one device is in a first thermally benign state based, at least in part, on the signal, and performing a first operation associated with reduced thermal remediation power consumption. |
1. A method for reducing power consumption, comprising: receiving a signal indicating whether at least one device is in a low power mode; determining that the at least one device is in a first thermally benign state based, at least in part, on the signal; performing a first operation associated with causing reduced thermal remediation power consumption; determining that the at least one device is in a second thermally benign state based, at least in part, on the signal, wherein the second thermally benign state is associated with less heat than the first thermally benign state; performing a second operation associated with causing reduced thermal remediation power consumption, wherein the second operation is associated with a reduction of power consumption associated with thermal remediation that is greater than the reduction of power consumption associated with the first operation; determining that the at least one device is in a third thermally benign state based, at least in part, on the signal, wherein the third thermally benign state is associated with less heat than the second thermally benign state; and causing reduced power consumption associated with a cooling device, wherein the first operation involves reducing a sampling frequency associated with a thermal sensor, and the second operation involves disabling the thermal sensor.

2. The method of claim 1, wherein the at least one device comprises at least one of a processor or a controller hub.

3. The method of claim 1, wherein determining that the at least one device is in the first thermally benign state comprises determining that a low power duty cycle of the signal exceeds a threshold duty cycle.

4. The method of claim 1, wherein the first operation involves reducing power consumption associated with at least one of: a software module associated with monitoring thermal sensor information, a thermal sensor, or a cooling device.

5. The method of claim 1, wherein the first operation involves reducing a sampling frequency associated with the thermal sensor.

6. The method of claim 1, further comprising: receiving thermal sensor information; and determining that the thermal sensor information indicates a temperature within a predetermined temperature threshold.

7. An apparatus for reducing power consumption, comprising logic, the logic at least partially including hardware logic, to: receive a signal indicating whether at least one device is in a low power mode; determine that the at least one device is in a first thermally benign state based, at least in part, on the signal; perform a first operation associated with causing reduced thermal remediation power consumption; determine that the at least one device is in a second thermally benign state based, at least in part, on the signal, wherein the second thermally benign state is associated with less heat than the first thermally benign state; perform a second operation associated with causing reduced thermal remediation power consumption, wherein the second operation is associated with a reduction of power consumption associated with thermal remediation that is greater than the reduction of power consumption associated with the first operation; determine that the at least one device is in a third thermally benign state based, at least in part, on the signal, wherein the third thermally benign state is associated with less heat than the second thermally benign state; and cause reduced power consumption associated with a cooling device, wherein the first operation involves reducing a sampling frequency associated with a thermal sensor, and wherein the
second operation involves disabling the thermal sensor.

8. The apparatus of claim 7, wherein the at least one device comprises at least one of a processor or a controller hub.

9. The apparatus of claim 7, wherein determining that the at least one device is in the first thermally benign state comprises determining that a low power duty cycle of the signal exceeds a threshold duty cycle.

10. The apparatus of claim 7, wherein the first operation involves reducing power consumption associated with at least one of: a software module associated with monitoring thermal sensor information, a thermal sensor, or a cooling device.

11. The apparatus of claim 7, wherein the first operation involves reducing a sampling frequency associated with the thermal sensor.

12. The apparatus of claim 7, further comprising logic, the logic at least partially including hardware logic, to: receive thermal sensor information; and determine that the thermal sensor information indicates a temperature within a predetermined temperature threshold.

13. A system for reducing power consumption, comprising at least one controller and at least one device, the controller including logic, the logic at least partially including hardware logic, to: receive, at the controller, a signal indicating whether the at least one device is in a low power mode; determine, at at least a portion of the controller, that the at least one device is in a first thermally benign state based, at least in part, on the signal; perform, at the controller, a first operation associated with causing reduced thermal remediation power consumption; determine, at at least a portion of the controller, that the at least one device is in a second thermally benign state based, at least in part, on the signal, wherein the second thermally benign state is associated with less heat than the first thermally benign state; perform, at the controller, a second operation associated with causing reduced thermal remediation power consumption, wherein the second operation is associated with a reduction of power consumption associated with thermal remediation that is greater than the reduction of power consumption associated with the first operation; determine, at at least a portion of the controller, that the at least one device is in a third thermally benign state based, at least in part, on the signal, wherein the third thermally benign state is associated with less heat than the second thermally benign state; and cause, at the controller, reduced power consumption associated with a cooling device, wherein the first operation involves reducing a sampling frequency associated with a thermal sensor, and wherein the second operation involves disabling the thermal sensor.

14. The system of claim 13, wherein the at least one device comprises at least one of a processor or a controller hub.

15. The system of claim 13, wherein determining that the at least one device is in the first thermally benign state comprises determining that the low power duty cycle of the signal exceeds a threshold duty cycle.

16. The system of claim 13, wherein the first operation involves reducing power consumption associated with at least one of: a software module associated with monitoring thermal sensor information, a thermal sensor, or a cooling device.

17. The system of claim 13, wherein the first operation involves reducing a sampling frequency associated with the thermal sensor.

18. A device for reducing power consumption, comprising: means for receiving a signal indicating whether at least one device is in a low power mode; means for determining that the at least one device is in a first
thermally benign state based, at least in part, on the signal; means for performing a first operation associated with causing reduced thermal remediation power consumption; means for determining that the at least one device is in a second thermally benign state based, at least in part, on the signal, wherein the second thermally benign state is associated with less heat than the first thermally benign state; means for performing a second operation associated with causing reduced thermal remediation power consumption, wherein the second operation is associated with a reduction of power consumption associated with thermal remediation that is greater than the reduction of power consumption associated with the first operation; means for determining that the at least one device is in a third thermally benign state based, at least in part, on the signal, wherein the third thermally benign state is associated with less heat than the second thermally benign state; and means for causing reduced power consumption associated with a cooling device, wherein the first operation involves reducing a sampling frequency associated with a thermal sensor, and the second operation involves disabling the thermal sensor. |
System and method for causing power consumption reduction associated with thermal remediation

Technical field

Embodiments described herein generally relate to allowing for power savings in a processor environment.

Background

As electronic devices become more complex and more ubiquitous in the daily lives of users, more and more diverse requirements are placed upon them. For example, many electronic devices can operate on battery power, thereby allowing users to operate these devices in many different circumstances. Additionally, as the capabilities of electronic devices become more extensive, many users may become reliant on the enhanced performance such capabilities provide. As these aspects of electronic devices have evolved, there has become an increasing need for reduced power consumption. However, as the capabilities of electronic devices increase, the amount of heat generated by electronic devices also increases. Many electronic devices include mechanisms for thermal remediation of the generated heat. It may be desirable to control thermal remediation in a manner that reduces power consumption while still allowing for thermal remediation.

BRIEF DESCRIPTION OF THE DRAWINGS

The various embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:

FIG. 1 is a block diagram showing components associated with thermal remediation of a device, in accordance with at least one example embodiment;

FIG. 2 is a timing diagram showing a signal indicating whether at least one device is in a low power mode, in accordance with at least one example embodiment;

FIG. 3 is another timing diagram showing a signal indicating whether at least one device is in a low power mode, in accordance with at least one example embodiment;

FIG. 4 is a flow diagram showing a set of operations for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment;

FIG. 5 is another flow diagram showing a set of operations for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment;

FIG. 6 is yet another flow diagram showing a set of operations for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment;

FIG. 7 is yet another flow diagram showing a set of operations for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment;

FIG. 8 is yet another flow diagram showing a set of operations for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment;

FIG. 9 is a simplified block diagram associated with an example ARM ecosystem system on a chip (SOC) of the present disclosure; and

FIG. 10 is a simplified block diagram showing example logic that may be used to execute activities associated with the present disclosure.

The figures of the drawings are not necessarily drawn to scale.

Detailed Description of Example Embodiments

The following detailed description sets forth example embodiments of apparatuses, methods, and systems relating to allowing for power savings in a processor environment.
For example, features such as structure(s), function(s), and/or characteristic(s) may be described in connection with one embodiment for convenience; various embodiments may be implemented with any suitable one or more of the described features.

In at least one embodiment, a method is provided, the method comprising receiving a signal indicating whether at least one device is in a low power mode; determining that the at least one device is in a first thermally benign state based, at least in part, on the signal; and performing a first operation associated with causing reduced thermal remediation power consumption. In a more specific embodiment, the at least one device comprises at least one of a processor or a controller hub. Additionally, determining that the at least one device is in the first thermally benign state includes determining that a low power duty cycle of the signal exceeds a threshold duty cycle. The first operation may involve reducing power consumption associated with at least one of: a software module associated with monitoring thermal sensor information, a thermal sensor, or a cooling device. The first operation may also involve reducing a sampling frequency associated with the thermal sensor. The method can also include receiving thermal sensor information, and determining that the thermal sensor information indicates a temperature within a predetermined temperature threshold.

FIG. 1 is a block diagram showing components associated with thermal remediation of a device 104, in accordance with at least one example embodiment. The example of FIG. 1 is merely an example of components associated with thermal remediation of a device, and does not limit the scope of the claims. For example, the operations attributed to a component can vary, the number of components can vary, the composition of a component can vary, and so on. For example, in some example embodiments, operations attributed to one component of the example of FIG. 1 may be allocated to one or more other components.

The example of FIG. 1 shows a controller 102, a thermal sensor 106, and a cooling device 108 in communication with the device 104. The controller 102 can be any type of controller, such as the power management controller 1118 of FIG. 10, the power control 1055 of FIG. 9, and/or the like. In at least one example embodiment, the controller 102 is an embedded controller, a thermal system management controller (SMC), or the like. The device 104 can be any type of electronic device. In at least one example embodiment, the device 104 is a processor (such as the processor 1104 of FIG. 10), a controller (such as the display controller 1112 of FIG. 10), a storage system (such as the storage system 1108 of FIG. 10), a platform controller hub (PCH), an input/output controller hub (ICH), or the like. In at least one example embodiment, the device 104 is a system on a chip, such as the ARM ecosystem SOC 1000 of FIG. 9. The thermal sensor 106 can be any type of sensor capable of providing thermal sensor information, such as temperature information. In at least one example embodiment, the thermal sensor 106 is associated with the device 104. For example, the thermal sensor 106 can be thermally coupled to the device 104 such that the thermal sensor 106 can provide thermal sensor information indicative of the temperature of the device 104. The cooling device 108 can be any cooling device that can cause a reduction in temperature. In at least one example embodiment, the cooling device 108 is associated with the device 104.
For example, the cooling device 108 can be coupled to the device 104 such that the cooling device 108 can cause the temperature of the device 104 to decrease. For example, the cooling device can include a fan, a liquid cooling element, and/or the like. The cooling device 108 may be thermally coupled to the device 104.

In at least one example embodiment, the thermal sensor 106 and the cooling device 108 are associated with thermal remediation. For example, the controller 102 can monitor thermal information received from the thermal sensor 106 to determine whether the device 104 is at a desired temperature. The controller 102 can control operation of the cooling device 108 to reduce the temperature of the device 104 based, at least in part, on the thermal information received from the thermal sensor 106. For example, the controller 102 can enable the cooling device 108 if the controller 102 determines that the temperature indicated by the thermal sensor information exceeds a threshold. Accordingly, the control, use, and/or operation of the cooling device 108 and the thermal sensor 106 may be referred to as thermal remediation.

Even though the example of FIG. 1 shows a single controller, a single device 104, a single thermal sensor 106, and a single cooling device 108, there may be multiple controllers, devices, thermal sensors, and/or cooling devices. Additionally, a controller can communicate with one or more devices, a thermal sensor can be associated with one or more devices, and a cooling device can be associated with one or more devices.

In at least one example embodiment, the controller 102 controls the thermal sensor 106 and receives thermal sensor information from the thermal sensor 106. For example, the controller 102 can include one or more software modules associated with controlling the thermal sensor 106 and/or receiving thermal sensor information from the thermal sensor 106. The controller 102 can sample thermal sensor information from the thermal sensor 106 at various times. For example, the controller 102 can sample thermal sensor information periodically. The frequency at which the controller 102 samples thermal sensor information from the thermal sensor 106 may be referred to as a sampling frequency. The controller 102 can control power used to enable operation of the thermal sensor 106. For example, the controller 102 can cause the thermal sensor 106 to be powered at a sampling time so that thermal sensor information can be provided, and cause the thermal sensor 106 to be unpowered at non-sampling times.

It should be understood that there may be power consumption associated with the controller 102 sampling thermal information from the thermal sensor 106. For example, there may be power consumption associated with operation of a software module (e.g., a software module within the controller 102) associated with sampling thermal sensor information from the thermal sensor 106. In another example, there may be power consumption associated with sampling thermal information from the thermal sensor 106, such as in performing signal conversion. In still another example, there may be power consumption associated with enabling receipt of thermal information from the thermal sensor 106, such as the power consumed by providing power to the thermal sensor.

In at least one example embodiment, the controller 102 controls the cooling device 108. For example, the controller 102 can enable and/or disable the cooling device 108, can control the amount of cooling applied by the cooling device 108, and/or the like.
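As a purely illustrative sketch of the sampling behavior described above, the following Python fragment powers a thermal sensor only at sampling times and drives a cooling device from the sampled temperature. The sensor and cooling device interfaces (power_on, read_temperature, enable, and so on) and the constant values are hypothetical stand-ins, not interfaces defined by this disclosure.

```python
import time

SAMPLING_PERIOD_S = 1.0   # reciprocal of the sampling frequency (illustrative)
TEMP_THRESHOLD_C = 85.0   # illustrative remediation threshold

def sample_and_remediate(sensor, cooling_device):
    # Power the thermal sensor only at the sampling time, leaving it
    # unpowered between samples to reduce power consumption.
    sensor.power_on()
    temperature = sensor.read_temperature()
    sensor.power_off()
    # Enable the cooling device only when the sampled temperature
    # exceeds the threshold, as in the FIG. 1 discussion.
    if temperature > TEMP_THRESHOLD_C:
        cooling_device.enable()
    else:
        cooling_device.disable()

def sampling_loop(sensor, cooling_device, samples=10):
    for _ in range(samples):
        sample_and_remediate(sensor, cooling_device)
        time.sleep(SAMPLING_PERIOD_S)
```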
In at least one example embodiment, the cooling device 108 can be controlled such that the cooling device 108 can vary the amount of cooling it performs. For example, if the cooling device 108 includes a fan, the fan speed can be varied to change the amount of cooling. In another example, if the cooling device 108 includes a liquid cooling element, the circulation of the liquid can be varied to change the amount of cooling. It should be understood that there may be power consumption associated with operation of the cooling device 108. For example, there may be power consumption associated with operation of a software module (e.g., a software module within the controller 102) associated with enabling operation of the cooling device 108. In another example, there may be power consumption associated with operation of the cooling device 108 itself, such as power for rotating a fan, power for circulating liquid, and/or the like.

In at least one example embodiment, the controller 102 operates independently of operating system software. For example, the controller 102 can operate by way of firmware, a device driver, motherboard logic, and/or the like. In such circumstances, the controller 102 can perform operations absent control of operating system software.

In an example embodiment, the device 104 may provide a signal indicating whether the device 104 is in a low power mode. In at least one example embodiment, the controller 102 receives the signal indicating whether the device 104 is in a low power mode. The low power mode may relate to an operational mode of the device 104 characterized by reduced power relative to a normal power mode. For example, the low power mode may relate to a power state of the device 104 associated with less than full operation. In such an example, the low power mode may relate to a power state above S0, above C0, and/or the like. In another example, the low power mode may relate to a mode in which activity of the device 104 is reduced such that power consumed by the device 104 is reduced. In at least one example embodiment, the signal is a logical signal received as an electrical signal. For example, the signal can be provided from an electronic output of the device 104 and can be received by the controller 102 as an electronic input. The controller 102 can receive the signal continuously.

It should be understood that as the device 104 performs more activity, the device 104 may increase in temperature. Therefore, as the device 104 performs more operations, the device 104 may have a greater need for thermal remediation. Conversely, there may be operational circumstances of the device 104 associated with performing sufficiently little activity that the activity fails to result in a temperature increase of the device 104. For example, the device 104 can perform operations such that the amount of heat associated with the operations is less than or equal to the amount of heat the device dissipates absent thermal remediation. Such an operational circumstance can be referred to as a thermally benign state. In at least one example embodiment, a thermally benign state relates to a state of the device in which the device is not performing activity to an extent that may result in a temperature increase. In at least one example embodiment, a low power state is a thermally benign state.

It may be desirable to reduce power consumption associated with thermal remediation of a device, such as the device 104, while the device is operating in a thermally benign state.
For example, when the device is operating in a thermally benign state, the device can be sufficiently cooled without the aid of a cooling device, such as the cooling device 108. In another example, when the device is operating in a thermally benign state, there may be no need to monitor temperature frequently, or to monitor temperature at all, due to the absence of temperature-increasing activity. Thermal remediation power consumption that fails to take account of a low power mode of the device and/or a thermally benign state of the device may be referred to as standard thermal remediation power consumption. For example, standard thermal remediation power consumption can relate to standard cooling device operation and a standard thermal sensor sampling frequency.

FIG. 2 is a timing diagram showing a signal 200 indicating whether at least one device is in a low power mode, in accordance with at least one example embodiment. The example of FIG. 2 is merely an example of a signal indicating whether at least one device is in a low power state, and does not limit the scope of the claims. For example, the signal level associated with the low power mode may vary, the number of signals indicating the low power mode may vary, the granularity of the low power modes represented by the signal may vary, and so on.

In at least one example embodiment, the signal may indicate the low power mode by being in an active state. In such circumstances, a device, such as the device 104 of FIG. 1, may provide a signal that, when active, indicates that the device may be in a low power mode, and that, when inactive, indicates that the device may be in a mode other than the low power mode. Even though the example of FIG. 2 is described in terms of a signal in which a high level is associated with active and a low level is associated with inactive, other examples may vary in this regard.

In the example of FIG. 2, the signal 200 includes inactive signal portions 202, 206, 210, 214, and 218. The signal 200 also includes active signal portions 204, 208, 212, and 216. In at least one example embodiment, the active signal portions 204, 208, 212, and 216 indicate that the device is in a low power mode, and the inactive signal portions 202, 206, 210, 214, and 218 indicate that the device is in a mode other than a low power mode. In at least one example embodiment, the signal 200 is a continuous signal provided throughout operation of the associated device. In at least one example embodiment, the controller may determine that an active signal portion corresponds to a thermally benign state of the one or more devices from which the signal 200 is received.

FIG. 3 is another timing diagram showing a signal indicating whether at least one device is in a low power mode, in accordance with at least one example embodiment. The example of FIG. 3 is merely an example of a signal indicating whether at least one device is in a low power state, and does not limit the scope of the claims. For example, the signal level associated with the low power mode may vary, the number of signals indicating the low power mode may vary, the granularity of the low power modes represented by the signal may vary, and so on. Even though the example of FIG. 3 is described in terms of a signal in which a high level is associated with active and a low level is associated with inactive, other examples may vary in this regard.

In at least one example embodiment, it may be desirable to evaluate a signal indicative of a low power mode with respect to time.
For example, a device such as device 104 of Figure 1 can enter and exit the low power mode frequently, quickly (and so on). In some cases, the thermal state of the device may not change immediately when entering low power mode. Under these circumstances, it may be desirable to characterize the low power mode with respect to time. For example, it may be desirable to characterize the low power mode of the device as the percentage of time that the signal indicates a low power mode over a time interval. This percentage can be referred to as the duty cycle. Without limiting the claims in any way, at least one technical advantage associated with evaluating a signal indicative of a low power mode with respect to time is to allow the controller to reduce the number of times a heat remedial change is made based on the signal.Moreover, it should be understood that the operations associated with changing thermal remediation may correspond to power consumption. Therefore, it may be desirable to avoid changing the heat remedy at this frequency of increasing power consumption.The example of FIG. 3 shows signal 300 with respect to time interval 304. In at least one example embodiment, a controller, such as controller 102 of FIG. 1, may evaluate signal 300 with respect to time interval 304. The time interval 304 can be based on the time associated with a beneficial change in heat remediation. For example, the time associated with a beneficial change in thermal remediation may involve a power length that is long enough to modify the thermal remediation at each time interval to be associated with a thermal remediation that is less than or equal to a mode corresponding to a mode other than the low power mode. time. In the example of FIG. 3, signal 300 becomes active and inactive at various times within time interval 304. In the example of FIG. 3, signal 300 is active for approximately 55% of the time during time interval 304. This effective can involve a 55% duty cycle. In at least one example embodiment, the effective duration may be measured by recording the amount of time between a transition to an active state and a transition to an inactive state (eg, using signal edge detection).4 is a flow diagram showing a set of operations 400 for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment. This set of operations 400 can be utilized by a device, such as the example system of FIG. 10 or a portion thereof. The apparatus can include means for performing the operations of FIG. 4, including, for example, processor 1104 of FIG. In an example embodiment, a device (eg, system 1100 of FIG. 10) is transformed by having a memory (eg, system memory 1108 of FIG. 10) that includes computer code configured to interact with a processor (eg, FIG. 10) The example processor 1104) works together to cause the device to perform the set of operations 400. In at least one example embodiment, the set of operations 400 is performed independently of operating system software.At block 402, the device receives a signal indicating whether the at least one device is in a low power mode. This reception can be similar to the reception described with respect to FIG. This signal can be similar to the signal described with respect to Figures 1-3. At block 404, the device determines whether the at least one device is in a thermally benign state based at least in part on the signal. This thermally benign state can be similar to the thermally benign state described with respect to FIG. 
Determining whether the device is in a thermally benign state can include evaluating the signal against a predefined criterion associated with the thermal benign operation of the device. For example, the device may have a particular low power mode duty cycle above which the device is in a thermally benign state. In this example, the device can determine that the device is in a thermally benign state by determining that the low power mode duty cycle of the signal exceeds a threshold duty cycle value. This threshold duty cycle value may correspond to the particular low power mode duty cycle, which is greater than the particular low power mode duty cycle when the device is in a thermally benign state. This threshold can be different on different devices. This threshold can be determined by the design characteristics of the device, the manufacturing characteristics of the device, the testing of the device, and the like. At block 404, if the device determines that at least one device is in a thermally benign state, then flow continues to block 406. Otherwise, the flow returns to block 402.At block 406, the device performs operations associated with causing a decrease in thermal remediation power consumption. In at least one example embodiment, the reduced power consumption relates to power consumption that is less than standard thermal remediation power consumption similar to the standard thermal remediation power consumption described with reference to FIG. This operation may involve operations associated with controlling devices associated with thermal remediation. The device associated with the heat remedy may be a thermal sensor (such as thermal sensor 106 of Figure 1), a cooling device (such as cooling device 108 of Figure 1), and the like. This operation may involve causing a reduction in operations associated with monitoring the thermal sensor information associated with the software module. This operation can be associated with a thermal sensor. For example, the operation may involve reducing the sampling frequency associated with the thermal sensor, eliminating sampling associated with the thermal sensor, reducing power to the thermal sensor, eliminating power to the thermal sensor, and the like. This operation can be associated with a cooling device. For example, the operation may involve reducing the amount of cooling performed by the cooling device, reducing the power provided to the cooling device, eliminating cooling performed by the cooling device, eliminating power supplied to the cooling device, and the like. In at least one example embodiment, thermal remediation is associated with a device that signals its low power mode at block 402, similar to that described with respect to FIG. In at least one example embodiment, the device may perform the operations of block 406 in response to determining that the at least one device is in a thermally benign state.FIG. 5 is another flow diagram showing a set of operations for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment. This set of operations 500 can be utilized by a device, such as the example system of FIG. 10 or a portion thereof. The apparatus can include means for performing the operations of FIG. 5, including, for example, processor 1104 of FIG. In an example embodiment, a device (eg, system 1100 of FIG. 10) is transformed by having a memory (eg, system memory 1108 of FIG. 10) that includes computer code configured to interact with a processor (eg, FIG. 
10) The example processor 1104) works together to cause the device to perform the set of operations 500. In at least one example embodiment, the set of operations 500 is performed independently of operating system software.The example of FIG. 5 illustrates performing an operation associated with reduced thermal remediation power consumption while the device is in a thermally benign state, and performing performance associated with unreduced power consumption if the device is not in a thermally benign state An example of an operation. In at least one example embodiment, the unreduced thermal remediation power consumption corresponds to standard thermal remediation power consumption. Operations associated with standard power consumption may involve thermal sensors and/or cooling devices. The operations associated with standard power consumption involving the thermal sensor may be operations that result in enabling sampling associated with the thermal sensor, resulting in increased sampling frequency associated with the thermal sensor, resulting in enabling power to the thermal sensor, and the like. The operations involving the cooling device associated with the standard power consumption may be operations that result in an increase in the amount of cooling performed, an increase in power provided to the cooling device, activation of cooling by the cooling device, permission to provide power to the cooling device, and the like.At block 502, the device receives a signal indicating whether the at least one device is in a low power mode, similar to that described with reference to block 402 of FIG. At block 504, the device determines whether the at least one device is in a thermally benign state based at least in part on the signal, similar to that described with respect to block 404 of FIG. At block 504, if the device determines that the at least one device is in a thermally benign state, then flow continues to block 506. Otherwise, the flow returns to block 502. At block 506, the device performs operations associated with causing reduced thermal remediation power consumption, similar to that described with reference to block 406 of FIG.At block 508, the device receives a signal indicating whether the at least one device is in a low power mode, similar to that described with reference to block 502. At block 510, the device determines whether the at least one device is in a thermally benign state based at least in part on the signal, similar to that described with reference to block 504. At block 510, if the device determines that at least one device is in a thermally benign state, then flow returns to block 508. Otherwise, the flow continues to block 512. At block 512, the device performs operations associated with unreduced thermal remediation power consumption.FIG. 6 is yet another flow diagram showing a set of operations 600 for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment. This set of operations 600 can be utilized by a device, such as the example system of FIG. 10 or a portion thereof. The apparatus can include means for performing the operations of FIG. 6, including, for example, the processor 1104 of FIG. In an example embodiment, a device (eg, system 1100 of FIG. 10) is transformed by having a memory (eg, system memory 1108 of FIG. 10) that includes computer code configured to interact with a processor (eg, FIG. 10) The example processor 1104) works together to cause the device to perform the set of operations 600. 
In at least one example embodiment, the set of operations 600 is performed independently of operating system software.In some cases, it may be desirable to perform operations associated with reduced thermal remediation power consumption after determining whether thermal information associated with the device is within a predetermined threshold. For example, if a device is at a high temperature, it may be beneficial to continue cooling the device even after it enters a thermally benign state, so that the device can reach lower temperatures before thermal remediation can be reduced. Without limiting the claims in any way, at least one technical advantage of having the execution of the operation further based on thermal sensor information indicative of temperature within a predefined threshold is that the device can be allowed to reach lower before thermal remediation can be reduced temperature.At block 602, the device receives a signal indicating whether the at least one device is in a low power mode, similar to that described with reference to block 402 of FIG. At block 604, the device determines whether the at least one device is in a thermally benign state based at least in part on the signal, similar to that described with respect to block 404 of FIG. At block 604, if the device determines that the at least one device is in a thermally benign state, then flow continues to block 606. Otherwise, the flow returns to block 602. At block 606, the device receives thermal sensor information similar to that described with reference to FIG. At block 608, the device determines if the thermal sensor information indicates a temperature within a predefined temperature threshold. At block 608, if the device determines that the temperature exceeds the predetermined temperature threshold, then flow returns to block 602. Otherwise, the flow continues to block 610. Accordingly, the apparatus may perform the operations of block 610 in response to determining that the at least one device is in a thermally benign state and further in response to determining that the thermal sensor information indicates a temperature within the predetermined temperature threshold. At block 610, the device performs operations associated with causing reduced thermal remediation power consumption, similar to that described with reference to block 406 of FIG.FIG. 7 is still another flow diagram showing a set of operations 700 for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment. This set of operations 700 can be utilized by a device, such as the example system of FIG. 10 or a portion thereof. The apparatus can include means for performing the operations of FIG. 7, including, for example, the processor 1104 of FIG. In an example embodiment, a device (eg, system 1100 of FIG. 10) is transformed by having a memory (eg, system memory 1108 of FIG. 10) that includes computer code configured to interact with a processor (eg, FIG. 10) The example processor 1104) works together to cause the device to perform the set of operations 700. In at least one example embodiment, the set of operations 700 is performed independently of operating system software.In at least one example embodiment, there may be more than one level of granularity associated with a thermally benign state. For example, there may be a thermally benign state associated with less heat generation than a different thermally benign state. 
For example, there may be multiple levels of thermal benign states each associated with a different level of fever. Under these circumstances, it may be desirable to base the operation performed in response to the determination of the thermal benign state based on the level of fever associated with the thermally benign state. For example, when the device is in a second thermal benign state associated with less heat than the first thermally benign state, it may be desirable to perform a second operation that is associated with the heat associated with the first operation Remedy associated power consumption is associated with more thermal remediation power consumption.At block 702, the device receives a signal indicating whether the at least one device is in a low power mode, similar to that described with reference to block 402 of FIG. At block 704, the device determines whether the at least one device is in the first thermal benign state based at least in part on the signal, similar to that described with reference to block 404 of FIG. At block 704, if the device determines that the at least one device is in the first thermal benign state, then flow continues to block 706. Otherwise, flow continues to block 708. At block 706, the device performs operations associated with causing reduced thermal remediation power consumption, similar to that described with reference to block 406 of FIG. At block 704, if the device determines that the at least one device is not in the first thermal benign state, then at block 708, the device determines whether the at least one device is in the second thermal benign state, as described with reference to block 404 of FIG. similar. In at least one example embodiment, the second thermally benign state is associated with less heat than the first thermally benign state. At block 708, if the device determines that the at least one device is in the second thermal benign state, then flow continues to block 710. Otherwise, the flow returns to block 702. At block 710, the device performs a second operation associated with causing reduced thermal remediation power consumption. In at least one example embodiment, the second operation is associated with a reduction in power consumption associated with thermal remediation that results in a reduction in power consumption associated with the first operation.FIG. 8 is yet another flow diagram showing a set of operations 800 for causing reduced thermal remediation power consumption, in accordance with at least one example embodiment. This set of operations 800 can be utilized by a device, such as the example system of FIG. 10 or a portion thereof. The apparatus can include means for performing the operations of FIG. 8, including, for example, the processor 1104 of FIG. In an example embodiment, a device (eg, system 1100 of FIG. 10) is transformed by having a memory (eg, system memory 1108 of FIG. 10) that includes computer code configured to interact with a processor (eg, FIG. 10) The example processor 1104) works together to cause the device to perform the set of operations 800. In at least one example embodiment, the set of operations 800 is performed independently of operating system software.At block 802, the device receives a signal indicating whether the at least one device is in a low power mode, similar to that described with reference to block 402 of FIG. 
At block 804, the device determines whether the at least one device is in the first thermal benign state based at least in part on the signal, similar to that described with reference to block 404 of FIG. At block 804, if the device determines that the at least one device is in the first thermal benign state, then flow continues to block 806. Otherwise, the flow continues to block 808. At block 806, the device performs operations associated with causing reduced thermal sensor sampling frequency and standard cooling device operation, similar to that described with reference to Figures 1 and 4. At block 804, if the device determines that the at least one device is not in the first thermal benign state, then at block 808, the device determines whether the at least one device is in the second thermal benign state, as described with reference to block 404 of FIG. similar. In at least one example embodiment, the second thermally benign state is associated with less heat than the first thermally benign state. At block 808, if the device determines that the at least one device is in the second thermal benign state, then flow continues to block 810. Otherwise, flow continues to block 812. At block 810, the device performs operations associated with causing reduced thermal sensor sampling frequency and reduced cooling device operation.At block 808, if the device determines that the at least one device is not in the second thermal benign state, then at block 812, the device determines whether the at least one device is in the third thermal benign state, as described with reference to block 404 of FIG. similar. In at least one example embodiment, the third thermally benign state is associated with less heat than the second thermally benign state. At block 812, if the device determines that the at least one device is in the third thermal benign state, then flow continues to block 814. Otherwise, the flow continues to block 816. At block 814, the device performs operations associated with causing thermal sensor sampling termination and cooling device operation termination. At block 812, if the device determines that the at least one device is not in the third thermal benign state, then at block 816, the device performs operations associated with causing standard thermal sensor sampling and standard cooling system operation.9 is a simplified block diagram associated with an example ARM ecosystem SOC 1000 of the present disclosure. At least one example implementation of the present disclosure includes integration of the energy saving features discussed herein with ARM components. For example, the example of FIG. 9 can be associated with any ARM core (eg, A-9, A-15, etc.). In addition, the architecture can be any type of tablet, smart phone (including AndroidTM phone, i-PhonesTM), i-PadTM, Google NexusTM, Microsoft SurfaceTM, personal computer, server, video processing component, laptop (including any A type of notebook), part of any type of touch-enabled input device, and so on.In this example of FIG. 9, the ARM ecosystem SOC 1000 can include a plurality of cores 1006-1007, a level two cache control 1008, a bus interface unit 1009, a level two cache 1010, a graphics processing unit (GPU) 1015, and an interconnect. 
1012, a video codec 1020, and a liquid crystal display (LCD) interface 1025, the LCD interface being associated with a Mobile Industrial Processor Interface (MIPI) / High Definition Multimedia Interface (HDMI) link coupled to the LDC.The ARM ecosystem SOC 1000 may also include a Subscriber Identity Module (SIM) interface 1030, a boot read only memory (ROM) 1035, a synchronous dynamic random access memory (SDRAM) controller 1040, a flash controller 1045, and a serial peripheral interface (SPI). Host 1050, suitable power control 1055, dynamic RAM (DRAM) 1060, and flash memory 1065. Additionally, one or more example embodiments include one or more communication capabilities, interfaces, and features, such as examples of Bluetooth 1070, 3G modem 1075, Global Positioning System (GPS) 1080, and 802.11 WiFi 1085.In operation, the example of FIG. 9 can provide processing power along with relatively low power consumption to enable various types of computing (eg, mobile computing, high-end digital homes, servers, wireless infrastructure, etc.). In addition, such an architecture can enable any number of software applications (eg, AndroidTM,Player, Java Platform Standard Edition (Java SE), JavaFX, Linux, Microsoft Windows Embedded, Symbian, and Ubuntu, etc.). In at least one example embodiment, a core processor may implement an out-of-order superscalar pipeline with a coupled low latency secondary cache.10 is a simplified block diagram showing possible electronic devices and logic that may be associated with any of the power saving operations discussed herein. In at least one example embodiment, system 1100 includes a touch controller 1102, one or more processors 1104, system control logic 1106 coupled to at least one of processors 1104, system memory 1108 coupled to system control logic 1106, A non-volatile memory and/or storage device 1110 coupled to system control logic 1106, a display controller 1112 coupled to system control logic 1106, a display controller 1112 coupled to the display, and power management control coupled to system control logic 1106 The device 1118, and/or is coupled to the communication interface 1120 of the system control logic 1106.In at least one example embodiment, system control logic 1106 includes any suitable interface control for providing any suitable interface to at least one processor 1104 and/or to any suitable device or component in communication with system control logic 1106. Device. In at least one example embodiment, system control logic 1106 includes one or more memory controllers for providing an interface to system memory 1108. System memory 1108 can be used, for example, to load and store data and/or instructions for system 1100. In at least one example embodiment, system memory 1108 includes any suitable volatile memory such as, for example, a suitable dynamic random access memory (DRAM). In at least one example embodiment, system control logic 1106 includes one or more inputs/outputs (I/) for providing interfaces to display device, touch controller 1102, and non-volatile memory and/or storage device 1110. O) Controller.Non-volatile memory and/or storage device 1110 can be used to store data and/or instructions within, for example, software 1128. The non-volatile memory and/or storage device 1110 may comprise any suitable non-volatile memory such as, for example, a flash memory, and/or may include, for example, one or more hard disk drives (HDDs), one or more optical disks. 
Any suitable non-volatile storage device such as a (CD) drive, and/or one or more digital versatile disc (DVD) drives.The power management controller 1118 can include power management logic 1130 that is configured to control the various power management and/or power saving functions discussed herein or any portion thereof. In at least one example embodiment, power management controller 1118 is configured to reduce power consumption of components or devices of system 1100 that can operate with reduced power or are turned off when the electronic device is in a closed configuration . For example, in at least one example embodiment, when the electronic device is in a closed configuration, the power management controller 1118 performs one or more of the following: turning off an unused portion of the display and/or any backlight associated therewith; Requiring less computing power in the closed configuration allows one or more of the processors 1104 to enter a lower power state; and shutting down any devices and/or components that are not in use, such as the keyboard 108, when the electronic device is in a closed configuration.Communication interface 1120 can provide system 1100 with an interface for communicating over one or more networks and/or with any other suitable device. Communication interface 1120 can include any suitable hardware and/or firmware. In at least one example embodiment, communication interface 1120 can include, for example, a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.In at least one example embodiment, system control logic 1106 includes one or more input/output (I/O) controllers for providing, for example, to assist in converting sound to corresponding digital signals and/or for An interface to any suitable input/output device that facilitates the conversion of digital signals into corresponding sounds, audio devices, cameras, camcorders, printers, and/or scanners.As at least one example embodiment, at least one processor 1104 can be packaged with logic of one or more controllers of system control logic 1106. In at least one example embodiment, at least one processor 1104 can be packaged with logic of one or more controllers of system control logic 1106 to form a system in package (SiP). In at least one example embodiment, at least one processor 1104 can be integrated with the logic of one or more controllers of system control logic 1106 on the same die. As at least one example embodiment, at least one processor 1104 can be integrated with the logic of one or more controllers of system control logic 1106 on the same die to form a system on a chip (SoC).For touch control, touch controller 1102 can include touch sensor interface circuitry 1122 and touch control logic 1124. Touch sensor interface circuitry 1122 can be coupled to detect touch inputs on the first touch surface layer and the second touch surface layer of display 11 (ie, display device 1110). Touch sensor interface circuit 1122 can include any suitable circuitry, for example, that depends, at least in part, on the touch sensitive technology used by the touch input device. In one embodiment, touch sensor interface circuitry 1122 can support any suitable multi-touch technology. In at least one embodiment, touch sensor interface circuit 1122 includes any suitable circuitry that converts analog signals corresponding to the first touch surface layer and the second surface layer into any suitable digital touch input data. 
For one embodiment, suitable digital touch input data can include, for example, touch location or coordinate data.Touch control logic 1124 can be coupled to help control touch sensor interface circuitry 1122 to detect touch inputs on the first touch surface layer and the second touch surface layer in any suitable manner. As at least one example embodiment, the coupled touch control logic 1124 also outputs digital touch input data corresponding to the touch input detected by the touch sensor interface circuit 1122 in any suitable manner. Touch control logic 1124 can be implemented using any suitable logic, including any suitable hardware, firmware, and/or software logic (eg, non-transitory tangible media), depending at least in part on, for example, touch sensor interface circuitry 1122 Circuit. For one embodiment, touch control logic 1124 can support any suitable multi-touch technology.Touch control logic 1124 can be coupled to output digital touch input data to system control logic 1106 and/or at least one processor 1104 for processing. For one embodiment, at least one processor 1104 can execute any suitable software for processing digital touch input data output from touch control logic 1124. Suitable software may include, for example, any suitable driver software and/or any suitable application software. As shown in FIG. 11, system memory 1108 can store suitable software 1126 and/or non-volatile memory and/or storage devices.Note that in some example implementations, the power management functions outlined herein may be implemented in conjunction with logic encoded in one or more tangible, non-transitory media (eg, in an application specific integrated circuit (ASIC), digital signal processor ( DSP) instructions, software to be executed by a processor or other similar machine [may include embedded logic provided in object code and source code]. In some of these examples, the memory element can store data for the operations described herein. This includes memory elements capable of storing software, logic, code or processor instructions that are executed to perform the activities described herein. The processor can execute any type of instructions associated with the data to perform the operations described herein. In one example, a processor can transform an element or artifact (eg, data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented in fixed logic or programmable logic (eg, software/computer instructions executed by a processor), and the elements identified herein may be some type of programmable processor, Programmable digital logic (eg, Field Programmable Gate Array (FPGA), Erasable Programmable Read Only Memory (EPROM), EEPROM, or digital logic, software, code, An ASIC of electronic instructions, or any suitable combination thereof).Note that in the case of the examples provided above and numerous other examples provided herein, the interactions may be described more generally in terms of layers, protocols, interfaces, spaces, and environments. However, this is done for clarity and example purposes only. In certain circumstances, it may be easier to describe one or more functions in a given set of processes by simply referring to a limited number of components. It should be understood that the architecture (and its teachings) discussed herein can be easily scaled and can accommodate a large number of components as well as more complex/fine arrangements and configurations. 
Accordingly, the examples provided should not limit the scope of the disclosure or the broad teachings of the disclosure, as may be applicable to numerous other architectures.It is also important to note that the blocks in the flowcharts only show some of the possible signaling scenarios and modes that may be performed in or discussed in the circuits discussed herein. Some of these blocks may be deleted or removed as appropriate, or the steps may be substantially modified or altered without departing from the scope of the teachings provided herein. Additionally, a number of these operations have been described as being performed concurrently or in parallel with one or more additional operations. However, the timing of these operations can be drastically changed. The previous operational flow has been provided for the purposes of illustration and discussion. The present invention provides a great deal of flexibility in that any suitable arrangement, chronological order, configuration, and timing mechanism can be provided without departing from the teachings provided herein.It is also important to note that all of the specifications, protocols, and relationships outlined herein (eg, specific commands, timing intervals, support assistive components, etc.) are provided for purposes of example and teaching only. Each of these data may vary significantly without departing from the spirit of the disclosure or the scope of the appended claims. The specifications apply to many different non-limiting examples, and accordingly they should be interpreted as such. In the above description, various example embodiments have been described. Various modifications and changes may be made to these embodiments without departing from the scope of the appended claims. Accordingly, the specification and drawings are to be regarded in aA person skilled in the art can recognize a variety of other variations, substitutions, variations, changes and modifications, and the present disclosure is intended to cover all such modifications, alternatives, variations, changes and modifications as fall within the scope of the appended claims Inside. In order to assist the United States Patent and Trademark Office (USPTO) and any readers of any patents issued on this application to interpret the appended claims, the Applicant draws attention to the Applicant: (a) is not intended to be in any accompanying claims Invoking paragraph 6(6) of section 112 of Section 35U.SC (since it already exists on the date of application), unless the words "means for" or "steps for" are specifically used in The present disclosure is not limited by any of the statements in the appended claims.Example embodiment implementationAt least one specific example implementation can include a device that includes means for receiving signals (e.g., via any suitable interface, link, bus, communication path, etc.). The signal can indicate whether at least one device is in a low power mode. The apparatus can also include means for determining (eg, via a processor, software, circuitry, hub, controller, etc.) based at least in part on the signal that the at least one device is in a first thermally benign state, and for (eg, Means for performing a first operation associated with reduced thermal remediation power consumption via a processor, software, circuitry, hub, controller, or the like. |
The invention relates to a physical distributed control plane firewall with a unified software view. Various embodiments include techniques for processing transactions via a computer system interconnecting with a distributed firewall. The distributed firewalls include individual firewalls for respective initiators of transactions and individual firewalls for respective targets of those transactions. Thus, for example, transactions proceed along the shortest path from the initiator to the target, rather than being routed through the centralized firewall. Furthermore, for example, firewall transactions may be remapped such that the initiator addresses the initiator firewall and the target firewall via a unified address space without having to maintain a separate base address for each of the initiator firewall and the target firewall. Thus, an application may, for example, execute transactions of increased performance on a computer system compared to existing methods. |
CLAIMS 1. A computer-implemented method for processing a first transaction via an interconnect, the method comprising:determining that the first transaction is directed to a firewall;suspending execution of the first transaction;modifying a memory address included in the first transaction and in a first memory address format to generate a modified memory address in a second memory address format; andThe first transaction including the modified memory address is communicated to the firewall via the interconnect.2. The computer-implemented method of claim 1, wherein:the firewall includes an initiator firewall performing a first authorization function on the second transaction, andThe target firewall performs a second authorization function on the second transaction.3. The computer-implemented method of claim 2, wherein the first authorization function comprises determining that an initiator associated with the initiator firewall is authorized to direct the second transaction to a memory address space, the memory address The space includes a second memory address included in the second transaction.4. The computer-implemented method of claim 2, wherein the second authorization function comprises determining whether an initiator associated with the initiator firewall is authorized to direct the second transaction to an The goal.5. The computer-implemented method of claim 1, further comprising, after aborting execution of the first transaction, forwarding the first transaction to a firewall remapper.6. The computer-implemented method of claim 1 , further comprising, after modifying the memory address included in the first transaction to generate the modified memory address, forwarding the first transaction to An address space associated with the modified memory address.7. The computer-implemented method of claim 1, wherein the firewall comprises an initiator firewall and a target firewall, and the method further comprises:determining a path between an initiator and at least one of the initiator firewall or the target firewall through a plurality of nodes included in the interconnection; andThe first transaction is transmitted from the originator to the at least one of the originator firewall or the target firewall via the path.8. The computer-implemented method of claim 1, wherein the first memory address format comprises:a base address for a firewall address space comprising a plurality of address spaces corresponding to a plurality of firewalls comprising the firewall; andThe offset associated with the firewall.9. The computer-implemented method of claim 1, wherein the second memory address format comprises:corresponds to the base address of the firewall address space of the firewall; andThe offset associated with the firewall.10. The computer-implemented method of claim 1, wherein the firewall comprises an initiator firewall configured to perform authorization functions for a plurality of initiators.11. The computer-implemented method of claim 1, wherein the firewall comprises a target firewall configured to perform authorization functions for a plurality of targets.12. The computer-implemented method of claim 1 , wherein the firewall comprises an initiator firewall coupled to a first node included in a plurality of nodes within the interconnect, and a second firewall comprises a Describe the target firewall of the first node.13. 
The computer-implemented method of claim 1 , wherein the firewall comprises an initiator firewall coupled to a first node included in a plurality of nodes within the interconnect, and a second firewall comprises a A target firewall of a second node included in the plurality of nodes.14. A system comprising:initiator firewall;target firewall;Firewall catcher, which:determining that the first transaction is directed to either the initiator firewall or the target firewall, andsuspending execution of the first transaction;A firewall remapper that:modifying a memory address included in the first transaction and in a first memory address format to generate a modified memory address in a second memory address format; andan interconnect that communicates the first transaction including the modified memory address to the initiator firewall or the target firewall.15. The system of claim 14, wherein:the initiator firewall performs a first authorization function on the second transaction, andThe target firewall performs a second authorization function on the second transaction.16. The system of claim 15, wherein the first authorization function comprises determining that an initiator associated with the initiator firewall is authorized to direct the second transaction to a memory address space comprising A second memory address included in the second transaction.17. The system of claim 15, wherein the second authorization function includes determining whether an initiator associated with the initiator firewall is authorized to direct the second transaction to a target protected by the target firewall.18. The system of claim 14, wherein after aborting execution of the first transaction, the firewall trapper further forwards the first transaction to the firewall remapper.19. The system of claim 14 , wherein after the firewall remapper modifies the memory address included in the first transaction to generate the modified memory address, the firewall trapper further converts The first transaction is forwarded to an address space associated with the modified memory address.20. The system of claim 14, wherein the firewalls include an initiator firewall and a target firewall, and wherein the interconnection is further:determining a path between the initiator and the target through a plurality of nodes included in the interconnection; andA second transaction is communicated from the initiator to the target via the path. |
Physically Distributed Control Plane Firewall with Unified Software Viewtechnical fieldVarious embodiments relate generally to parallel processing computing architectures, and more specifically to physically distributed control plane firewalls with a unified software view.Background techniqueComputer systems typically include, among other things, one or more processing units such as a central processing unit (CPU) and/or a graphics processing unit (GPU), one or more memory systems, registers, input/output (I/O) /O device), etc. In operation, various components of the computer system act as initiators, as targets, or as both initiators and targets. When a component acts as an initiator, the component directs a transaction, such as a load operation or a store operation, through the interconnection system and toward another component in the computer system, called a target. In some examples, the initiator may perform a load operation to retrieve one or more data values from memory locations and/or registers included in the target. Similarly, in some examples, the initiator may perform a store operation to store one or more data values to memory locations and/or registers included in the target. The physical path from the initiator to the target's register configuration space or other control space is referred to herein as the control plane. Typical components used as initiators include CPUs and GPUs. Typical components used as targets include memory systems, registers, and I/O devices. I/O devices used as targets include Universal Asynchronous Receiver/Transmitters (UARTs), Serial Peripheral Interfaces (SPIs), Ethernet controllers, and others. Further, a CPU may initiate a transaction targeting a GPU or another CPU. Similarly, a GPU may initiate a transaction targeting the CPU or another GPU.Often, computer systems that include multiple initiators and multiple targets can employ firewall mechanisms, or more simply, firewalls for security purposes. A firewall acts as a security system that monitors and controls incoming and outgoing transactions across the interconnected system to ensure that each initiator accesses only memory address spaces and/or other targets that the initiator is allowed to access. Firewalls prevent applications executing on one initiator from inadvertently or intentionally interfering with applications executing on another initiator. With a firewall, each transaction generated by an initiator is received by the firewall, and the firewall determines whether the initiator is authorized to direct the transaction to the memory address space and/or other target specified by the transaction. If the firewall determines that the initiator is not authorized to direct the transaction to the memory address space and/or other destination, the firewall blocks the transaction. On the other hand, if the firewall determines that the initiator is authorized to direct the transaction to the memory address space and/or other destination, the firewall allows the transaction to proceed.This approach of funneling all transactions to the firewall works well for small to medium sized computer systems. However, some larger computer systems may have hundreds or even thousands of initiators and targets. 
In such large computer systems, because the firewall handles transactions for hundreds or thousands of originators, the firewall can become a bottleneck, resulting in reduced bandwidth, increased latency, and reduced computing power, resulting in significantly reduced performance of the computer system.As noted above, what is needed in the art are more efficient techniques for implementing firewalls in computer systems.Contents of the inventionVarious embodiments of the present disclosure set forth a computer-implemented method for processing a first transaction via an interconnect. The method includes determining that the first transaction is directed to a firewall. The method also includes aborting execution of the first transaction. The method further includes modifying a memory address included in the first transaction and in a first memory address format to generate a modified memory address in a second memory address format. The method also includes transmitting the first transaction including the modified memory address to the firewall via the interconnect.Other embodiments include, but are not limited to, systems implementing one or more aspects of the disclosed techniques, and one or more computer-readable media comprising instructions for performing one or more aspects of the disclosed techniques , and methods for performing one or more aspects of the disclosed technology.At least one technical advantage of the disclosed technique over the prior art is that, with the disclosed technique, firewalls are distributed such that each initiator is associated with a separate initiator firewall and each target is associated with a target firewall. Each transaction is routed along a path directly from the initiator firewall associated with the initiator to the target firewall associated with the target. Because transactions are not routed through a centralized firewall, transactions are processed more efficiently relative to existing methods. These advantages represent one or more technical improvements over prior art methods.Description of drawingsSo that the manner in which the above recited features of various embodiments can be understood in detail, a more particular description of the inventive concepts briefly summarized above may be had by reference to various embodiments, some of which are shown in the accompanying drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of the inventive concept and are therefore not to be considered limiting in any way of scope, as there may be other equally effective embodiments.Figure 1 is a block diagram of a computer system configured to implement one or more aspects of various embodiments;2 is a block diagram of a firewall system for the computer system of FIG. 1, according to various embodiments;3A is a block diagram of a firewall system having a centralized firewall for the computer system of FIG. 1 , according to various embodiments;3B is a block diagram of a firewall system having a distributed firewall for the computer system of FIG. 1, according to various embodiments;4A-4B illustrate a flowchart of method steps for processing transactions via the interconnection and distributed firewall of FIG. 3B, according to various embodiments.Detailed waysIn the following description, numerous specific details are set forth in order to provide a more thorough understanding of various embodiments. 
It will be apparent, however, to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.System OverviewFIG. 1 is a block diagram of a computer system 100 configured to implement one or more aspects of various embodiments. As shown, computer system 100 includes, but is not limited to, central processing unit (CPU) 102 and system memory 104 coupled to parallel processing subsystem 112 via memory bridge 105 and communication path 113 . Memory bridge 105 is further coupled to I/O (input/output) bridge 107 via communication path 106 , and I/O bridge 107 is in turn coupled to switch 116 .In operation, I/O bridge 107 is configured to receive user input from input device 108 , such as a keyboard or mouse, and to forward the input via communication path 106 and memory bridge 105 to CPU 102 for processing. Switch 116 is configured to provide connectivity between I/O bridge 107 and other components of computer system 100 , such as network adapter 118 and various add-in cards 120 and 121 .As also shown, I/O bridge 107 is coupled to system disk 114 , which may be configured to store content and applications and data used by CPU 102 and parallel processing subsystem 112 . In general, the system disk 114 provides non-volatile storage for applications and data, and may include fixed or removable hard drives, flash memory devices, and CD-ROM (Compact Disk Read-Only Memory), DVD-ROM (Digital Versatile Disk -ROM), Blu-ray, HD-DVD (High-Definition DVD), or other magnetic, optical, or solid-state storage devices. Finally, although not explicitly shown, other components (such as Universal Serial Bus or other port connections, compact disk drives, digital versatile disk drives, film recording devices, etc.) may also be connected to I/O bridge 107 .In various embodiments, memory bridge 105 may be a north bridge chip and I/O bridge 107 may be a south bridge chip. Furthermore, communication paths 106 and 113, as well as other communication paths within computer system 100, may be implemented using any technically suitable protocol, including but not limited to AGP (Accelerated Graphics Port), HyperTransport, or any other bus known in the art Or point-to-point communication protocol.In some embodiments, parallel processing subsystem 112 includes a graphics subsystem that delivers pixels to display device 110, which may be any conventional cathode ray tube, liquid crystal display, light emitting diode display, or the like. In such an embodiment, parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry. Such circuitry may be incorporated across one or more parallel processing units (PPUs) included within parallel processing subsystem 112 . In other embodiments, parallel processing subsystem 112 incorporates circuits optimized for general-purpose and/or computational processing. Again, such circuitry may be incorporated across one or more PPUs included within parallel processing subsystem 112 configured to perform such general-purpose and/or computational operations. In other embodiments, one or more PPUs included within parallel processing subsystem 112 may be configured to perform graphics processing, general-purpose processing, and computational processing operations. 
System memory 104 includes at least one device driver 103 configured to manage processing operations of one or more PPUs within parallel processing subsystem 112 .In various embodiments, parallel processing subsystem 112 may be integrated with one or more other elements of FIG. 1 to form a single system. For example, parallel processing subsystem 112 may be integrated on a single chip with CPU 102 and other connected circuits to form a system on a chip (SoC).It will be understood that the systems shown here are illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, can be modified as desired. For example, in some embodiments, system memory 104 may be connected directly to CPU 102 rather than through memory bridge 105 , and other devices will communicate with system memory 104 via memory bridge 105 and CPU 102 . In other alternative topologies, parallel processing subsystem 112 may be connected to I/O bridge 107 or directly to CPU 102 instead of to memory bridge 105 . In other embodiments, I/O bridge 107 and memory bridge 105 may be integrated into a single chip rather than exist as one or more discrete devices. Finally, in some embodiments, one or more components shown in FIG. 1 may not be present. For example, switch 116 could be eliminated and network adapter 118 and add-in cards 120 , 121 would connect directly to I/O bridge 107 .Physically Distributed Control Plane Firewall with Unified Software ViewVarious embodiments include techniques for processing transactions via computer systems interconnected with distributed firewalls. Using the disclosed technology, firewalls are distributed between initiators and targets in computer system interconnections. Place firewalls with initiator affinity closer to the corresponding initiators. These firewalls (referred to as originator firewalls) restrict transactions from corresponding originators to specified and limited address ranges. A firewall with target affinity is placed closer to the corresponding target. The target firewall restricts transactions to only those from initiators authorized to access the corresponding target. Generally, one of the initiators is responsible for configuring both the initiator firewall and the target firewall. Before configuration, the initiator and target firewalls are transparent, allowing all transactions to pass through. After the initiator and target firewalls are configured, the initiator and target firewalls can detect and authorize or block access from other initiators.Additionally, the firewall remapper performs mapping functions during the initialization and configuration of the initiator and target firewalls. Firewall Remapper assists in the programming and configuration of initiator and target firewalls. A firewall remapper maps individual initiator and target firewalls in a computer system interconnect into a single unified view. The unified view consolidates the memory address spaces of the various initiator and target firewalls into a single address space that includes the memory address spaces of all initiator and target firewalls. As a result, the initiator does not need to manage separate address spaces for each initiator firewall and target firewall, but instead accesses all firewalls via a unified view from a software perspective.FIG. 2 is a block diagram of a firewall system 200 for the computer system 100 of FIG. 
1 , according to various embodiments. As shown, firewall system 200 includes, but is not limited to, initiator 210 and target 220 that communicate with each other via interconnect 230 . Interconnect 230 is also referred to herein as a "mesh network." Initiator 210 includes, but is not limited to, CPU complex 242 , boot processor 244 , power management processor 246 , camera processor 248 , security processor 250 , and debug processor 252 . Targets include, but are not limited to, PCIe targets 260 and peripheral bus targets 262 .CPU complex 242 includes one or more central processing units, such as CPU 102 of FIG. 1 . In some embodiments, CPU complex 242 may also include one or more parallel processors, such as parallel processing subsystem 112 of FIG. 1 . In operation, CPU 102 , parallel processing subsystem 112 , and/or other processors included in CPU complex 242 execute operating systems, user-level applications, and/or other executable software applications.In operation, boot processor 244 executes a boot sequence that includes various functions for initializing firewall system 200 when firewall system 200 is powered on and/or reset. The boot sequence performed by boot processor 244 may include functions for initializing memory, configuring various targets 220, loading and executing one or more operating systems and/or hypervisors, and the like. Upon completion of the boot sequence, boot processor 244 may notify CPU 102, parallel processing subsystem 112, and/or other processors included in CPU complex 242 that the boot sequence is complete and execution of applications may begin.Power management processor 246 monitors the activity of other components of firewall system 200 . Power management processor 246 controls various power gating control signals configured to provide or remove power to various components of firewall system 200 . If a particular component (whether initiator 210 or target 220) remains idle for a period of time, power management processor 246 changes the level of the corresponding power gating control signal in order to remove power to the particular component. In response, the component transitions from a powered-on state to a powered-down state. Subsequently, if power management processor 246 determines that a component in a powered-down state is required to support operations within firewall system 200, power management processor 246 changes the level of the corresponding power gating control signal to apply power to that particular component. components. In response, the component transitions from a powered-down state to a powered-on state. Additionally or alternatively, the power management processor 246 may provide control signals to gate or remove clock signals for various idle components to reduce power consumption. In this manner, power management processor 246 reduces power consumption of firewall system 200 when one or more components are idle.Camera processor 248 receives image data from a still image camera and/or a video camera (not shown). In general, image data can be large in size and can involve extensive preprocessing. Accordingly, firewall system 200 includes a separate camera processor 248 that retrieves and processes this image data. Thus, CPU 102, parallel processing subsystem 112, and/or other processors included in CPU complex 242 are freed up to perform other tasks.Security processor 250 performs various security functions to detect malware, software viruses, memory leaks, and applications that otherwise exhibit suspicious behavior. 
Security processor 250 isolates such applications and/or the processors executing them to mitigate any damage that may be caused by the applications.Debug processor 252 supports software development by providing debug and trace functionality for CPU 102 , parallel processing subsystem 112 and/or other processors included in CPU complex 242 . The debug handler 252 may set traps to halt the execution of the application under certain conditions or when a breakpoint placed in the application is detected. Debug processor 252 may generate a trace including the state of the processor executing the application. Debugging the processor 252 may allow a software developer to modify the state of the processor before resuming execution. Debug processors can further allow applications to execute one instruction at a time and then stop after each instruction. These functions enable software developers to monitor the behavior of applications and facilitate debugging of applications.CPU complex 242 , boot processor 244 , power management processor 246 , camera processor 248 , security processor 250 , debug processor 252 , and/or other initiators 210 communicate with various targets 220 via interconnect 230 . Targets include PCIe target 260 and peripheral bus target 262 .PCIe target 260 includes the components of firewall system 200 that communicate with initiator 210 via the Peripheral Component Interconnect Express (PCIe) bus standard. PCIe targets 260 may include, but are not limited to, secondary graphics processors, audio/sound processors, display processors, and the like.Peripheral bus target 262 includes components of firewall system 200 that communicate with initiator 210 via any technically feasible communication channel other than the PCIe bus standard. Peripheral bus targets 262 may include, but are not limited to, memory systems, universal asynchronous receiver/transmitters (UARTs), serial peripheral interfaces (SPIs), Ethernet controllers, and the like.To provide security isolation across applications executing on various initiators 210 and to mitigate interference from one application to other applications, firewall system 200 may implement firewall 232 . Firewall 232 acts as a security system that monitors and controls incoming and outgoing transactions across interconnect 230 to ensure that each initiator 210 only accesses memory address spaces and/or other targets 220 that initiator 210 is allowed to access. Firewall 232 is deployed within interconnect 230 to avoid dependencies and/or interference from initiators, such as interference due to power gating, clock gating, and reset loops. Accordingly, firewall 232 authorizes transactions regardless of the activities of the respective initiators 210 . In general, firewall 232 is reset when needed by a secure reset signal that is inaccessible to non-secure applications executing on initiator 210 .FIG. 3A is a block diagram of a firewall system 300 having a centralized firewall for the computer system 100 of FIG. 1 , according to various embodiments. As shown, firewall system 300 includes, but is not limited to, initiator 302 and target 304 that communicate with each other via interconnect 310 . Interconnect 310 is also referred to herein as a "mesh network." Interconnect 310 includes, but is not limited to, node (A) 306 and centralized firewall 308 .Certain nodes A 306 in interconnection 310 are associated with initiator 302 but not with target 304 . 
For example, nodes A(0, 1) 306(1), A(0, 2) 306(2) and A(0, 3) 306(3) communicate with initiator (0, 1) 302(1), initiator Party (0,2) 302(2) is associated with initiator (0,3) 302(3). Similarly, nodes A(3,1) 306(1), A(3,2) 306(2) and A(3,3) 306(1) communicate with initiators (3,1) 302(6), Initiator(3,2) 302(7) and Initiator(3,3) 302(8) are associated. Certain nodes A 306 in interconnection 310 are associated with target 304 but not with initiator 302 . For example, nodes A(1,0) 306(5) and A(2,0) 306(10) are associated with target (1,0) 304(1) and target (2,0) 304(2), respectively. Similarly, nodes A(1,4) 306(9) and A(2,4) 306(14) are associated with target (1,4) 304(5) and target (2,4) 304(6), respectively .Certain nodes A 306 in interconnection 310 are associated with both initiator 302 and target 304 . For example, node A (0,0) 306(0) is associated with both an initiator (0,0) 302(0) and a target (0,0) 304(0). Node A (0,4) 306(4) is associated with initiator (0,4) 302(4) and target (0,4) 304(4). Node A (3,0) 306(15) is associated with initiator (3,0) 302(5) and target (3,0) 304(3). Node A (3,4) 306(19) is associated with initiator (3,4) 302(9) and target (3,4) 304(7). Certain nodes A 306 in interconnection 310 are interconnected with other nodes 306 and/or centralized firewall 308 , but are not directly associated with any initiator 302 or target 304 . For example, nodes A(1,1) 306(6), A(1,3) 306(8), A(2,1) 306(11), A(2,2) 306(12) and A(2 , 3) 306(13) are interconnected with other nodes 306, but are not directly associated with any initiator 302 or target 304.To provide security isolation across applications executing on various originators 302 and to mitigate interference from one application to other applications, all transactions processed by firewall system 300 pass through centralized firewall 308 . In effect, all transactions are transmitted by the various originators 302 and merged at the centralized firewall 308 . For each transaction, centralized firewall 308 performs an authorization function to determine whether initiator 302 is authorized to access target 304 specified by the transaction. If centralized firewall 308 determines that initiator 302 is not authorized to direct a transaction to a memory address space associated with target 304, centralized firewall 308 blocks the transaction. On the other hand, if centralized firewall 308 determines that initiator 302 is authorized to direct the transaction to the memory address space associated with target 304, then centralized firewall 308 allows the transaction to proceed. Transactions authorized by the centralized firewall 308 are split at the output of the centralized firewall 308 and proceed to the corresponding target 304 .While a particular initiator 302 may generate transactions for individual targets 304 scattered throughout firewall system 300, centralized firewall 308 presents to initiator 302 a unified software view of the firewall for targets in the form of a single address space, independent of The number of firewalls for various targets 304 or the structure and/or topology of interconnection 310 . Using the unified software view, the initiator 302 addresses the firewall for a given target 304 by adding the offset associated with the firewall for the target 304 to a single firewall base address. For example, to address the firewall of target (0,0) 304(0), initiator 302 directs the transaction to firewall base address + target firewall offset (0,0). 
Similarly, to address the firewall of target (2,1) 304(11), initiator 302 directs the transaction to firewall base address + target firewall offset (2,1), and so on.Generally, one of the initiators is responsible for configuring the target's firewall via the unified software view of the firewall. Before configuration, the firewall is transparent, allowing all transactions to pass through. After the firewall is configured, it can detect and authorize or block access from other originators.In one particular example, an initiator (0,0) 302(0) executes a transaction directed to a target (0,0) 304(0). Even though the initiator (0,0) 302(0) and target (0,0) 304(0) are both associated with the same node A(0,0) 306(0), the transaction is not allowed to pass through node A(0 , 0) 306(0) is passed directly from the initiator (0,0) 302(0) to the target (0,0) 304(0). The transaction travels along path 330 from originator (0,0) 302(0) to centralized firewall 308 . In doing so, the transaction travels through nodes A(0,0) 306(0), A(0,1) 306(1), A(0,2) 306(2) and A(0,2) 306(2) before reaching the centralized firewall 308 (1,2)306(7). The centralized firewall 308 performs an authorization function to determine whether the initiator (0,0) 302(0) is authorized to access the target (0,0) 304(0) specified by the transaction. If centralized firewall 308 determines that initiator (0,0) 302(0) is not authorized to direct a transaction to the memory address space associated with target (0,0) 304(0), centralized firewall 308 blocks the transaction. On the other hand, if the centralized firewall 308 determines that the initiator (0,0) 302(0) is authorized to direct transactions to the memory address space associated with the target (0,0) 304(0), then the centralized firewall 308 allows The transaction travels along path 332 from centralized firewall 308 to destination (0,0) 304(0). In doing so, the transaction travels through nodes A(1,2) 306(7), A(1,1) 306(6), A(1,0) before reaching destination (0,0) 304(0) 306(5) and A(0,0) 306(0).The above approach is reasonably efficient for small interconnects 310 . However, this approach does not scale well for larger interconnects 310 for various reasons. First, all transactions across interconnect 310 are consolidated into centralized firewall 308 and then detached after being authorized by centralized firewall 308 . Accordingly, a substantial amount of bandwidth of interconnect 310 is consumed by transmitting transactions from initiator 302 to centralized firewall 308 and from centralized firewall 308 to target 304 . Second, communicating transactions from initiator 302 to target 304 via centralized firewall 308 , such as via paths 330 and 332 , increases the latency of transactions relative to communicating transactions directly from initiator 302 to target 304 . This increased delay results in reduced performance of interconnect 310 . Third, because all transactions pass through the centralized firewall 308, the utilization of the centralized firewall 308 can become very high, creating a hotspot in the area from a temperature and power consumption standpoint. Fourth, because all transactions are routed to and from the centralized firewall 308, the area around the centralized firewall 308 may have high line counts, which in turn may cause congestion and routing difficulties during placement. Fifth, centralized firewall 308 may have difficulty meeting timing requirements because centralized firewall 308 includes firewalls for all initiators 302 and targets 304 . 
Therefore, interconnect 310 with centralized firewall 308 is more suitable for relatively small firewall system 300 .FIG. 3B is a block diagram of a firewall system 350 with a distributed firewall for the computer system 100 of FIG. 1 , according to various embodiments. As shown, firewall system 350 includes, but is not limited to, initiator 302 , target 304 , initiator firewall 312 , and target firewall 314 in communication with each other via interconnect 320 . Interconnect 320 is also referred to herein as a "mesh network." Interconnect 320 includes, but is not limited to, node (A) 306 , firewall trap 322 and firewall remapper 324 . Initiator 302, target 304, and node 306 of firewall system 350 of FIG. 3B function similarly to initiator 302, target 304, and node 306 of firewall system 300 of FIG. 3A, except as further described below.With firewall system 350 of FIG. 3B , the firewall is distributed through firewall system 350 rather than centralized within interconnect 320 . A firewall with initiator affinity is referred to as an initiator firewall (IFW) 312 . Initiator firewall 312 is placed adjacent to corresponding initiators 302 and is used to restrict or sandbox each corresponding initiator 302 to address a limited range of addresses. In some embodiments, multiple initiator firewalls 312 may be combined and placed at common node 306 to sandbox multiple initiators 302 .Each initiator 302 of firewall system 350 is connected to a corresponding node 306 via a corresponding initiator firewall (IFW) 312 . Initiator(0,0)302(0), Initiator(0,1)302(1), Initiator(0,2)302(2), Initiator(0,3)302(3) and Initiator (0,4)302(4) via IFW(0,0)312(0), IFW(0,1)312(1), IFW(0,2)312(2), IFW(0,3) 312(3) and IFW(0,4) 312(4) are connected to nodes A(0,0) 306(0), A(0,1) 306(1), A(0,2) 306(2) , A(0,3) 306(3) and A(0,4) 306(4). Similarly, Initiator(3,0) 302(5), Initiator(3,1) 302(6), Initiator(3,2) 302(7), Initiator(3,3) 302(8) and initiator (3, 4) 302 (9) via IFW (3, 0) 312 (5), IFW (3, 1) 312 (6), IFW (3, 2) 312 (7), IFW (3 , 3) 312(8) and IFW(3,4) 312(9) are connected to nodes A(3,0) 306(15), A(3,1) 306(16), A(3,2) 306 (17), A(3,3)306(18), and A(3,4)306(19).A firewall with target affinity is referred to as a target firewall (TFW) 314 . Target firewalls 314 are placed adjacent to corresponding targets 304 that are protected by respective target firewalls 314 . In some embodiments, target firewall 314 may protect multiple targets by implementing a firewall for each of the multiple targets protected by target firewall 314 . In such an embodiment, the target firewall 314 scales to the number of targets 304 associated with the corresponding node 306 and the individual functions of each target 304 .Each target 304 of firewall system 350 is connected to a corresponding node 306 via a corresponding target firewall (TFW) 314 . Target (0, 0) 304 (0), target (1, 0) 304 (1), target (2, 0) 304 (2) and target (3, 0) 304 (3) respectively via TFW (0, 0 )314(0), TFW(1,0)314(1), TFW(2,0)314(2) and TFW(3,0)314(3) are connected to node A(0,0)306(0 ), A(1,0)306(5), A(2,0)306(10) and A(3,0)306(3). Similarly, target (0,4) 304(4), target (1,4) 304(5), target (2,4) 304(6) and target (3,4) 304(7) respectively via TFW( 0,4)314(4), TFW(1,4)314(5), TFW(2,4)314(6) and TFW(3,4)314(7) are connected to node A(0,4) 306(4), A(1,4) 306(9), A(2,4) 306(14), and A(3,4) 306(19).In one particular example, an initiator (0,0) 302(0) executes a transaction to a target (0,0) 304(0). 
The IFW (0,0) 312(0) of the initiator (0,0) 302(0) performs an authorization function to determine whether the initiator (0,0) 302(0) is authorized to access the target (0,0) specified by the transaction. 0) 304(0). In doing so, IFW (0,0) 312(0) of initiator (0,0) 302(0) checks the memory address specified by the transaction. IFW(0,0) 312(0) determines whether initiator(0,0) 302(0) is authorized to direct the transaction to the memory address space including the memory address specified by the transaction. If IFW(0,0) 312(0) determines that initiator(0,0) 302(0) is not authorized to direct the transaction to memory address space, then IFW(0,0) 312(0) blocks the transaction. On the other hand, if IFW(0,0) 312(0) determines that initiator(0,0) 302(0) is authorized to direct the transaction to memory address space, then IFW(0,0) 312(0) allows the transaction to continue conduct.Transactions are transferred directly from IFW (0,0) 312(0) of initiator (0,0) 302(0) to TFW (0,0) 314(0) of target (0,0) 304(0) via node 306 ). A transaction follows the shortest path from IFW(0,0)312(0) of initiator (0,0)302(0) to TFW(0,0)314(0) of target (0,0)304(0) . The shortest path can be determined via any technically feasible technique, such as Manhattan distance technique, Euclidian distance technique, and the like. As shown, the transaction follows path 340 from IFW(0,0) 312(0), through node A(0,0) 306(0), and then to TFW(0,0) 314(0).TFW(0,0) 314(0) performs an authorization function to determine whether the initiator (0,0) 302(0) is authorized to direct the transaction to the target (0,0) protected by TFW(0,0) 314(0) )304(0). If TFW(0,0) 314(0) determines that initiator (0,0) 302(0) is not authorized to direct the transaction to target (0,0) 304(0), then TFW(0,0) 314(0 ) blocks the transaction. On the other hand, if TFW(0,0) 314(0) determines that initiator (0,0) 302(0) is authorized to direct transactions to target (0,0) 304(0), then TFW(0,0) 314(0) allows the transaction to continue. TFW (0,0) 314(0) forwards the transaction to target (0,0) 304(0) for processing.With a distributed firewall, firewall system 350 does not merge and split transactions at a centralized firewall such as centralized firewall 308 of firewall system 300 of FIG. 3A . Accordingly, firewall system 350 does not experience bottlenecks at individual firewalls or the latency, hotspot, and congestion issues described in connection with FIG. 3A. Each transaction traverses the shortest path from the associated IFW 312 to the associated TFW 314 . Further, each TFW 314 includes firewalls only for the specific targets 304 protected by that TFW 314 . Accordingly, the timing issues described in connection with FIG. 3A are reduced and/or eliminated.With this approach for distributed firewalls, initiator firewall 312 authorizes transactions from associated initiators 302 . If the transaction is authorized, initiator firewall 312 transmits the transaction through node 306 of interconnect 320 to target firewall 314 via the shortest path through interconnect 320 without going through the centralized firewall. Target firewall 314 authorizes transactions for associated target 304 . 
If the transaction is authorized, target firewall 314 forwards the transaction to associated target 304 .While this approach works well for transactions directed at most targets, this approach presents challenges for transactions where the target is one or more of the initiator firewall 312 or the target firewall 314 itself, such as when the bootstrap processor 244 When the initiator firewall 312 and the target firewall 314 are initialized and configured. During initialization, the bootstrap processor 244 of FIG. 2 , or some similar processor, configures the initiator firewall 312 with the memory address space authorized by the associated initiator 302 . Similarly, bootstrap processor 244 configures target firewall 314 with a list of initiators 302 authorized to transmit transactions to target 304 protected by associated target firewall 314 .With the distributed target firewall 314, the bootstrap processor 244 directs transactions to different individual initiator firewalls 312 and target firewalls 314, where each initiator firewall 312 and target firewall 314 is associated with a separate address space. Thus, the bootstrap processor 244 is not presented with a unified software view in the form of a single address space that includes memory address spaces for all initiator firewalls 312 and target firewalls 314 . Instead, the bootstrap processor 244 addresses a given initiator firewall 312 or target firewall 314 by adding an offset associated with the initiator firewall 312 or target firewall 314 to the base address of the initiator firewall 312 or target firewall 314 . In addition, the base address of each initiator firewall 312 or target firewall 314 may change with each iteration of firewall system 350 as the number and address space size of each initiator firewall 312 and target firewall 314 changes. For example, to address target firewall (0,0) 314(0), bootstrap processor 244 directs the transaction to target firewall (0,0) base address + target firewall offset (0,0). Similarly, to address target firewall (2,1) 304(11), bootstrap processor 244 directs transactions to target firewall (2,1) base address + target firewall offset (2,1), and so on. In order to address the distributed initiator firewall 312 and target firewall 314 directly, the bootstrap processor 244 would have to maintain a list of base addresses of the various initiator firewalls 312 and target firewalls 314 . Further, the number and address space size of initiator firewalls 312 and target firewalls 314 change with iterations of firewall system 350 .To alleviate this problem, firewall system 350 includes firewall catcher 322 and firewall remapper 324 . Bootstrap processor 244 addresses the various registers of initiator firewall 312 and target firewall 314 with a unified software view, as if initiator firewall 312 and target firewall 314 had a common base address. The memory addresses generated by the bootstrap processor 244 address registers of the initiator firewall address space and the target firewall address space as offsets from a single firewall base address. This form of address is called an originator-based address.Firewall trapper 322 captures software-initiated transactions directed to initiator firewall 312 and target firewall 314 and forwards the access to the correct initiator firewall address space or target firewall address space. In doing so, firewall catcher 322 temporarily suspends execution of the transaction. 
A transaction specifying an originator-based address cannot be sent directly to either the originator firewall 312 or the target firewall 314 until the memory address is remapped to an address specifying an offset from the base address of the associated target address space. This form of address is called a destination-based address. Firewall trapper 322 traps transactions awaiting remapping. Firewall catcher 322 directs transactions to firewall remapper 324 included in interconnect 320 .Firewall remapper 324 remaps the memory addresses included in the transaction by modifying the memory addresses from initiator-based addresses to target-based addresses. Firewall remapper 324 maintains a lookup table, referred to as a remap table, that maps the base address of each initiator firewall 312 and target firewall 314 memory address space to a single address space with a single base address, and vice versa. After firewall remapper 324 remaps the initiator-based address, the remapped address is now in the form of a target-based address.The remap table is hard-coded in firewall remapper 324 and may change with iterations of firewall system 350 . However, these changes are transparent to the boot processor 244 and other initiators 302 . As a result, bootstrap processor 244 and other initiators 302 may use the same base address to refer to various initiator firewalls 312 and target firewalls 314 , even through iterations of firewall system 350 .For example, to address target firewall (0,0) 314(0), bootstrap processor 244 directs the transaction to firewall base address + target firewall offset (0,0). Firewall remapper 324 remaps this address to an address specified as an offset from the base address for target firewall 314 , such as target firewall (0,0) base address + target firewall offset (0,0). Similarly, to address target firewall (2,1) 314(11), bootstrap processor 244 directs the transaction to firewall base address + target firewall offset (2,1). Firewall remapper 324 remaps this address to an address specified as an offset from the base address for target firewall 314 , such as target firewall (2,1) base address + target firewall offset (2,1). Firewall remapper 324 similarly remaps addresses for transactions to originator firewall 312 .After firewall remapper 324 remaps the memory addresses included in the transaction by modifying the memory addresses from initiator-based addresses to target-based addresses, the transactions can now be processed to configure the relevant initiator firewall 312 or target firewall 314 . Accordingly, firewall trapper 322 forwards the transaction to the address space associated with the modified memory address, thereby allowing the transaction to proceed.Although firewall remapper 324 may introduce some additional traffic into firewall system 350, most of this additional traffic occurs only once at boot time, when bootstrap processor 244 configures initiator firewall 312 and target firewall 314. Thus, after initiator firewall 312 and target firewall 314 are configured, firewall remapper 324 introduces little to no impact on the performance of firewall system 350 during runtime. Furthermore, traffic on firewall system 350 for configuring initiator firewalls 312 and target firewalls 314 is only transmitted to relevant initiator firewalls 312 and target firewalls 314 , rather than all initiator firewalls 312 and target firewalls 314 . 
As a result, the configuration of initiator firewall 312 and target firewall 314 introduces little to no unnecessary traffic on firewall system 350 .It will be appreciated that the interconnections shown in FIGS. 2-3B are illustrative and that variations and modifications are possible. In one example, the interconnection of FIGS. 2-3B is shown with a certain number of nodes, initiators, and targets. However, an interconnect may have any technically feasible number of nodes, initiators, and targets within the scope of this disclosure. In another example, in the interconnection of FIGS. 2-3B , some nodes of the interconnection are associated with initiator firewalls, some nodes are associated with target firewalls, and some nodes are associated with both initiator firewalls and target firewalls. associated, and some nodes are not associated with either the initiator firewall or the target firewall. However, any node interconnected may be associated with an initiator firewall, a target firewall, or both an initiator and target firewall within the scope of the present disclosure. In yet another example, the interconnected nodes of FIGS. 3A-3B are shown as having some connections to each other and/or to a firewall. However, within the scope of the present disclosure, the nodes may be interconnected with each other and/or with the firewall in any technically feasible way and via any technically feasible interconnection structure and/or topology. In yet another example, the initiator and target firewalls of FIG. 3B are shown external to interconnect 320 of firewall system 350 . However, an initiator firewall and a target firewall may be included within interconnect 320 of firewall system 350 within the scope of the present disclosure.4A-4B set forth a flowchart of method steps for processing transactions via the interconnect 320 and distributed firewall of FIG. 3B, according to various embodiments. Although the method steps are described in conjunction with the systems of FIGS. 1-3B , those skilled in the art will understand that any system configured to perform the method steps in any order is within the scope of the present disclosure.As shown, method 400 begins at step 402, where initiator 302 issues a transaction. Initiator 302 includes any system component that can initiate a transaction against a target, such as a load operation or a store operation. Initiator 302 includes, but is not limited to, CPU complex 242 , boot processor 244 , power management processor 246 , camera processor 248 , security processor 250 , and debug processor 252 . Targets include, but are not limited to, PCIe targets 260 and peripheral bus targets 262 . Initiator 302 executes a transaction directed to target 304 . In general, an initiator 302 passes through an interconnect 320 and directs a transaction (such as a load operation or a store operation) to a target 304 . In some examples, initiator 302 may perform a load operation to retrieve one or more data values from memory locations and/or registers included in target 304 . Similarly, in some examples, initiator 302 may perform a store operation to store one or more data values to memory locations and/or registers included in target 304 .At step 404 , firewall catcher 322 included in interconnect 320 determines whether the transaction is directed to originator firewall 312 or target firewall 314 . 
If firewall catcher 322 determines that the transaction is directed to either initiator firewall 312 or target firewall 314, the initiator-based address included in the transaction should be remapped to a target-based address. While initiator 302 may generate transactions directed to various targets 304 scattered throughout firewall system 350, when initiator 302 generates transactions directed to either initiator firewall 312 or target firewall 314, as in initializing and configuring initiator firewall 312 With target firewalls 314 , initiator 302 addresses the target address space as an offset from a single firewall base address, independent of the number of targets 304 or the structure and/or topology of interconnect 310 . This form of address is called an originator-based address. For example, to address the firewall of target (0,0) 304(0), initiator 302 directs the transaction to firewall base address + target firewall offset (0,0). Similarly, to address the firewall of target (2,1) 304(11), initiator 302 directs the transaction to firewall base address + target firewall offset (2,1), and so on.In such a case, method 400 proceeds to step 406 where firewall trapper 322 captures the transaction. In doing so, firewall catcher 322 temporarily suspends execution of the transaction. A transaction specifying an originator-based address cannot be sent directly to the originator firewall 312 or target firewall 314 until the memory address is remapped to an address specifying an offset from the base address of the associated firewall address space. This form of address is called a destination-based address. Firewall trapper 322 traps transactions awaiting remapping. At step 408 , firewall catcher 322 directs the transaction to firewall remapper 324 included in interconnect 320 .At step 410, firewall remapper 324 remaps the memory address included in the transaction by modifying the memory address from an initiator-based address to a target-based address. Firewall remapper 324 maintains a lookup table, referred to as a remap table, that maps the base address of each initiator firewall 312 and target firewall 314 memory address space to a single address space with a single base address, and vice versa. In this manner, initiator 302 addresses each of initiator firewall 312 and target firewall 314 with a unified software view, as if initiator firewall 312 and target firewall 314 had a common base address. The memory address generated by initiator 302 addresses the target address space as an offset from a single firewall base address. This form of address is called an originator-based address. After firewall remapper 324 remaps the initiator-based address, the remapped address now addresses the target address space as an offset from the base address of the associated target address space. This form of address is called a destination-based address.The remap table is hard-coded in firewall remapper 324 and may change with iterations of firewall system 350 . However, these changes are transparent to the initiator 302. As a result, initiator 302 may use the same base address to refer to different initiator firewalls 312 and target firewalls 314 , even through iterations of firewall system 350 .For example, to address the firewall of target (0,0) 304(0), initiator 302 directs the transaction to firewall base address + target firewall offset (0,0). 
Firewall remapper 324 remaps the address to an address specified as an offset from the base address of target firewall 314 , such as target (0,0) base address+target firewall offset(0,0). Similarly, to address the firewall of target (2,1) 304(11), initiator 302 directs the transaction to firewall base address + target firewall offset (2,1). Firewall remapper 324 remaps this address to an address specified as an offset from the base address for target firewall 314 , such as target (2,1) base address + target firewall offset (2,1). Firewall remapper 324 similarly remaps addresses for transactions to originator firewall 312 .At step 412, firewall catcher 322 forwards the transaction to the address space associated with the modified memory address. After firewall remapper 324 remaps the memory addresses included in the transaction by modifying the memory addresses from initiator-based addresses to target-based addresses, the transactions can now be processed by the relevant initiator firewall 312 or target firewall 314 . Accordingly, firewall trapper 322 forwards the transaction to the address space associated with the modified memory address, thereby allowing the transaction to proceed.At step 414 , interconnect 320 communicates the transaction to either initiator firewall 312 or target firewall 314 included in firewall system 350 corresponding to the transaction. , interconnect 320 transports transactions that now include the target-based address mapped by firewall remapper 324 at step 408 . Interconnect 320 sends the transaction from firewall remapper 324 to target firewall 314 of target 304 via node 306 of interconnect 320 .In step 416, initiator 302 completes the transaction. Method 400 then terminates. Alternatively, method 400 proceeds to step 402 to process additional transactions.Returning to step 404, if firewall trapper 322 determines that the transaction is not directed to a firewall, then firewall trapper 322 does not trap the transaction. Instead, method 400 proceeds to step 418 , where initiator firewall 312 included in firewall system 350 authorizes transactions issued by initiator 302 . Initiator firewall 312 in turn authorizes the transaction. Firewalls with initiator affinity are referred to as initiator firewalls 312 . Initiator firewall 312 is placed adjacent to corresponding initiators 302 and is used to restrict or sandbox each corresponding initiator 302 to address a limited range of addresses. In some embodiments, multiple initiator firewalls 312 may be combined and placed at common node 306 to sandbox multiple initiators 302 .Initiator firewall 312 for initiator 302 performs an authorization function to determine whether initiator 302 is authorized to access target 304 specified by the transaction. In doing so, initiator firewall 312 of initiator 302 checks the memory address specified by the transaction. Initiator firewall 312 determines whether initiator 302 is authorized to direct the transaction to the memory address space that includes the memory address specified by the transaction. If initiator firewall 312 determines that initiator 302 is not authorized to direct a transaction to a memory address space, initiator firewall 312 blocks the transaction. 
On the other hand, if initiator firewall 312 determines that initiator 302 is authorized to direct the transaction to the memory address space, then initiator firewall 312 allows the transaction to proceed.At step 420 , interconnect 320 transmits the transaction to target firewall 314 included in firewall system 350 corresponding to target 304 . Interconnect 320 transmits transactions from initiator firewall 312 of initiator 302 directly to target firewall 314 of target 304 via nodes 306 of interconnect 320 . Transactions follow the shortest path from initiator firewall 312 for initiator 302 to target firewall 314 for target 304 . The shortest path may be determined via any technically feasible technique, such as Manhattan distance techniques, Euclidean distance techniques, and the like.At step 422 , target firewall 314 authorizes the transaction generated by initiator 302 as described in connection with step 416 . Firewalls with target affinity are referred to as target firewalls 314 . Target firewalls 314 are placed adjacent to corresponding targets 304 that are protected by respective target firewalls 314 . In some embodiments, target firewall 314 may protect multiple targets by implementing a firewall for each of the multiple targets protected by target firewall 314 . In such an embodiment, the target firewall 314 scales to the number of targets 304 associated with the corresponding node 306 and the different functionality of each target 304 .Target firewall 314 performs an authorization function to determine whether initiator 302 is authorized to direct transactions to target 304 protected by target firewall 314 . If target firewall 314 determines that initiator 302 is not authorized to direct the transaction to target 304, target firewall 314 blocks the transaction. On the other hand, if target firewall 314 determines that initiator 302 is authorized to direct the transaction to target 304, target firewall 314 allows the transaction to proceed. Target firewall 314 forwards the transaction to target 304 for processing. In step 424, initiator 302 completes the transaction. Method 400 then terminates. Alternatively, method 400 proceeds to step 402 to process additional transactions.In summary, various embodiments include techniques for processing transactions via a computer system interconnected with a distributed firewall. Using the disclosed technology, firewalls are distributed between initiators and targets in computer system interconnections. Place firewalls with initiator affinity closer to the corresponding initiators. These firewalls (referred to as originator firewalls) restrict transactions from corresponding originators to specified and limited address ranges. A firewall with target affinity is placed closer to the corresponding target. The target firewall restricts transactions to only those from initiators authorized to access the corresponding target. Generally, one of the initiators is responsible for configuring both the initiator firewall and the target firewall. Before configuration, the initiator and target firewalls are transparent, allowing all transactions to pass through. After the initiator and target firewalls are configured, the initiator and target firewalls can detect and authorize or block access from other initiators.Additionally, the firewall remapper performs mapping functions during the initialization and configuration of the initiator and target firewalls. Firewall Remapper assists in the programming and configuration of initiator and target firewalls. 
A firewall remapper maps individual initiator and target firewalls in a computer system interconnect into a single unified view. The unified view consolidates the memory address spaces of the various initiator and target firewalls into a single address space that includes the memory address spaces of all initiator and target firewalls. As a result, the initiator does not need to manage separate address spaces for each initiator firewall and target firewall, but instead accesses all firewalls via a unified view from a software perspective.At least one technical advantage of the disclosed technique over the prior art is that, with the disclosed technique, firewalls are distributed such that each initiator is associated with a separate initiator firewall and each target is associated with a target firewall. Each transaction is routed along a path directly from the initiator firewall associated with the initiator to the target firewall associated with the target. Because transactions are not routed through a centralized firewall, transactions are processed more efficiently relative to existing methods. Another advantage of the disclosed technology is that transactions are remapped such that applications executing on the initiator are presented with a unified view of the firewall regardless of changes to the specific configuration of the interconnect. Therefore, programmers do not need to modify applications every time the architecture of the interconnect changes. These advantages represent one or more technical improvements over prior art methods.Any and all combinations of any claim elements recited in any claim and/or any elements described in this application in any form are within the contemplated scope of the present disclosure and protection.The description of various embodiments has been presented for purposes of illustration, and is not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.Aspects of this embodiment can be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.), or an embodiment combining software and hardware aspects, which may all be collectively referred to herein as "module" or "system". Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example and without limitation, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the foregoing. 
More specific examples (non-exhaustive list) of computer readable storage media would include the following: electrical connection with one or more wires, portable computer disk, hard disk, random access memory (RAM), read only memory ( ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above. In the context of this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine such that, via the instructions executed by the processor of the computer or other programmable data processing apparatus, the flowchart and/or the implementation of the functions/acts specified in the block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, a special purpose processor or a field programmable gate array.The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified function or action , or may be implemented by a combination of dedicated hardware and computer instructions.While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the present disclosure can be devised without departing from the essential scope thereof, the scope of which is determined by the following claims. |
The invention relates to microelectronic devices, and related memory devices, electronic systems, and methods. A microelectronic device comprises a stack structure comprising alternating conductive structures and insulating structures arranged in tiers, each of the tiers individually comprising a conductive structure and an insulating structure, strings of memory cells vertically extending through the stack structure, the strings of memory cells comprising a channel material vertically extending through the stack structure, and conductive rails laterally adjacent to the conductive structures of the stack structure. The conductive rails comprise a material composition that is different than a material composition of the conductive structures of the stack structure. |
1.A microelectronic device comprising:a stacked structure comprising alternating conductive and insulating structures arranged in layers, each of the layers comprising a conductive structure and an insulating structure, respectively;a string of memory cells extending vertically through the stack, the string of memory cells including channel material extending vertically through the stack; andA conductive track laterally adjacent to the conductive structure of the stacked structure, the conductive track comprising a material composition different from that of the conductive structure of the stacked structure.2.1. The microelectronic device of claim 1, wherein the conductive track is in direct physical contact with the conductive structure, the conductive track extending horizontally beyond a horizontal boundary of the insulating structure.3.2. The microelectronic device of claim 1, wherein the conductive rail comprises a T-shaped rail of conductive material, a first portion of the T-shaped rail is positioned laterally beyond an outer sidewall of the insulating structure, and the T-shaped A second portion of the rail is located vertically between portions of the insulating structure, the second portion having a height substantially equal to the height of the conductive structure.4.3. The microelectronic device of any one of claims 1-3, wherein the conductive track has a greater electrical conductivity than the conductive structure.5.The microelectronic device of any one of claims 1 to 3, wherein:The conductive rail further includes one or more of phosphorus, arsenic, antimony, bismuth, boron, aluminum, gallium, carbon, fluorine, chlorine, bromine, and argon; andThe conductive structure of the stacked structure is substantially free of fluorine.6.3. The microelectronic device of any one of claims 1-3, further comprising a conductive liner material between the insulating structure and the conductive structure, wherein the conductive track comprises tungsten, and the conductive track The liner material includes titanium nitride.7.3. The microelectronic device of any one of claims 1-3, further comprising a dielectric barrier material between and in direct contact with the insulating structure and the conductive structure.8.3. The microelectronic device of any one of claims 1-3, further comprising a dielectric material within a replacement gate trench extending through the stacked structure, wherein all of the insulating structures are laterally adjacent to the insulating structure. The lateral dimension of the dielectric material is greater than the lateral dimension of the dielectric material laterally adjacent to the conductive track.9.A memory device comprising:a stacked structure comprising alternating layers of conductive structures and insulating structures;guide posts extending vertically through the stack structure, each guide post including a channel structure including a semiconducting material extending vertically through the stack structure; andA conductive track extending vertically along sidewalls of the conductive structure and the insulating structure of the stack structure, the conductive track having a greater electrical conductivity than the conductive structure.10.9. 
The memory device of claim 9, wherein the conductive structures of the stacked structure are relatively close to the guide posts and the conductive rails are relatively far from the guide posts, the conductive structures being composed of a first material A conductive material is formed, and the conductive track is formed of another conductive material having a different composition of the second material.11.The memory device of claim 9 or claim 10, further comprising a data line structure overlying the stacked structure and a source structure underlying the stacked structure, the conductive posts being electrically connected to the stacked structure data line structure and the source structure.12.10. A memory device as claimed in claim 9 or claim 10, wherein the conductive rails extend vertically along portions of the sidewalls of the insulating structures of vertically adjacent pairs.13.10. The memory device of claim 9 or claim 10, wherein the conductive rails each exhibit a height equal to or greater than the height of each of the conductive structures in the stacked structures, respectively.14.10. The memory device of claim 9 or claim 10, wherein the conductive rails comprise tungsten and the conductive structures of the stacked structure comprise one or more of titanium, ruthenium, aluminium and molybdenum.15.A method of forming a microelectronic device, the method comprising:forming a stack structure comprising vertically alternating conductive structures and insulating structures;forming a memory string including channel material and at least one dielectric material extending vertically through the stack; andConductive rails are formed along outer sidewalls of the conductive structures of the stacked structure, and the conductive rails include a material composition different from that of the conductive structures of the stacked structure.16.The method of claim 15, wherein:forming the stacked structure includes forming the conductive structure of the stacked structure using atomic layer deposition; andForming the conductive tracks includes forming the conductive tracks using one or more of chemical vapor deposition and atomic layer deposition.17.16. The method of claim 15, wherein forming the conductive rails comprises:removing portions of the conductive material of the conductive structures that are exposed in the openings formed in the stacked structure such that the conductive structures are laterally recessed relative to the insulating structures; andAnother conductive material of the conductive track is grown within the opening, the conductive track extending from the conductive structure into the opening and laterally beyond the outer sidewalls of the insulating structure.18.18. The method of claim 17, wherein growing the other conductive material of the conductive track within the opening comprises growing the another conductive material to be at least partially with the outer sidewall of the insulating structure overlapping.19.18. The method of claim 17, further comprising forming an inhibitory material to be absorbed within the insulating structure and on the outer sidewalls of the conductive structure prior to growing the other conductive material of the conductive track at least one of the formation accelerators.20.19. 
The method of any one of claims 15-19, wherein forming the stacked structure comprising alternating conductive and insulating structures comprises forming at least one of a conductive liner material and a dielectric barrier material around the conductive structure One.21.19. The method of any one of claims 15-19, wherein forming the conductive track comprises forming grooves in opposite corners of the insulating structure, and forming the conductive track to exhibit a substantially rectangular shape cross-sectional shape, at least a portion of the conductive track is formed within the groove of the insulating structure.22.16. The method of claim 15 or claim 16, wherein forming the conductive track comprises:forming a polysilicon material laterally adjacent to the conductive structure; andConverting at least some of the polysilicon material to a conductive material including beta phase tungsten.23.An electronic system comprising:input device;output device;a processor device operably coupled to the input device and the output device; anda memory device operably coupled to the processor device and comprising at least one microelectronic device comprising:a string of memory cells extending vertically through a stacked structure comprising a vertically alternating sequence of substantially tungsten-free insulating and conductive structures arranged in layers; andAn additional conductive structure horizontally adjacent to the conductive structure of the stacked structure and comprising beta phase tungsten, the additional conductive structure having a vertical height greater than the vertical height of the conductive structure of the stacked structure.24.24. The electronic system of claim 23, wherein the additional conductive structures partially surround outer sidewalls of pairs of insulating structures that are vertically adjacent to each other.25.An electronic system as claimed in claim 23 or claim 24, wherein the memory device comprises a 3DNAND flash memory device. |
Microelectronic devices and related memory devices, electronic systems and methodspriority claimThis application claims US Patent Application No. 16/990,580, "Microelectronic Devices Including Conductive Rails, and Related Memory Devices, Electronic Systems, and Methods)" of the submission date.technical fieldIn various embodiments, the present disclosure relates generally to the field of microelectronic device design and manufacture. More specifically, the present disclosure relates to microelectronic devices and apparatuses including conductive tracks adjacent to conductive structures in conductive layers, and to related memory devices, electronic systems, and methods of forming microelectronic devices.Background techniqueA continuing goal of the microelectronics industry is to increase the memory density (eg, the number of memory cells per memory die) of memory devices such as non-volatile memory devices (eg, NAND flash memory devices). One way to increase memory density in non-volatile memory devices is to utilize a vertical memory array (also known as a "three-dimensional (3D) memory array") architecture. Conventional vertical memory arrays include vertical memory strings extending through openings in one or more conductive stack structures comprising layers of conductive structures and insulating structures. Each vertical memory string may include at least one selection device coupled in series with the series combination of vertically stacked memory cells. Compared to structures with conventional planar (eg, two-dimensional) transistor arrangements, this configuration permits a greater number of switching devices (eg, transistors) to be located in the die area by building the array up (eg, vertically) on the die units (that is, the length and width of the active surface consumed).Vertical memory array architectures generally include electrical connections between conductive structures and access lines (eg, word lines) of layers of conductive stack structures of memory devices such that memory cells of a vertical memory array can be uniquely selected for use for write, read or erase operations. One method of forming such electrical connections includes forming so-called "step" (or "step") structures at the edges (eg, horizontal ends) of the layers of the conductive stack of the memory device. The stepped structure includes individual "steps" that define contact regions of the conductive structure on which the conductive contact structure can be positioned to provide electrical access to the conductive structure.As vertical memory array technology has advanced, additional memory densities have been provided by forming vertical memory arrays that include stacks that include additional layers of conductive structures, and thus at various steps associated therewith. Additional stepped structures and/or additional steps are included in the structure. As the number of layers of conductive structures increases, processing conditions for forming vertical memory strings extending through the stack become increasingly difficult. Furthermore, as the thickness of each layer is decreased to increase the number of layers within a given height of the stack, the resistivity of the conductive structure can increase and the conductivity can exhibit a corresponding decrease. 
However, a reduction in the conductivity of the conductive structures may affect the performance of the memory cell string.SUMMARY OF THE INVENTIONEmbodiments described herein include microelectronic devices and apparatus that include conductive tracks adjacent to conductive structures in conductive layers, and relate to related memory devices, electronic systems, and methods of forming microelectronic devices. According to one embodiment described herein, a microelectronic device includes: a stacked structure including alternating conductive and insulating structures arranged in layers, each of the layers including a conductive structure and an insulating structure, respectively; a memory a string of cells extending vertically through the stack, the string of memory cells including channel material extending vertically through the stack; and conductive rails laterally adjacent the conductive tracks of the stack The conductive track includes a material composition different from that of the conductive structure of the stacked structure.According to additional embodiments described herein, a memory device includes: a stack structure including layers of alternating conductive structures and insulating structures; guide pillars extending vertically through the stack structure, each guide pillar including a channel structure including a semiconductive material extending vertically through the stack structure; and a conductive track vertically along sidewalls of the conductive structure and the insulating structure of the stack structure By extension, the conductive track has a greater electrical conductivity than the conductive structure.Furthermore, according to additional embodiments described herein, a method of forming a microelectronic device includes: forming a stack including vertically alternating conductive and insulating structures; forming a channel including a channel extending vertically through the stack a memory string of material and at least one dielectric material; and forming conductive tracks along outer sidewalls of the conductive structures of the stacked structure, the conductive tracks comprising a different material composition than the conductive structures of the stacked structure material composition.According to other embodiments described herein, an electronic system includes: an input device; an output device; a processor device operably coupled to the input device and the output device; and a memory device operably coupled to the processor device and including at least one microelectronic device including: a string of memory cells extending vertically through a stacked structure including substantially free tungsten arranged in layers vertical alternating sequences of insulating and conductive structures; and additional conductive structures horizontally adjacent to said conductive structures of said stack structure and comprising beta phase tungsten, said additional conductive structures having a greater vertical height than said stack The vertical height of the conductive structure of the structure.Description of drawings1A-1J are simplified cross-sectional views illustrating a method of forming a microelectronic device in accordance with an embodiment of the present disclosure;2 is a partially cutaway perspective view of a microelectronic device according to an embodiment of the present disclosure;3 is a block diagram of an electronic system according to an embodiment of the present disclosure; and4 is a block diagram of a processor-based system according to 
an embodiment of the present disclosure.detailed descriptionThe following description provides specific details, such as material compositions, shapes, and sizes, in order to provide a thorough description of embodiments of the present disclosure. However, one of ordinary skill in the art will understand that embodiments of the present disclosure may be practiced without having to employ these specific details. Indeed, embodiments of the present disclosure may be practiced in conjunction with conventional microelectronic device fabrication techniques employed in the industry. Additionally, the description provided below does not form a complete process flow for fabricating a microelectronic device (eg, a memory device such as a 3DNAND flash memory device). The structures described below do not form a complete microelectronic device. Only those process acts and structures necessary to understand embodiments of the present disclosure are described in detail below. Additional actions to form a complete microelectronic device can be performed by conventional fabrication techniques.The drawings presented herein are for illustrative purposes only and are not intended to be actual views of any particular material, component, structure, device, or system. Variations in the shapes depicted in the drawings due, for example, to manufacturing techniques and/or tolerances are to be expected. Therefore, the embodiments described herein should not be construed as limited to the particular shapes or regions shown, but to encompass deviations from shapes that may result, for example, by manufacturing. For example, regions illustrated or described as box-shaped may have rough and/or nonlinear features, and regions illustrated or described as circular may include some rough and/or linear features. Furthermore, the sharp corners shown may be rounded, and vice versa. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the precise shape of a region and are not intended to limit the scope of the claims. The drawings are not necessarily drawn to scale. In addition, common elements between the figures may retain the same numerical designation.As used herein, the terms "vertical", "longitudinal", "horizontal" and "lateral" refer to the principal plane of a structure and are not necessarily bounded by the Earth's gravitational field. A "horizontal" or "transverse" direction is a direction substantially parallel to the main plane of the structure, and a "vertical" or "longitudinal" direction is a direction substantially perpendicular to the main plane of the structure. The principal plane of the structure is defined by the surface of the structure having a relatively large area compared to the other surfaces of the structure. With reference to the figures, a "horizontal" or "lateral" direction may be perpendicular to the indicated "Z" axis, and may be parallel to the indicated "X" axis and/or parallel to the indicated "Y" axis; and "vertical" or The "longitudinal" direction may be parallel to the indicated "Z" axis, may be perpendicular to the indicated "X" axis, and may be perpendicular to the indicated "Y" axis.As used herein, referring to an element as being "on" or "over" another element means and includes that the element is directly on top of, directly adjacent to (eg, directly laterally adjacent to, directly adjacent to, vertically adjacent to, directly below, or in direct contact with, another element. 
It also includes the element being indirectly on top of another element, indirectly adjacent (eg, indirectly laterally adjacent, indirectly vertically adjacent) to another element, indirectly under or near another element, with other elements in between. element. In contrast, when an element is referred to as being "directly on" or "directly adjacent to" another element, there are no intervening elements present.As used herein, for example "below", "below", "lower", "bottom", "above", "upper", "top", "front", "rear", "left side" Spatially relative terms such as ", "right side" may be used herein for ease of description to describe one element or feature's relationship to another element or feature as illustrated in the figures. Unless otherwise specified, spatially relative terms are intended to encompass different orientations of the material in addition to the orientation depicted in the figures. For example, if the material in the figures is reversed, elements described as "below" or "beneath" or "under" or "on the bottom" of other elements or features would then be oriented "on the bottom" of the other elements or features above" or "on top". Thus, the term "under" can encompass both an orientation of above and below depending on the context in which the term is used, as will be apparent to one of ordinary skill in the art. The material may be otherwise oriented (eg, rotated 90 degrees, upside down, flipped) and the spatially relative descriptors used herein interpreted accordingly.As used herein, features (eg, regions, structures, devices) described as "adjacent" to each other refer to and include the disclosed identifiers (or identities) that are located in the closest proximity (eg, closest) to each other feature. Additional features (eg, additional regions, additional structures, additional devices) that do not match the disclosed identities (or identities) of "adjacent" features may be positioned between "adjacent" features. In other words, "adjacent" features may be positioned directly adjacent to each other such that no other features intervene between the "adjacent" features; or "adjacent" features may be positioned indirectly adjacent to each other such that there are no other features than at least At least one feature of an identification other than the identification to which an "adjacent" feature is associated is positioned between "adjacent" features. Accordingly, features described as being "vertically adjacent" to each other refer to and include features of the disclosed identifiers (or identities) that are located in the vertical closest proximity (eg, vertically closest) to each other. 
Furthermore, features described as being "horizontally adjacent" to each other refer to and include features of the disclosed identifiers (or identities) that are located horizontally closest (eg, horizontally closest) to each other.As used herein, the term "spacing" refers to the distance between the same point in two adjacent (ie, adjacent) features.As used herein, the singular forms "a/an" and "the (the)" are intended to include the plural forms as well, unless the context clearly dictates otherwise.As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.As used herein, the term "substantially" with respect to a given parameter, characteristic or condition means and includes what would be understood by one of ordinary skill in the art that the given parameter, characteristic or condition conforms to a degree of variance (eg, within an acceptable tolerance) )Degree. By way of example, a parameter, property or condition may be satisfied by at least 90.0%, at least 95.0%, at least 99.0%, at least 99.9%, or even 100.0%, depending on the particular parameter, property or condition that is substantially satisfied.As used herein, "about" or "approximately" with respect to a numerical value of a particular parameter includes both the numerical value and the degree of variation in the numerical value that would be understood by those of ordinary skill in the art to be within acceptable tolerances for the particular parameter. For example, "about" or "approximately" with respect to a numerical value can include additional numerical values within the range of 90.0% to 108.0% of the numerical value, such as within the range of 95.0% to 105.0% of the numerical value, within the range of 97.5% to 108.0% of the numerical value Within the range of 102.5%, within the range of 99.0% to 101.0% of the value, within the range of 99.5% to 100.5% of the value, or within the range of 99.9% to 100.1% of the value.As used herein, the term "memory device" refers to and includes a microelectronic device that exhibits memory functionality, but is not necessarily limited to memory functionality. 
In other words, by way of example only, the term "memory device" refers to and includes not only conventional memory (eg, conventional volatile memory, such as conventional dynamic random access memory (DRAM); conventional non-volatile memory, such as conventional NAND memory), but also includes application specific integrated circuits (ASICs) (eg, systems on chips (SoCs)), microelectronic devices that combine logic and memory, and graphics processing units (GPUs) that incorporate memory.As used herein, "conductive material" refers to and includes a conductive material, such as one or more of the following: metals (eg, tungsten (W), titanium (Ti), molybdenum (Mo), niobium (Nb), Vanadium (V), Hafnium (Hf), Tantalum (Ta), Chromium (Cr), Zirconium (Zr), Iron (Fe), Ruthenium (Ru), Osmium (Os), Cobalt (Co), Rhodium (Rh), iridium (Ir), nickel (Ni), palladium (Pa), platinum (Pt), copper (Cu), silver (Ag), gold (Au), aluminum (Al)), alloys (eg, Co-based alloys, Fe based alloys, Ni based alloys, Fe and Ni based alloys, Co and Ni based alloys, Fe and Co based alloys, Co and Ni and Fe based alloys, Al based alloys, Cu based alloys, Magnesium (Mg)-based alloys, Ti-based alloys, steel, mild steel, stainless steel), conductive metal-containing materials (eg, conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides) , and conductively doped semiconductor materials (eg, conductively doped polysilicon, conductively doped germanium (Ge), conductively doped silicon germanium (SiGe)). In addition, "conductive structure" refers to and includes structures formed of and including a conductive material.As used herein, "insulating material" refers to and includes an electrically insulating material, such as one or more of the following: at least one dielectric oxide material (eg, silicon oxide (SiOx), phosphosilicate glass, boron Silicate glass, borophosphosilicate glass, fluorosilicate glass, aluminum oxide (AlOx), hafnium dioxide (HfOx), niobium oxide (NbOx), titanium oxide (TiOx), zirconium oxide (ZrOx), oxide one or more of tantalum (TaOx) and magnesium oxide (MgOx), at least one dielectric nitride material (eg, silicon nitride (SiNy)), at least one dielectric oxynitride material (eg, silicon oxynitride) (SiOxNy)), and at least one dielectric carbon oxynitride material (eg, silicon carbon oxynitride (SiOxCzNy)). A chemical formula (eg, SiOx, AlOx, HfOx, NbOx, TiOx, SiNy, SiOxNy, SiOxCzNy) containing "x", "y", and "z" herein is meant for another element (eg, Si, Al, Hf, Nb) , Ti) containing an average ratio of "x" atoms of one element, "y" atoms of another element, and "z" atoms of additional elements (if present) for each atom of the material. Because chemical formulas represent relative atomic ratios rather than strict chemical structures, insulating materials may include one or more stoichiometric compounds and/or one or more non-stoichiometric compounds, and "x", "y" and "z" " (if present) may or may not have an integer value. As used herein, the term "non-stoichiometric compound" refers to and includes a compound having a composition of elements that cannot be represented by a ratio of well-defined natural numbers and that violates the law of proportionality. 
In addition, the "insulating structure" refers to and includes a structure formed of and including an insulating material.Unless otherwise specified, the materials described herein can be formed by conventional techniques including, but not limited to, spin coating, thick layer coating (eg, spray coating), chemical vapor deposition (CVD), atomic layer deposition (ALD), plasma Volume Enhanced ALD, Physical Vapor Deposition (PVD), Plasma Enhanced Chemical Vapor Deposition (PECVD) or Low Pressure Chemical Vapor Deposition (LPCVD). Alternatively, the material can be grown in situ. Depending on the particular material to be formed, techniques for depositing or growing the material can be selected by those skilled in the art. Unless context dictates otherwise, removal of material may be accomplished by any suitable technique, including but not limited to etching, atomic layer removal processes, abrasive planarization (eg, chemical mechanical planarization), or other known methods.FIGS. 1A-1J illustrate a method of forming a microelectronic device structure of a microelectronic device (eg, a memory device such as a 3DNAND flash memory device) in accordance with embodiments of the present disclosure, wherein FIGS. 1D and 1J are FIGS. 1C and 1I , respectively. magnified section. Referring to Figure 1A, a microelectronic device structure 100 may be formed to include a stacked structure 101 including a vertical (eg, in the Z-direction) alternating sequence of insulating structures 104 and other insulating structures 106 arranged in layers 102. Each layer 102 may respectively include at least one of the insulating structures 104 that is directly vertically adjacent to at least one of the other insulating structures 106 .In some embodiments, the number (eg, amount) of layers 102 of stack structure 101 is in the range of 32 layers 102 to 256 layers 102 . In some embodiments, the stack structure 101 includes 128 layers 102 . However, the present disclosure is not limited thereto, and the stacked structure 101 may include a different number of layers 102 . The stack structure 101 may include at least one (eg, one, two, more than two) stack structures vertically overlying the source structures 108 . For example, the stack structure 101 may include a single-stack structure or a dual-stack structure of a 3D memory device (eg, a 3D NAND flash memory device).The insulating structure 104 may be formed of, for example, at least one dielectric material and include at least one dielectric material, such as at least one dielectric oxide material (eg, SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate) (one or more of acid glass, fluorosilicate glass, AlOx, HfOx, NbOx, TiOx, ZrOx, TaOx, and MgOx). In some embodiments, insulating structure 104 is formed of and includes SiO2.The other insulating structures 106 may be formed from and include insulating materials that are different from the insulating structures 104 and exhibit etch selectivity relative to the insulating structures 104 . In some embodiments, the other insulating structures 106 are formed of at least one dielectric nitride material (eg, SiNy) or at least one oxynitride material (eg, SiOxNy) and include at least one dielectric nitride material or at least one An oxynitride material. In some embodiments, the other insulating structures 106 include Si3N4.Stacked structure 101 may be formed on or over source structure 108 (eg, source plate). 
The source structure 108 may be formed of and include a conductive material, such as doped with at least one P-type dopant (eg, one or more of boron, aluminum, and gallium) or at least one N-type dopant ( For example, arsenic, phosphorous, antimony) semiconductor materials (eg, polysilicon).With continued reference to FIG. 1A , guide posts 110 of material may be formed to extend vertically (eg, in the Z-direction) through the stack structure 101 . As will be described herein, the material of the guide posts 110 can be used to form memory cells of the memory device after subsequent processing of the microelectronic device structure. The conductive pillars 110 may each include an insulating material 112, a channel material 114 horizontally adjacent to the insulating material 112, a tunnel dielectric material (also referred to as "tunneling dielectric material") 116 horizontally adjacent to the channel material 114, a horizontal Memory material 118 adjacent to tunnel dielectric material 116 and a dielectric blocking material (also referred to as "charge blocking material") 120 horizontally adjacent to memory material 118 . The dielectric barrier material 120 may be horizontally adjacent to one level in the other insulating structure 106 of one layer 102 in the stack structure 101 . Channel material 114 may be inserted horizontally between insulating material 112 and tunnel dielectric material 116; tunnel dielectric material 116 may be inserted horizontally between channel material 114 and memory material 118; memory material 118 may be inserted horizontally between between the tunnel dielectric material 116 and the dielectric barrier material 120;The insulating material 112 may be formed of and include at least one insulating material. In some embodiments, insulating material 112 is formed of and includes a dielectric oxide material, such as SiO2. In additional embodiments, insulating material 112 includes an air gap.The channel material 114 can be made of at least one semiconductor material (at least one elemental semiconductor material, such as polysilicon; at least one III-V synthetic semiconductor material, at least one II-VI synthetic semiconductor material, at least one organic semiconductor material, GaAs, One or more of InP, GaP, GaN, other semiconductor materials) and at least one oxide semiconductor material are formed from and include the one or more. In some embodiments, the channel material 114 includes amorphous silicon or polysilicon. In some embodiments, the channel material 114 includes a doped semiconductor material.Tunnel dielectric material 116 may be formed from and include a dielectric material by which charge tunneling may be performed under suitable electrical bias conditions, such as by hot carrier injection or by Fowler-Nordheim tunneling induced charge transfer. As a non-limiting example, the tunnel dielectric material 116 may be formed from and include one or more of a dielectric oxide material, a dielectric nitride material, and a dielectric oxynitride material. In some embodiments, the tunnel dielectric material 116 includes SiO2. In other embodiments, the tunnel dielectric material 116 includes SiOxNy.The memory material 118 may include a charge trapping material or a conductive material. 
As a non-limiting example, the memory material 118 may be formed from and include one or more of the following: silicon nitride, silicon oxynitride, polysilicon (doped polysilicon), conductive materials (eg, tungsten, molybdenum) , tantalum, titanium, platinum, ruthenium and their alloys, or metal suicides such as tungsten, molybdenum, tantalum, titanium, nickel, cobalt, or combinations thereof), and semiconducting materials (e.g., polycrystalline semiconducting conductive materials, amorphous semiconductor materials). In some embodiments, the memory material 118 includes Si3N4.The dielectric barrier material 120 may be formed of and include a dielectric material such as one of a dielectric oxide (eg, SiOx), a dielectric nitride (eg, SiNy), and a dielectric oxynitride (eg, SiOxNy). or more, or another dielectric material. In some embodiments, the dielectric barrier material 120 includes SiOxNy.In some embodiments, tunnel dielectric material 116, memory material 118, and dielectric barrier material 120 together may include structures configured to trap charge, such as oxide-nitride-oxide (ONO) structures. In some such embodiments, tunnel dielectric material 116 includes SiO2, memory material 118 includes Si3N4, and dielectric barrier material 120 includes SiO2.Referring to FIG. 1B , trenches 122 , which may also be referred to as “slits” or “replacement gate trenches,” may be formed through stack structure 101 . In some embodiments, trench 122 is formed to extend vertically completely through stack structure 101 and expose portions of source structure 108 . The trenches 122 may be formed by, for example, exposing the microelectronic device structure 100 to one or more etchants to remove portions of the insulating structures 104 and other insulating structures 106 of the stack structure 101 . Slots 122 may divide microelectronic device structure 100 into separate blocks, such as first block 124 and second block 126 . As shown in FIG. 1B , the first block 124 and the second block 126 may each include a plurality (eg, many, more than one) of the guide posts 110 .Referring to FIG. 1C , after the trenches 122 are formed, the other insulating structures 106 ( FIG. 1B ) of the stack structure 101 may at least partially (eg, substantially) use the trenches 122 through a so-called "replace gate" or "gate hold" process. remove. As a non-limiting example, the other insulating structures 106 may be at least partially removed by exposing the other insulating structures 106 to at least one wet etchant including one or more of phosphoric acid, sulfuric acid, hydrochloric acid, nitric acid, or another material. In some embodiments, the other insulating structures 106 are at least partially removed by exposing the other insulating structures 106 to so-called "wet nitride strips" comprising a wet etchant including phosphoric acid.As shown in FIG. 1C , after the other insulating structures 106 are removed, conductive structures 128 may be formed between vertically adjacent insulating structures 104 at locations corresponding to the locations of the other insulating structures 106 ( FIG. 1B ) to form Layers 130 of insulating structures 104 and conductive structures 128 and strings 132 of memory cells 134 extending vertically through stack structure 101 .In some embodiments, conductive structures 128 function as word lines (eg, local word lines) that include strings 132 of memory cells 134 . 
Additionally, one or more (eg, one to five) conductive structures 128 of the vertical lower layer 130 (eg, the vertical lowermost layer 130 ) may serve as select gate structures (eg, select gate source (SGS) structure). Additionally, one or more (eg, one to five) conductive structures 128 of the vertical upper layer 130 (eg, the uppermost vertical layer 130 ) may be used as select gate structures (eg, select gate drain (SGD) structure).Conductive structures 128 may be formed from and include conductive materials, such as one or more of the following: tungsten, titanium, nickel, platinum, rhodium, ruthenium, iridium, aluminum, copper, molybdenum, silver, gold, metal alloys, metal-containing Materials (eg, metal nitrides, metal suicides, metal carbides, metal oxides), including titanium nitride (TiN), tantalum nitride (TaN), tungsten nitride (WN), titanium aluminum nitride (TiAlN) , materials of one or more of iridium oxide (IrOx), ruthenium oxide (RuOx), alloys thereof, conductively doped semiconductor materials (eg, conductively doped silicon, conductively doped germanium, conductively doped doped silicon germanium), polysilicon, and other materials that exhibit electrical conductivity. In some embodiments, conductive structure 128 includes a material comprising one or more of titanium, ruthenium, aluminum, and molybdenum, but is substantially free (eg, substantially absent) of tungsten. In some such embodiments, conductive structures 128 may include at least some atoms of a precursor material (eg, chlorine, carbon, oxygen) used to form conductive structures 128 .The intersections of conductive structures 128 and pillars 110 may form individual memory cells 134 of string 132 of memory cells 134 . FIG. 1D shows an enlarged portion of block D of FIG. 1C and shows memory cell 134 in accordance with an embodiment of the present disclosure. 1D, the memory cells 134 may each include a channel material 114, a tunnel dielectric material 116 horizontally adjacent to the channel material 114, a memory material 118 horizontally adjacent to the tunnel dielectric material, a dielectric barrier material 120, and a The dielectric barrier material 120 is horizontally adjacent to the conductive structures 128 . In other embodiments, the memory cells 134 include so-called "floating gate" memory cells that include a floating gate (eg, a metal floating gate) as a charge storage structure. The floating gate may be horizontally interposed between the central structure of the guide post 110 and the conductive structure 128 of the layer 130 of the stack structure 101 .In some embodiments, as shown in FIG. 1D , the dielectric barrier material 136 may be formed directly adjacent to the dielectric barrier material 120 and directly adjacent to the insulating structure 104 . In some embodiments, conductive liner material 138 may be directly adjacent to dielectric barrier material 136 and conductive structure 128 . For ease of illustration and understanding, dielectric barrier material 136 is not shown in FIG. 1C , although it should be understood that microelectronic device structure 100 may include one or both of dielectric barrier material 136 and conductive liner material 138 .The conductive liner material 138, if present, may be formed from and include a seed material from which the conductive structures 128 may be formed. 
The conductive liner material 138 may be formed of and include, for example, a metal (eg, titanium, tantalum), a metal nitride (eg, tungsten nitride, titanium nitride, tantalum nitride), or another material. In some embodiments, the conductive liner material 138 includes titanium nitride. In other embodiments, the dielectric barrier material 136 is in direct contact with each of the conductive structure 128 and the insulating structure 104 , and the microelectronic device structure 100 is substantially (eg, completely) between the dielectric barrier material 136 and the conductive structure 128 ) does not contain conductive liner material 138. In other words, in some embodiments, each layer 130 is free of titanium nitride material between the insulating structure 104 and the conductive structure 128 .The dielectric barrier material 136 may be formed from and include one or more of the following: metal oxides (eg, aluminum oxide, hafnium oxide, zirconium oxide, lanthanum oxide, yttrium oxide, tantalum oxide, gadolinium oxide, one or more of niobium oxide, titanium oxide), dielectric suicides (eg, aluminum silicide, hafnium silicide, zirconium silicide, lanthanum silicide, yttrium silicide, tantalum silicide), and dielectric nitrides (eg, aluminum nitride, hafnium nitride, lanthanum nitride, yttrium nitride, tantalum nitride). In some embodiments, the dielectric barrier material 136 includes aluminum oxide.Referring to FIG. 1E , after forming dielectric barrier material 136 , conductive liner material 138 (if present), and conductive structure 128 , conductive structure 128 , conductive liner material 138 , and a portion of dielectric barrier material 136 may be removed from the surface defining trench 122 to form recessed portions 140 of conductive structures 128 and electrically isolate adjacent conductive structures 128 from each other. In other words, removal of portions of conductive structures 128, conductive liner material 138, and dielectric isolation material 136 may physically and electrically isolate conductive structures 128 from each other.In some embodiments, the conductive liner material 138 and the conductive material of the conductive structures 128 are removed by exposing the conductive liner material 138 and the conductive material of the conductive structures 128 to one or more wet etchants through the trenches 122 . The wet etchant may include one or more of phosphoric acid, acetic acid, nitric acid, hydrochloric acid, aqua regia, or hydrogen peroxide. However, the present disclosure is not so limited, and conductive liner material 138 and conductive material of conductive structure 128 may be removed using other etchants and/or material removal processes (eg, vapor phase removal processes, atomic layer removal processes). In some embodiments, the conductive liner material 138 is removed by exposure to one or more dry etchants (eg, one or more chlorine-containing dry etchants). As non-limiting examples, the one or more dry etchants may include one or more of chlorine, boron trichloride (BCL3), oxygen, and argon. In some embodiments, the conductive liner material 138 is removed by exposure to a dry etchant including chlorine gas and boron trichloride.The width of the grooves 122 may be tailored, at least in part, based on the granularity of the conductive structures 128 to reduce two or more proximity of the conductive structures 128 between adjacent blocks (eg, between the first block 124 and the second block 126 ). 
Bridging (eg, electrical connection) occurs between the parts. In some embodiments, trenches 122 are formed with a width greater than the width of conventional trenches to provide sufficient electrical isolation between adjacent blocks. Forming the trenches 122 may also remove the outermost portions of the insulating structures 104 , where the remaining number of insulating structures 104 have a width W1 relative to opposite points of the outer sidewalls 144 of the insulating structures 104 . Slots 122 are formed such that each of conductive structure 128 and conductive liner material 138 is laterally recessed relative to insulating structure 104 such that outer sidewalls 142 of conductive structures 128 are relative to guide posts 110 compared to outer sidewalls 144 of insulating structures 104 closer to the corresponding ones of the guide posts 110 . In other words, the width W1 of the insulating structure 104 is greater than the width W2 of the conductive structure 128 , as shown in FIG. 1E . Thus, the conductive liner material 138 (if present) may extend along only a portion of the width W1 of the adjacent insulating structure 104 .Referring to FIG. 1F , the conductive track 150 may be formed at least horizontally adjacent to the conductive structure 128 (eg, horizontally thereon). Because forming trench 122 removes some of conductive structure 128 and conductive liner material 138, conductive structure 128 and conductive liner material 138 of layer 130 of FIG. 1E may exhibit resistance greater than desired. To reduce resistance, conductive tracks 150 may be formed to extend (eg, laterally) from each exposed portion of conductive structure 128 and, if present, conductive liner material 138 .Conductive rail 150 may be formed of and include at least one conductive material, such as one or more of the following: tungsten, titanium, nickel, platinum, rhodium, ruthenium, iridium, aluminum, copper, molybdenum, silver, gold , metal alloys, metal-containing materials (eg, metal nitrides, metal silicides, metal carbides, metal oxides), including titanium nitride (TiN), tantalum nitride (TaN), tungsten nitride (WN), nitrogen Materials of at least one of titanium aluminum oxide (TiAlN), iridium oxide (IrOx), ruthenium oxide (RuOx), alloys thereof, conductively doped semiconductor materials (eg, conductively doped silicon, conductively doped germanium, conductively doped silicon germanium), polysilicon, and other materials that exhibit electrical conductivity. In some embodiments, the conductive tracks 150 are formed of and contain tungsten.Conductive rail 150 may have a different material composition than that of conductive structure 128 . For example, the conductive rails 150 may include tungsten, while the initially formed conductive structures 128 of replacement gate material may be formed from and include one or more of titanium, ruthenium, aluminum, and molybdenum, as discussed above, But substantially free (eg, substantially absent) of tungsten. Accordingly, conductive structures 128 may be substantially free (eg, substantially absent) of halogen-containing precursors (eg, fluorine) used to form tungsten, and conductive tracks 150 may be substantially free (eg, substantially absent) Additional precursors (eg, chlorine, carbon, oxygen) for forming tungsten-free materials such as titanium, ruthenium, aluminum, or molybdenum. 
In some embodiments, the conductive tracks 150 have a greater conductivity than the conductive structures 128 .The conductive tracks 150 may be grown, deposited (eg, by ALD, CVD, pulsed CVD, metal organic CVD). In some embodiments, the conductive tracks 150 are formed by depositing a liner material (eg, a titanium nitride material) and then tungsten to a range greater than the desired range of the conductive tracks 150 . Thereafter, portions of the tungsten material may be removed (eg, recessed) to form a desired extent (eg, cross-sectional area) of the conductive track 150 . In other embodiments, the conductive tracks 150 are formed by selectively growing tungsten on the conductive structures 128 after the conductive structures 128 are recessed. For example, the conductive track 150 may be formed with a target comprising the material of the conductive track 150 . In some such embodiments, the conductive tracks 150 may be formed (eg, deposited) laterally adjacent to the conductive structures 128 by exposing a target comprising the material of the conductive tracks 150 with an ionized gas (eg, argon). to form. In some embodiments, at least some argon gas may be present within conductive rail 150 .In some embodiments, the conductive rails 150 may be substantially free of halogens and moisture. In additional embodiments, the conductive rails 150 may contain less fluorine and/or less moisture than other conductive materials. For example, in some embodiments, conductive tracks 150 are formed with targets comprising tungsten and are formed without the use of fluorine-containing precursors. In contrast, conductive structures formed with a fluorine-containing precursor (eg, tungsten hexafluoride) may contain at least some residual fluorine. Additionally, residual fluorine can react with moisture or other materials to form impurities in the conductive structure, thereby reducing its conductivity.Conductive rail 150 may include tungsten that exhibits different properties than the material of conductive structure 128 . For example, conductive tracks 150 may exhibit different particle sizes, different electrical properties, and fewer impurities than conductive structures 128 . In some embodiments, the conductive tracks 150 include tungsten with a grain size larger than the grain size of the material of the conductive structures 128 . Because the particle size of the material may be based, at least in part, on the thickness (eg, height) of the material, the conductive structures 128 may exhibit particle sizes ranging from about 0.1 times to about 10 times the thickness of the conductive structures 128 . In some embodiments, the conductive tracks 150 exhibit a lower resistivity than the conductive structures 128 . Conductive structures 128 may be formed from and include materials tailored to reduce (eg, minimize) layer voids that may occur during formation of conductive structures 128 within layer 130 . Because the resistivity of the material may be based at least in part on the thickness (eg, height) of the material, in some cases, such as when the thickness of the conductive structures 128 decreases after the spacing of the layers 130 is decreased, the conductive structures 128 may exhibit higher conductivity than Rail 150 has low resistivity.In yet other embodiments, the conductive tracks 150 are formed by atomic layer deposition. 
In some such embodiments, the conductor tracks 150 are formed with precursors including tungsten hexafluoride (WF 6 ) and silane (SiH 4 ) for forming the conductor tracks 150 . Thus, in some embodiments, the conductive tracks 150 are formed with halogen-containing precursors. In some such embodiments, the conductive rails 150 may include at least some halogen (eg, fluorine).For example, a precursor material (eg, a semiconducting liner material) may be formed from and include at least one semiconducting material, such as one or more of the following: silicon material, silicon germanium material, boron material, germanium materials, gallium arsenide materials, gallium nitride materials and indium phosphide materials. As a non-limiting example, the precursor material may be formed from and include at least one silicon material. As used herein, the term "silicon material" refers to and includes a material comprising elemental silicon or a silicon compound. For example, the precursor material may be formed from and include one or more of monocrystalline and polycrystalline silicon. In some embodiments, the precursor material includes polysilicon.The precursor material may be formed to exhibit desirable dimensions (eg, height, width) based, at least in part, on the desired dimensions of the conductive tracks 150 and may be formed using one or more conventional conformal deposition processes, such as conventional conformal One or more of a CVD process and a conventional ALD process. In some embodiments, the precursor material is doped (eg, impregnated with) one or more dopants (eg, chemicals). Dopants of the doped precursor material may include materials that facilitate or facilitate subsequent formation of tungsten (eg, beta-phase tungsten) from the doped precursor material, as described in further detail below. In some embodiments, the dopant includes at least one N-type dopant, such as one or more of phosphorus (P), arsenic (Ar), antimony (Sb), and bismuth (Bi). In additional embodiments, the dopant includes at least one P-type dopant, such as one or more of boron (B), aluminum (Al), and gallium (Ga). In other embodiments, the dopant includes one or more of the following: carbon (C), fluorine (F), chlorine (Cl), bromine (Br), hydrogen (H), deuterium (2H), helium ( He), Neon (Ne) and Argon (Ar).The precursor material of the conductive tracks 150 may be doped with at least one dopant using conventional processes (eg, conventional plasma doping (PLAD) implantation processes, conventional diffusion processes) to form a doped precursor material , which is not described in detail in this paper. If employed, the PLAD implant process may implant dopants across the entire conductor rail 150 . As a non-limiting example, one or more phosphorus-containing species (eg, phosphorus atoms, phosphorus-containing molecules, phosphide ions, phosphorus-containing ions) may be implanted into the precursor material to form a doped precursor material. For example, the phosphorus-containing species may include phosphide ions (P3-). As another non-limiting example, one or more arsenic-containing species (eg, arsenic atoms, arsenic-containing molecules, arsenic ions, arsenic-containing ions) may be implanted into the precursor material to form a doped precursor material. For example, the arsenic-containing species may include arsenic ions (As3+). In some embodiments, after dopant implantation, the amount of dopant within the doped precursor material ranges from about 0.001 atomic % to about 10 atomic %. 
Portions of the doped precursor material of conductive rail 150 may individually exhibit substantially uniform distribution of dopants within their semiconducting material, or may individually exhibit non-uniform distribution of dopants within their semiconducting material .Afterwards, the portion of the doped precursor material may be converted into conductive tracks 150 comprising tungsten and dopants of the doped precursor material. The conversion process can convert portions of a semiconducting material (eg, a silicon material such as polysilicon) that include a doped precursor material with dopants dispersed therein relatively quickly compared to an undoped semiconducting material into tungsten.At least some of the tungsten in the conductor rails 150 may include beta phase tungsten. β-phase tungsten has a metastable A15 cubic structure. Particles of beta-phase tungsten may exhibit a substantially columnar shape. The tungsten contained in the conductive track 150 may exist only in the beta phase, or may exist in the beta phase and the alpha (alpha) phase. If present, alpha-phase tungsten has a metastable body-centered cubic structure. Particles of alpha-phase tungsten may exhibit a substantially isometric shape. If the conductor rail 150 includes beta-phase tungsten and alpha-phase tungsten, the amount of beta-phase tungsten included in the conductor rail 150 may be different from the amount of alpha-phase tungsten included in the conductor rail 150 , or may be the same as the amount of the alpha-phase tungsten included in the conductor rail 150 . The amount of α-phase tungsten in is basically the same. In some embodiments, the amount of beta-phase tungsten contained in the conductor rails 150 is greater than the amount of alpha-phase tungsten contained in the conductor rails 150 . For example, at least a majority (eg, greater than or equal to about 50%, such as greater than or equal to about 60%, greater than or equal to about 70%, greater than or equal to about 80%, greater than or equal to about 90%, greater than or equal to about 90%, greater than or equal to about equal to about 95% or greater than or equal to about 99%) tungsten may exist in the beta phase.The dopants included in the conductive tracks 150 may be substantially the same as the dopants included in the doped precursor material used to form the conductive tracks 150 . For example, dopants (eg, N-type dopants, P-type dopants, other dopants) used to form conductive traces 150 may be present in conductive traces 150 after conductive traces 150 are formed. In some embodiments, the conductive tracks 150 comprise beta-phase tungsten doped with one or more of As and P. The dopants of the conductor rails 150 may support (eg, promote, facilitate) the stability of the beta-phase tungsten of the conductor rails 150 .Conductive rail 150 may exhibit a substantially uniform distribution of its dopants, or may exhibit a non-uniform distribution of its dopants. The distribution of dopants within conductive track 150 may be substantially the same as or different from the distribution of dopants within the doped precursor material.Conductive tracks 150 may be formed by treating a doped precursor material with one or more chemicals to facilitate the conversion of semiconducting material (eg, silicon material) to tungsten (eg, beta-phase tungsten, alpha-phase tungsten). 
As a non-limiting example, if the doped precursor material includes a doped silicon material, such as doped polysilicon, the doped precursor material may be treated with tungsten hexafluoride (WF6) to form the conductive tracks 150 . Silicon (Si) of the doped precursor material can react with WF6 to produce tungsten (W) and silicon tetrafluoride (SiF4). The generated SiF4 is removed as a gas. The resulting W remains with the dopants of the doped precursor material to form conductive tracks 150 . For example, the doped precursor material can be treated with WF6 using conventional CVD equipment at temperatures in the range of about 200°C to about 500°C.Conductive rails 150 may be formed adjacent (eg, on, directly on) outer sidewalls 142 (eg, on, directly on) of conductive structures 128 (and, if present, conductive liner material 138 ) remaining after formation of trench 122 of FIG. 1E . , deposition, growth). In some embodiments, conductive structures 128 are used as seed material for growing conductive tracks 150, as discussed above. In some embodiments, the phase (eg, beta phase, alpha phase) of the conductive track 150 may depend, at least in part, on the phase (eg, beta phase, alpha phase) of the material of the conductive structure 128, in embodiments the The materials include precursor materials such as conductive tracks 150 grown directly on conductive structures 128 .Formation (eg, deposition, growth) may continue or be repeated at least until conductive traces 150 extend laterally beyond outer sidewalls 144 of insulating structures 104 . In embodiments in which conductive liner material 138 is present, conductive tracks 150 also extend laterally beyond the sidewalls (eg, side ends) of conductive liner material 138 . The formation (eg, deposition, growth) of the conductive traces 150 can be tailored to form a desired number of conductive traces 150 to reduce the resistance exhibited by the conductive structures 128 while not allowing electrical shorts between vertically adjacent conductive structures 128 .In some embodiments, such as the embodiment of FIG. IF, conductive tracks 150 are formed (eg, deposited, grown) until all extend laterally beyond, and in some cases vertically overlap, outer sidewalls 144 of insulating structures 104, while Electrical isolation between adjacent blocks (eg, first block 124, second block 126) is still provided. In other embodiments, the conductive tracks 150 are formed until they all extend laterally beyond the outer sidewall 144 of the insulating structure 104 without vertically overlapping the outer sidewall 144 . In other words, lower surface 148 and upper surface 152 may be substantially coplanar with the lower and upper surfaces of conductive structure 128 and/or conductive liner material 138 while not adjacent to outer sidewall 144 of insulating structure 104 . Thus, the conductive tracks 150 exhibit a height equal to or greater than the height of the conductive structures 128 of the stack structure 101 .When the conductive rail 150 extends laterally beyond the insulating structure 104 , the maximum width W3 defined by the outer sidewalls 146 of the conductive rail 150 is greater than the maximum width W1 defined by the outer sidewalls 144 of the insulating layer 104 , and thus greater than the outer sidewalls of the conductive structure 128 . 142 defines the maximum width W2. 
As used herein, "outer" sidewalls 142 , 144 , 146 are sidewalls proximate to the sidewalls of respective ones of the blocks (eg, first block 124 , second block 126 ), ie, proximate the opposite of guide post 110 side wall. Accordingly, the conductive rails 150 extend away from the conductive posts 110 from the respective conductive structures 128 , such that the stack structure 101 includes a conductive layer of layer 130 that includes conductive rails 150 that are laterally wider than the insulating structures 104 . In some embodiments, the width W2 of the conductive structures 128 may be substantially similar (eg, substantially the same) as the width between the outermost surfaces of the outermost ones of the guide posts 110 . In other words, the conductive structures 128 may extend within the region of the stack structure 101 laterally bounded by the guide posts 110 while not extending over each lateral end of the blocks (eg, the first block 124, the second block 126). beyond the outermost guide post 110 . In other embodiments, at least a portion of the conductive structure 128 is interposed between the post 110 and the conductive rail 150 such that the post 110 is not in direct physical contact with the conductive rail 150 .As a non-limiting example, the width W3 of the conductive layer may exceed the width W2 of the conductive structure 128 by a range of about 5 nm to about 100 nm, such as about 5 nm to about 10 nm, about 10 nm to about 20 nm, about 20 nm to about 50 nm, or about 50 nm to about 50 nm to about 100nm. Thus, each conductive track 150 may have a horizontal width in the range of about 5 nm to about 100 nm, eg, about 5 nm to about 10 nm, about 10 nm to about 20 nm, about 20 nm to about 50 nm, or about 50 nm to about 100 nm. Furthermore, the width W3 of the conductive layer may exceed the width W1 of the insulating structure 104 by a range of about 2 nm to about 50 nm, eg, about 2 nm to about 5 nm, about 5 nm to about 10 nm, about 10 nm to about 20 nm, or about 20 nm to about 50 nm.Each conductive track 150 is separated (eg, spaced) from adjacent conductive tracks 150 (eg, conductive tracks 150 above and/or below) by a separation distance D1 sufficient to couple each conductive structure of a single layer 130 Each conductive track 150 of 128 is electrically isolated from each other conductive track 150 coupled to each conductive structure 128 of another individual layer 130 that is vertically adjacent to that individual layer 130 . The separation distance D1 is defined by the dimension by which the lower surface 148 of one conductor rail 150 is separated from the upper surface 152 of an adjacent one of the conductor rails 150 . In some embodiments, the separation distance D1 between each pair of vertically adjacent conductive rails 150 is substantially equal (eg, substantially uniform) along the stack structure 101 . In other embodiments, the separation distance D1 varies at different elevations of the stack structure 101, provided that each pair of adjacent conductive rails 150 are electrically isolated from each other. 
As a non-limiting example, the distance D1 between adjacent pairs of conductive tracks 150 may be in the range of about 2 nm to about 20 nm, such as about 2 nm to about 5 nm, about 5 nm to about 10 nm, about 10 nm to about 15 nm, or about 15 nm to about 15 nm to about 20nm.In some embodiments, the height H2 of a single conductive track 150 (defined as the vertical dimension between the lowest elevation of the lower surface 148 and the highest elevation of the upper surface 152 ) and the height H1 of the individual conductive structures 128 (eg, defined as the lower The vertical dimension between the lowest elevation of surface 154 and the highest elevation of upper surface 156) is substantially the same. In other words, the lower surfaces 148 of the conductive rails 150 may be substantially coplanar with the lower surfaces 154 of the conductive structures 128, and the upper surfaces 152 of the conductive rails 150 may be substantially coplanar with the upper surfaces 156 of the conductive structures 128, as described above exposition. In other embodiments, the height H2 of the conductive track 150 is relatively greater than the height H1 of the conductive structure 128, as shown in FIG. 1F. As used herein, the "remaining non-track portion" of each layer 130 refers to the portion of the layer 130 that is outside the confines of the conductive tracks 150 coupled to the conductive structures 128 of the layer 130 . The remaining non-rail portions of each layer 130 contain conductive structures 128 and, if present, conductive liner material 138 . The lower surface 154 and the upper surface 156 of the remaining non-rail portion of the single layer 130 may be bounded by the conductive structure 128 in the layer 130 consisting of the conductive structure 128 or in the layer 130 including both the conductive structure 128 and the conductive liner material 138 Defined by a conductive liner material 138, as in the microelectronic device structure 100 of FIG. IF and other embodiments of the present disclosure.As a non-limiting example, the height H1 of a single conductive structure 128 may be in the range of about 10 nm to about 50 nm, such as about 10 nm to about 20 nm, about 20 nm to about 30 nm, about 30 nm to about 40 nm, or about 40 nm to about 50 nm. If present, conductive liner material 138 may have a thickness (eg, height) in the range of about 0.5 nm to about 5 nm; In addition, the height H2 of the conductive track 150 may be in the range of about 20 nm to about 100 nm, eg, about 20 nm to about 30 nm, about 30 nm to about 40 nm, about 40 nm to about 50 nm, or about 50 nm to about 100 nm. For example, the height H2 of the single conductive track 150 may be about 1% to about 250% greater than the height H1 of the single conductive structure 128 (eg, about 10% to about 250%, about 25% to about 125%, about 50% to about 100%) ).Referring back to FIG. 1F , additional portions of insulating structure 104 may be removed (eg, etched) adjacent outer sidewalls 144 (eg, at opposite outer corners) of insulating structure 104 prior to forming conductive traces 150 . Accordingly, grooves 158 may be formed in the corners of insulating structures 104 by removing portions of insulating structures 104 that extend beyond outer sidewalls 142 of conductive structures 128 but not removing portions of insulating structures 104 along their vertical centerlines (eg, incision). 
In other words, the grooves 158 form recessed portions at opposite outer corners that extend into the insulating structure 104, thereby facilitating each individual exhibiting a substantially rectangular cross-sectional shape (eg, a substantially square cross-sectional shape) The conductive track 150 is formed with its lower surface 148 and upper surface 152 being substantially flat.Although the lower surface 148 and the upper surface 152 of the conductor rail 150 may be substantially flat as shown in the embodiment of FIG. IF, at least some of the lower surface 148 and the upper surface 152 of the conductor rail 150 may be structured in other ways , but still contains a height H2 that is greater relative to the height H1 of the conductive structure 128 . For example, the formation of conductive track 150 may create a tapered (eg, non-planar) surface along at least some of its lower surface 148 and upper surface 152 such that conductive track 150 forms at least one concave region adjacent groove 158 . Alternatively or additionally, the conductor rails 150 may be formed such that their outer sidewalls 146 are formed in a vertical convex shape such that the conductor rails 150 form mushroom-shaped conductor rails 150 . The vertical convex shape of the concave portion and/or outer sidewall 146 may be a natural consequence of the actions of the formation (eg, deposition, growth) process performed to form the conductive track 150 .Referring now to FIG. 1G , the remaining (eg, unfilled) portions of trenches 122 ( FIG. 1F ) may be filled with fill material 160 . Fill material 160 may extend through stack structure 101 and be adjacent (eg, directly on) the exposed upper surface of source structure 108 . Additionally, filler material 160 may be located between adjacent blocks (eg, first block 124 and second block 126 ) at locations corresponding to grooves 122 .The filling material 160 may be formed of and include at least one insulating material. In some embodiments, filler material 160 has substantially the same material composition as insulating structure 104 . Fill material 160 may be substantially uniform or non-uniform, as discussed in more detail with reference to FIG. 1J . As used herein, the term "uniform" means that the amount of material does not vary (eg, varies) throughout different portions (eg, different horizontal portions, different vertical portions) of another material or structure. Conversely, as used herein, the term "non-uniform" means that the amount of material varies throughout different parts of another material or structure.Those skilled in the art will appreciate that the features and feature configurations described above with respect to FIGS. 1A-1G may be used for the design needs of different microelectronic devices (eg, different memory devices) in accordance with additional embodiments of the present disclosure. By way of non-limiting example, FIGS. 1H and 1I illustrate simplified partial cross-sectional views of a method of forming a microelectronic device structure having a configuration other than microelectronic device structure 100, according to additional embodiments of the present disclosure. Throughout the remainder of the description and drawings, functionally similar features (eg, structures, devices) are referred to by similar reference numerals. To avoid repetition, not all of the features shown in the remaining figures (including FIGS. 1H and 1I ) are described in detail herein. 
On the contrary, unless otherwise described below, features denoted by reference numerals of previously described features (whether the previously described features are described before or after the current paragraph) should be understood to be substantially similar to the previously described features.Figure 1H shows a simplified partial cross-sectional view of a microelectronic device structure 100'. At the processing stage depicted in Figure 1H, the microelectronic device structure 100' may be substantially similar to the microelectronic device structure 100 at the processing stage depicted in Figure IF. Additionally, the microelectronic device structure 100' at the processing stage depicted in Figure 1I may be substantially similar to the microelectronic device structure 100 at the processing stage depicted in Figure 1G. Furthermore, FIG. 1J shows an enlarged portion of block J of FIG. 1I that is consistent with and equally applicable to the embodiment of the microelectronic device structure 100 ′ of FIG. 1I .Referring back to FIG. 1H , the conductive tracks 150 of the microelectronic device structure 100 ′ can be formed at least adjacent to the conductive structure 128 (eg, on, directly on) as in previous embodiments of the microelectronic device structure 100 . Conductive rails 150 may be formed to extend (eg, laterally) from each exposed portion of conductive structure 128 and, if present, conductive liner material 138 . Conductive rail 150 may have a different material composition than that of conductive structure 128 . For example, conductive structures 128 may include materials including one or more of titanium, ruthenium, aluminum, and molybdenum, and conductive tracks 150 include tungsten.Conductive tracks 150 may be grown, deposited (eg, by ALD, CVD, pulsed CVD, metal organic CVD, PVD) on outer sidewalls 142 of conductive structures 128 . Formation (eg, deposition, growth) may continue or be repeated at least until the conductive traces 150 extend laterally beyond the outer sidewalls 144 of the insulating structures 104, as in the previous embodiment of Figure IF. However, the conductive tracks 150 may be formed adjacent to the upper and lower surfaces of the insulating structure 104 without forming the grooves 158 in opposite outer corners of the insulating structure 104 prior to forming the conductive tracks 150 ( FIG. 1F ). In other words, the portion of the insulating structure 104 between the post 110 and the conductive rail 150 may exhibit a substantially rectangular cross-sectional shape (eg, a substantially square cross-sectional shape) with the outer sidewall 142 and its upper surface and the lower surface is substantially flat. Accordingly, the conductive rails 150 define a "T" shape extending away from the outer sidewalls 142 of the conductive structures 128 and adjacent to opposite outer corners of the insulating structures 104, such that the conductive rails 150 are characterized herein as "T-shaped" conductive rails.The conductive rail 150 includes a first portion 150a positioned laterally beyond the outer sidewall 144 of the insulating structure 104 and defining a height H3 (defined as the dimension between the lowest elevation of the lower surface 148 and the highest elevation of the upper surface 152 ), which height is here The height H1 of the conductive structure 128 in the embodiment is greater than the remaining non-rail portion of each layer 130 . 
Conductive rail 150 includes a second portion 150b located vertically between insulating structures 104 having a height substantially equal to height H1 of conductive structures 128 including conductive liner material 138 (if present) of the remaining non-rail portions of each layer 130 . Thus, the T-shaped conductive rails 150 extend between vertically adjacent portions of the insulating structure 104 , and the first portion 150a may partially surround (eg, laterally surround) a portion of the insulating structure 104 . The conductive rails 150 each define a smaller height proximate the inner portion of the guide post 110 than at the outer portion remote from the guide post 110 .As a non-limiting example, the height H1 of a single conductive structure 128 may be in the range of about 10 nm to about 50 nm, such as about 10 nm to about 20 nm, about 20 nm to about 30 nm, about 30 nm to about 40 nm, or about 40 nm to about 50 nm. If present, conductive liner material 138 may have a thickness (eg, height) in the range of about 0.5 nm to about 5 nm; Furthermore, the height H3 of a single conductive track 150 may be in the range of about 20 nm to about 100 nm, eg, about 20 nm to about 30 nm, about 30 nm to about 40 nm, about 40 nm to about 50 nm, or about 50 nm to about 100 nm. For example, the height H3 of the single conductive track 150 may be about 1% to about 500% greater than the height H1 of the single conductive structure 128 (eg, about 10% to about 250%, about 25% to about 125%, about 50% to about 100%) ).Opposing lower surfaces 148 and upper surfaces 152 of adjacent conductive rails 150 are separated (eg, spaced apart) by a separation distance D2 sufficient to provide adequate electrical isolation therebetween. Separation distance D2 can be tailored to be the minimum distance that achieves electrical isolation while providing the maximum amount of conductive material provided by conductive tracks 150 to the total amount of conductive material (including conductive structure 128 and conductive liner material 138 ) within each layer 130 . As a non-limiting example, the distance D2 between adjacent pairs of conductive traces 150 may be in the range of about 2 nm to about 20 nm, such as about 2 nm to about 5 nm, about 5 nm to about 10 nm, about 10 nm to about 15 nm, or about 15 nm to about 15 nm to about 20nm.Referring to FIG. 1I , the remainder of the trenches 122 ( FIG. 1H ) may be filled with fill material 160 . Fill material 160 may extend through stack structure 101 and be adjacent to (eg, directly on) the exposed upper surface of source structure 108 . Additionally, filler material 160 may be located between adjacent blocks (eg, first block 124 and second block 126 ) at locations corresponding to grooves 122 .The filling material 160 may be formed of and include at least one insulating material. In some embodiments, filler material 160 has substantially the same material composition as insulating structure 104 . Filler material 160 may be substantially uniform or non-uniform. FIG. 1J shows an enlarged portion of Block J of FIG. 1I and shows a non-uniform configuration of fill material 160 comprising three materials that are different from each other in a stacked arrangement, according to an embodiment of the present disclosure. 
1J, the fill material 160 may include one or more insulating (eg, dielectric) materials, such as a nitride material 162 (eg, silicon nitride), an oxide material 164 (eg, silicon oxide (eg, silicon dioxide) )) and polysilicon 166.Fill material 160 may be formed in trench 122 ( FIGS. 1F , 1H ), for example, by forming (eg, conformally forming) a nitride adjacent outer sidewall 146 of conductive rail 150 and adjacent outer sidewall 144 of insulating structure 104 Material 162 , oxide material 164 is formed (eg, conformally formed) adjacent to nitride material 162 , and polysilicon 166 is formed (eg, conformally formed) adjacent to oxide material 164 . Once the fill material 160 is formed, the outer sidewalls 144 of the insulating structures 104 and the outer sidewalls 146 of the conductive tracks 150 are adjacent to the fill material 160 . In embodiments including nitride material 162 , oxide material 164 , and polysilicon 166 of fill material 160 , outer sidewalls 144 of insulating structures 104 and outer sidewalls 146 of conductive rails 150 and outermost material of fill material 160 (eg, nitrogen compound material 162) in direct contact, as shown in Figure 1J. The nitride material 162 of the fill material 160 may also be in direct contact with at least some of the lower surface 148 ( FIG. 1H ) and the upper surface 152 ( FIG. 1H ) of the conductor rail 150 . In some embodiments, the filler material 160 includes one or more air gaps. Air gaps may be located in the narrowest spaces between adjacent conductive rails (eg, conductive rails 150 ) and may further facilitate electrical isolation between adjacent conductive rails 150 . For clarity, and for ease of understanding of the figures and associated descriptions, fill material 160 (eg, nitride material 162, oxide material 164, and polysilicon 166) is described and illustrated with reference to microelectronic device structure 100' of Figures 1I and 1J . However, the disclosure of fill material 160 (eg, nitride material 162, oxide material 164, and polysilicon 166) applies equally to the embodiments of microelectronic device structure 100 discussed above with reference to FIG. 1G.Referring back to FIG. 1J , the formation of conductive tracks 150 may include selectively forming (eg, depositing, growing) conductive tracks 150 on conductive structures 128 (and optional, if present, conductive liner material 138 ). That is, the conductive tracks 150 may be formed at least on the conductive structures 128 while not forming the conductive tracks 150 on the insulating structures 104 at all (according to some embodiments) or the conductive tracks 150 may be formed on the insulating structures 104 A minimum number of conductive tracks 150 formed on conductive structures 128 are formed that are removed (eg, etched) without completely removing them.In some embodiments, the selective formation of conductive tracks 150 may be facilitated or regulated by pre-treating stack structure 101 ( FIGS. 1G , 1I ) prior to forming conductive tracks 150 and thus prior to forming fill material 160 . In some such embodiments, as shown in FIG. 1J , the surfaces of the insulating structures 104 exposed within the trenches 122 ( FIGS. 1F , 1H ) may be treated to inhibit the formation of conductive tracks 150 thereon. For example, the inhibiting material 168 may be formed (eg, continuously formed, discontinuously formed) to be absorbed within at least some portions of the insulating structure 104 . 
The inhibitory material 168 may be formulated to selectively form on the insulating structure 104 and inhibit the formation (eg, deposition, growth (eg, growth)) of the conductive tracks 150 on the inhibitory material 168 . The inhibitory material 168 may be formed of and include, but is not limited to, an organic inhibitor (eg, a polymer), which may be selectively formed on the insulating structure 104 (eg, silicon dioxide). Inhibitory material 168 may be formulated to inhibit deposition, growth, adsorption, or absorption of conductive track 150 during its formation on conductive structure 128 (and, if present, conductive liner material 138 ). Accordingly, the microelectronic device structures 100 , 100 ′ may also include inhibiting material 168 on the outer sidewalls 144 of the insulating structures 104 , between the insulating structures 104 and the fill material 160 (eg, the nitride material 162 ).In other embodiments that use pretreatment prior to forming (eg, depositing, growing) conductive tracks 150, the surfaces of conductive structures 128 and, if present, conductive liner material 138 exposed within trenches 122 (FIGS. 1F, 1H) may be subjected to processing to facilitate the formation of conductive tracks 150 thereon. For example, referring to FIG. 1J, a formation accelerator 170 may be formed (eg, deposited) on the outer sidewalls 142 of the conductive structures 128 (and, if present, the conductive liner material 138). The formation promoter 170 may include, consist essentially of, or consist of boron (B) or silicon (Si). In other embodiments, formation accelerator 170 may be the original surface of outer sidewall 142 exposed by conductive structure 128 (and, if present, conductive liner material 138 ) resulting from, for example, a wet cleaning or dry cleaning process.Formation accelerator 170 may be formulated such that conductive tracks 150 are formed on formation accelerator 170 at a faster rate than insulating structures 104 during formation of conductive tracks 150 on conductive structures 128 (and, if present, conductive liner material 138 ). Accordingly, the microelectronic device structures 100, 100' may also include a formation accelerator 170 between the conductive tracks 150 and the conductive structures 128 (and, if present, the conductive liner material 138).In other embodiments, conductive traces 150 , conductive structures 128 , and insulating structures 104 may be formulated such that conductive traces 150 are selectively formed (eg, grown, deposited) on conductive structures 128 without prior formation of conductive traces 150 Pretreatment (eg, inhibit material formation, accelerator formation).In yet other embodiments, the conductive tracks 150 may be selectively formed on the conductive structures 128 by cycling through the formation and removal (eg, etching) stages. During the formation phase, conductive tracks 150 may be formed on all surfaces exposed in trenches ( FIGS. 1F , 1H ) 122 , but on conductive structures 128 at a greater rate than on insulating structures 104 . Accordingly, a larger number of conductive tracks 150 may be formed on conductive structure 128 than a smaller number formed on insulating structure 104 . Between each formation stage, a removal (eg, etching) stage may be performed to remove some of the conductive tracks 150 at a consistent rate. 
Accordingly, a smaller number of conductive tracks 150 that have been formed on insulating structures 104 may be removed, while leaving at least some of the larger number of conductive tracks 150 that have been formed on conductive structures 128 . Repeating these formation and removal stages in a cycle may permit conductive tracks 150 to accumulate (eg, deposit, grow) on conductive structures 128 without forming a sustained number of conductive tracks 150 on insulating structures 104 . For clarity, and for ease of understanding of the figures and associated description, inhibiting material 168 and formation promoter 170 are described and illustrated with reference to microelectronic device structure 100' of Figures 1I and 1J. However, the disclosure of inhibitory material 168 and formation accelerator 170 applies equally to the embodiments of microelectronic device structure 100 discussed above with reference to FIG. 1G .As described above, the stack structure 101 of the microelectronic device structures 100, 100' is formed to include a conductive structure 128 formed from a first material composition (eg, titanium, ruthenium, aluminum, and molybdenum) and a second, different material Conductive tracks 150 formed (eg, tungsten) may facilitate improved performance of the microelectronic device structures 100, 100'.For example, the presence of conductive tracks 150 laterally adjacent to conductive structures 128 effectively increases the amount of conductive material present in the conductive layers of layer 130 compared to conductive layers without conductive tracks, while not forcing layer 130 or The horizontal footprint of the blocks (eg, first block 124, second block 126) increases. The increased amount of conductive material (eg, conductive structures 128 and conductive tracks 150 ) may provide a reduced resistivity (eg, resistance level) of the conductive material in each respective layer 130 . In some embodiments, the conductive material may exhibit a resistance that is about 1% to about 50% or higher percentages less than the resistance of the conductive material of a conventional conductive layer of a 3D NAND structure. For example, while a conventional conductive layer can exhibit a resistance of about 13 Ω·μm, the conductive layer of the structures of embodiments of the present disclosure can exhibit a resistance of about 5 Ω·μm. Lower resistances can be achieved without forcing the pitch or critical dimension (CD) of the pillars 110 to increase. Accordingly, reduced resistivity can be achieved even as the pitch or CD of the pillars 110 continues to shrink to smaller values and as the thickness (eg, height in the Z-direction) of the conductive layer of layer 130 continues to decrease.Additionally, because conductive tracks 150 having a second, different material composition are formed laterally adjacent to conductive structures 128 having a first material composition, conductive tracks 150 may exhibit lower resistivity relative to conductive structures 128 . Because conductive structures 128 may be formed from and include materials tailored to reduce (eg, minimize) layer voids within the conductive layers of layer 130, conductive structures 128 may be selected for use in forming ( For example, such materials are deposited, grown) to improve properties, and conductive tracks 150 may be selected to improve properties (eg, reduce resistivity) during use and operation of microelectronic device structures 100, 100'. 
Because the resistivity of the material may be based, at least in part, on the thickness of the material, the presence of conductive structures 128 may provide a reduced thickness of the conductive layer of layer 130 while not significantly due to the provision of lower resistivity material within conductive tracks 150 reduce conductivity. Additionally, conductive structures 128 may not contain halides, such as fluorine, which may be present in conductive structures formed with halide-containing precursors. The reduced resistivity of the conductive material of layer 130 may improve the performance of strings 132 of memory cells 134 .Microelectronic device structures formed in accordance with embodiments described herein may exhibit improved performance by enabling a reduced occurrence of layer voids during formation of conductive material (eg, conductive structures 128 ) within layer 130 . Furthermore, the reduction in resistivity can be achieved by providing additional conductive material (eg, conductive tracks 150 ) that extend beyond the boundaries of insulating structures 104 to provide increased cross-sectional areas of the conductive material within each layer 130 , and thus achieve increase in conductivity. Additional performance improvements may be achieved by including conductive structures 128 composed of a first material and conductive tracks 150 composed of a second, different material, which configurations may exhibit improved performance compared to conventional microelectronic device structures. In contrast, fabrication of conventional microelectronic device structures can include fabrication of conductive layers having individual material compositions and reduced cross-sectional areas of the conductive material within each layer.Accordingly, in accordance with some embodiments of the present disclosure, a microelectronic device includes: a stacked structure including alternating conductive and insulating structures arranged in layers, each of the layers including a conductive structure and an insulating structure, respectively; a memory a string of cells extending vertically through the stack, the string of memory cells including channel material extending vertically through the stack; and conductive rails laterally adjacent the conductive tracks of the stack structure. The conductive track includes a material composition different from that of the conductive structure of the stacked structure.Furthermore, in accordance with other embodiments of the present disclosure, a method of forming a microelectronic device includes: forming a stack including vertically alternating conductive structures and insulating structures; forming a channel material including vertically extending through the stack and memory strings of at least one dielectric material; and conductive tracks are formed along outer sidewalls of the conductive structures of the stacked structure. The conductive track includes a material composition different from that of the conductive structure of the stacked structure.FIG. 2 shows a partially cut-away perspective view of a portion of a microelectronic device 201 (eg, a memory device such as a dual-stack 3D NAND flash memory device) including a microelectronic device structure 200 . The microelectronic device structure 200 may be substantially similar to one of the microelectronic device structures 100 , 100 ′ previously described with reference to FIGS. 1A-1J . As shown in FIG. 
2, the microelectronic device structure 200 can include a stepped structure 220 defining a contact region for connecting the access line 206 to the conductive structure 205 (eg, corresponding to the conductive structure 128 (FIG. 1C)). Microelectronic device structure 200 may include vertical strings 207 (eg, strings 132 ( FIG. 1C )) of memory cells 203 (eg, corresponding to memory cells 134 ( FIG. 1C )) coupled in series with each other. Vertical strings 207 may extend vertically (eg, in the Z-direction) and orthogonal to conductive lines and conductive structures 205, such as data lines 202, source layers 204 (eg, including source structures 108 (FIG. 1C)) , access line 206, first select gate 208 (eg, upper select gate, drain select gate (SGD), select line 209, and second select gate 210 (eg, lower select gate, source select gate (SGS). Select gates 208 may be divided horizontally (eg, in the Y direction) into each other through trenches 230 (eg, fill material 160 ( FIG. 1G ) formed within replacement gate trenches 122 ( FIG. 1E ) , FIG. 1I )) a plurality of blocks 232 (eg, blocks 124 , 126 ( FIG. 1C )) that are separated horizontally (eg, in the Y direction).Vertical conductive contacts 211 can electrically couple components to each other, as shown. For example, select line 209 may be electrically coupled to first select gate 208 and access line 206 may be electrically coupled to conductive structure 205 . The microelectronic device 201 may also include a control unit 212 positioned under the memory array, which may include string driver circuitry, transfer gates, circuitry for select gates, select conductive lines (eg, data lines 202, memory Take at least one of the circuitry of line 206), circuitry for amplifying the signal, and circuitry for sensing the signal. For example, control unit 212 may be electrically coupled to data line 202 , source layer 204 , access line 206 , first select gate 208 and second select gate 210 . In some embodiments, the control unit 212 includes complementary metal oxide semiconductor (CMOS) circuitry. In such embodiments, the control unit 212 may be characterized as having a "CMOS under array" ("CuA") configuration.The first select gates 208 may extend horizontally in a first direction (eg, the X direction) and may be coupled to first ends (eg, upper ends) of respective first groups of vertical strings 207 of memory cells 203 . The second select gate 210 may be formed in a substantially flat configuration and may be coupled to a second opposite end (eg, the lower end) of the vertical string 207 of memory cells 203 .The data lines 202 (eg, digit lines, bit lines) may extend horizontally in a second direction (eg, in the Y direction) at an angle relative to the first direction in which the first select gates 208 extend (eg, perpendicular to the first direction). Each data line 202 may be coupled at a first end (eg, upper end) of each group of vertical strings 207 to each group of vertical strings 207 extending in a second direction (eg, the Y direction). An additional single group of vertical strings 207 extending in a first direction (eg, the X direction) and coupled to respective first select gates 208 may be shared with a single group of vertical strings 207 coupled to a single data line 202 A specific vertical string 207 . Thus, a single vertical string 207 of memory cells 203 at the intersection of a single first select gate 208 and a single data line 202 may be selected. 
Thus, the first select gate 208 may be used to select the memory cells 203 in the vertical string 207 of memory cells 203 .Conductive structures 205 (eg, word line plates) may extend in respective horizontal planes. The conductive structures 205 can be stacked vertically such that each conductive structure 205 is coupled to at least some of the vertical strings 207 of memory cells 203 and the vertical strings 207 of memory cells 203 extend vertically through the stack including the conductive structures 205 . Conductive structure 205 may be coupled to or may form the control gate of memory cell 203 .The first select gate 208 and the second select gate 210 may be used to select the vertical string 207 of memory cells 203 interposed between the data line 202 and the source layer 204 . Thus, a single memory cell 203 may be selected and electrically coupled to data by manipulating (eg, by selecting) the appropriate first select gate 208, second select gate 210, and conductive structure 205 coupled to the particular memory cell 203 Line 202.The stepped structures 220 may be configured to provide electrical connections between the access lines 206 and the conductive structures 205 through the vertical conductive contacts 211 . In other words, individual conductive structures 205 may be selected via access lines 206 in electrical communication with corresponding vertical conductive contacts 211 that are in electrical communication with conductive structures 205 .Data lines 202 may be electrically coupled to vertical strings 207 through conductive contact structures 234 (eg, contact structures formed over guide posts 110 (FIG. 1C)).Accordingly, in accordance with additional embodiments of the present disclosure, a memory device includes: a stack structure including layers of alternating conductive structures and insulating structures; guide pillars extending vertically through the stack structure, each guide pillar including a channel structure including a semiconductive material extending vertically through the stack structure; and a conductive track vertically along sidewalls of the conductive structure and the insulating structure of the stack structure extend. The conductive rails have a greater electrical conductivity than the conductive structures.According to embodiments of the present disclosure, microelectronic devices including microelectronic devices (eg, microelectronic device 201 ) and microelectronic device structures (eg, microelectronic device structures 100 , 100 ′, 200 ) including the following may be used in the present disclosure An embodiment of an electronic system: a conductive track 150 comprising a material composition different from that of the conductive structure 128 . For example, FIG. 3 is a block diagram of an electronic system 303 according to an embodiment of the present disclosure. Electronic system 303 may include, for example, a computer or computer hardware components, servers or other networked hardware components, cellular telephones, digital cameras, personal digital assistants (PDAs), portable media (eg, music) players, Wi-Fi, or cellular-enabled tablets Computers (eg, or tablets), e-books, navigation devices, etc. Electronic system 303 includes at least one memory device 305 . Memory device 305 may include, for example, microelectronic device structures previously described herein (eg, microelectronic device structures 100 , 100 ′, 200 ) or including conductive structures 128 and conductive rails 150 previously described with reference to FIGS. 
1A-1J and 2 An embodiment of a microelectronic device (eg, microelectronic device 201).Electronic system 303 may further include at least one electronic signal processor device 307 (commonly referred to as a "microprocessor"). Electronic signal processor device 307 may optionally include a microelectronic device or microelectronic device structure previously described herein (eg, microelectronic device 201 or microelectronic device structure 100 previously described with reference to FIGS. 1A-1J and 2 , 100', one or more of 200) embodiments. Electronic system 303 may further include one or more input devices 309 for a user to input information into electronic system 303, such as a mouse or other pointing device, keyboard, touchpad, buttons, or control panel. Electronic system 303 may further include one or more output devices 311, such as monitors, displays, printers, audio output jacks, speakers, etc., for outputting information (eg, visual or audio output) to a user. In some embodiments, input device 309 and output device 311 may comprise a single touch screen device that may be used for both inputting information to electronic system 303 and outputting visual information to a user. Input device 309 and output device 311 may be in electrical communication with one or more of memory device 305 and electronic signal processor device 307 .Referring to Figure 4, a processor-based system 400 is depicted. Processor-based system 400 may include various microelectronic devices and microelectronic device structures fabricated in accordance with embodiments of the present disclosure (eg, including microelectronic device 201 or one or more of microelectronic device structures 100 , 100 ′, 200 ) individual microelectronic devices and microelectronic device structures). Processor-based system 400 may be any of a variety of types, such as a computer, pager, cellular telephone, personal assistant, control circuit, or other electronic device. Processor-based system 400 may include one or more processors 402 , such as microprocessors, for controlling system functions and processing of requests in processor-based system 400 . Processor 402 and other subcomponents of processor-based system 400 may include microelectronic devices and microelectronic device structures (eg, including microelectronic device 201 or microelectronic device structures 100, 100', Microelectronic devices and microelectronic device structures of one or more of 200).The processor-based system 400 may include a power supply 404 in operative communication with the processor 402 . For example, if processor-based system 400 is a portable system, power source 404 may include one or more of the following: fuel cells, power scavenging devices, permanent batteries, replaceable batteries, and rechargeable batteries. The power supply 404 may also include an AC adapter; thus, the processor-based system 400 may be plugged into, for example, a wall outlet. The power supply 404 may also include a DC adapter so that the processor-based system 400 can be plugged into, for example, a vehicle cigarette lighter or vehicle power port.Various other devices may be coupled to processor 402 depending on the functions performed by processor-based system 400 . For example, user interface 406 may be coupled to processor 402 . User interface 406 may include input devices such as buttons, switches, keyboards, light pens, mice, digitizers and styluses, touch screens, voice recognition systems, microphones, or combinations thereof. 
Display 408 may also be coupled to processor 402 . Display 408 may include an LCD display, SED display, CRT display, DLP display, plasma display, OLED display, LED display, three-dimensional projection, audio display, or combinations thereof. Additionally, RF subsystem/baseband processor 410 may also be coupled to processor 402 . RF subsystem/baseband processor 410 may include an antenna coupled to an RF receiver and RF transmitter (not shown). One communication port 412 or more than one communication port 412 may also be coupled to the processor 402 . The communication port 412 may be used to couple to one or more peripheral devices 414, such as a modem, printer, computer, scanner, or camera, or to a network, such as a local area network, remote area network, intranet, or the Internet.The processor 402 may control the processor-based system 400 by implementing software programs stored in memory. Software programs may include, for example, operating systems, database software, drafting software, word processing software, media editing software, or media playback software. Memory is operably coupled to processor 402 to store and facilitate execution of various programs. For example, processor 402 may be coupled to system memory 416, which may include one or more of the following: spin torque transfer magnetic random access memory (STT-MRAM), magnetic random access memory (MRAM), dynamic random access memory Access memory (DRAM), static random access memory (SRAM), particle track memory, and other known memory types. System memory 416 may include volatile memory, non-volatile memory, or a combination thereof. System memory 416 is typically large so that it can store dynamically loaded applications and data. In some embodiments, system memory 416 may include semiconductor devices, such as the microelectronic devices and microelectronic device structures described above (eg, microelectronic device 201 and microelectronic device structures 100 , 100 ′, 200 ), or its combination.Processor 402 may also be coupled to non-volatile memory 418, which does not imply that system memory 416 must be volatile. Nonvolatile memory 418 may include one or more of the following: STT-MRAM, MRAM, read only memory (ROM) such as EPROM, resistive read only memory (RROM), and flash memory to be used in conjunction with system memory 416 . flash memory. The size of the non-volatile memory 418 is typically chosen to be just enough to store any required operating system, application programs, and fixed data. Additionally, non-volatile memory 418 may include mass storage, such as disk drive memory, such as a hybrid drive that includes resistive memory or other types of non-volatile solid state memory. Non-volatile memory 418 may include microelectronic devices, such as the microelectronic devices and microelectronic device structures described above (eg, microelectronic device 201 and microelectronic device structures 100, 100', 200), or combinations thereof.Accordingly, in at least some embodiments, an electronic device includes: an input device; an output device; a processor device operably coupled to the input device and the output device; and a memory device operably coupled The device is a processor device and includes at least one microelectronic device. 
The at least one microelectronic device includes a string of memory cells extending vertically through a stacked structure including a vertically alternating sequence of substantially tungsten-free insulating and conductive structures arranged in layers; and additional conductive structures , which is horizontally adjacent to the conductive structure of the stacked structure and includes beta phase tungsten. The vertical height of the additional conductive structure is greater than the vertical height of the conductive structure of the stacked structure.Embodiments of the present disclosure may be further characterized, but not limited to, as set forth below.Embodiment 1: A microelectronic device comprising: a stacked structure comprising alternating conductive and insulating structures arranged in layers, each of the layers comprising a conductive structure and an insulating structure, respectively; a string of memory cells, which extending vertically through the stack, the string of memory cells including channel material extending vertically through the stack; and a conductive rail laterally adjacent to the conductive structure of the stack, the The conductive track includes a material composition different from that of the conductive structure of the stack structure.Embodiment 2: The microelectronic device of Embodiment 1, wherein the conductive track is in direct physical contact with the conductive structure, the conductive track extending horizontally beyond a horizontal boundary of the insulating structure.Embodiment 3: The microelectronic device of Embodiment 1 or Embodiment 2, wherein the conductive rail comprises a T-shaped rail of conductive material, a first portion of the T-shaped rail being positioned laterally beyond the insulating structure. 
The outer sidewall, the second portion of the T-rail is positioned vertically between portions of the insulating structure, the second portion having a height substantially equal to the height of the conductive structure.Embodiment 4: The microelectronic device of any one of Embodiments 1-3, wherein the conductive track has a greater conductivity than the conductive structure.Embodiment 5: The microelectronic device of any one of Embodiments 1-4, wherein: the conductive rail further comprises phosphorus, arsenic, antimony, bismuth, boron, aluminum, gallium, carbon, fluorine, chlorine, one or more of bromine and argon; and the conductive structure of the stacked structure is substantially free of fluorine.Embodiment 6: The microelectronic device of any one of Embodiments 1-5, further comprising a conductive liner material between the insulating structure and the conductive structure, wherein the conductive track comprises tungsten, And the conductive lining material includes titanium nitride.Embodiment 7: The microelectronic device of any one of Embodiments 1-6, further comprising a dielectric barrier material between and in direct contact with the insulating structure and the conductive structure.Embodiment 8: The microelectronic device of any one of Embodiments 1-7, further comprising a dielectric material within a replacement gate trench extending through the stacked structure, wherein laterally adjacent to the A lateral dimension of the dielectric material of the insulating structure is greater than a lateral dimension of the dielectric material laterally adjacent to the conductive track.Embodiment 9: A memory device comprising: a stack structure comprising alternating layers of conductive structures and insulating structures; guide posts extending vertically through the stack structure, each guide post comprising a channel structure, The channel structure includes a semiconductive material extending vertically through the stack structure; and a conductive track extending vertically along sidewalls of the conductive structure and the insulating structure of the stack structure, the The conductive rails have a greater electrical conductivity than the conductive structures.Embodiment 10: The memory device of Embodiment 9, wherein the conductive structures of the stacked structure are relatively close to the guide posts, and the conductive rails are relatively far from the guide posts, the conductive structures are composed of A conductive material of one material composition is formed, and the conductive tracks are formed of another conductive material having a second, different material composition.Embodiment 11: The memory device of Embodiment 9 or Embodiment 10, further comprising a data line structure overlying the stacked structure and a source structure underlying the stacked structure, the conductive posts electrically connected to the data line structure and the source structure.Embodiment 12: The memory device of Embodiment 9 or Embodiment 10, wherein the conductive rails extend vertically along portions of the sidewalls of vertically adjacent pairs of the insulating structures.Embodiment 13: The memory device of any of Embodiments 9-12, wherein the conductive rails each exhibit a height equal to or greater than a height of each of the conductive structures in the stacked structures, respectively.Embodiment 14: The memory device of any one of Embodiments 9-13, wherein the conductive rails comprise tungsten, and the conductive structures of the stacked structure comprise one of 
titanium, ruthenium, aluminum, and molybdenum or more.Embodiment 15: A method of forming a microelectronic device, the method comprising: forming a stack structure including vertically alternating conductive structures and insulating structures; forming a channel material and at least one material extending vertically through the stack structure. memory strings of a dielectric material; and forming conductive tracks along outer sidewalls of the conductive structures of the stacked structure, the conductive tracks comprising a material composition different from that of the conductive structures of the stacked structure.Embodiment 16: The method of Embodiment 15, wherein: forming the stacked structure comprises forming the conductive structures of the stacked structure using atomic layer deposition; and forming the conductive tracks comprises using chemical vapor deposition and atomic layer deposition One or more of the depositions form the conductive tracks.Embodiment 17: The method of Embodiment 15, wherein forming the conductive rails comprises removing portions of conductive material of the conductive structures that are exposed in openings formed in the stack structure to allow the conductive traces to be electrically conductive a structure laterally recessed relative to the insulating structure; and growing another conductive material of the conductive track within the opening, the conductive track extending from the conductive structure into the opening and laterally beyond the insulating the outer side walls of the structure.Embodiment 18: The method of Embodiment 17, wherein growing the another conductive material of the conductive track within the opening comprises growing the another conductive material to be with the outside of the insulating structure The walls at least partially overlap.Embodiment 19: The method of Embodiment 17 or Embodiment 18, further comprising forming an inhibitory material to be absorbed within the insulating structure and the At least one of the formation accelerators on the outer sidewalls of the conductive structure.Embodiment 20: The method of any one of Embodiments 15-19, wherein forming the stacked structure comprising alternating conductive and insulating structures comprises forming a conductive liner material and a dielectric around the conductive structures at least one of the barrier materials.Embodiment 21: The method of any one of Embodiments 15-20, wherein forming the conductive track comprises forming grooves in opposite corners of the insulating structure, and forming the conductive track to exhibit substantially The upper part has a rectangular cross-sectional shape, and at least a portion of the conductive track is formed in the groove of the insulating structure.Embodiment 22: The method of Embodiment 15 or Embodiment 16, wherein forming the conductive tracks comprises: forming polysilicon material laterally adjacent to the conductive structures; and converting at least some of the polysilicon material to include Conductive material of beta phase tungsten.Embodiment 23: An electronic system comprising: an input device; an output device; a processor device operably coupled to the input device and the output device; and a memory device operably coupled to the output device a processor device and including at least one microelectronic device including: a string of memory cells extending vertically through a stacked structure including a substantially tungsten-free insulating structure arranged in layers and a vertically alternating 
sequence of conductive structures; and additional conductive structures horizontally adjacent to the conductive structures of the stacked structure and comprising beta phase tungsten, the additional conductive structures having a greater vertical height than the stacked structures The vertical height of the conductive structure.Embodiment 24: The electronic system of Embodiment 23, wherein the additional conductive structures partially surround outer sidewalls of pairs of insulating structures that are vertically adjacent to each other.Embodiment 25: The electronic system of Embodiment 23 or Embodiment 24, wherein the memory device comprises a 3D NAND flash memory device.Although certain illustrative embodiments have been described in connection with the drawings, those skilled in the art will recognize and appreciate that the embodiments encompassed by this disclosure are not limited to those explicitly shown and described herein. Rather, many additions, deletions, and modifications to the embodiments described herein may be made without departing from the scope of the embodiments covered by this disclosure, such as those claimed hereinafter, including legal equivalents . Additionally, features from one disclosed embodiment may be combined with features of another disclosed embodiment while remaining within the scope of the present disclosure. |
The present invention details a method which characterizes an STI fabrication process, and more particularly provides information relating to a variation in the STI sidewall profile between trenches in a middle portion of an array and a trench on an outer portion thereof. The method comprises forming two STI arrays with an STI fabrication process, forming a conductive layer over each array, biasing each conductive layer and determining a current associated therewith. The two current are then utilized to ascertain the variation of interest. |
What is claimed is: 1. A method of characterizing a first shallow trench isolation process, comprising:forming a first array of isolation regions using the first shallow trench isolation process, wherein each of the isolation regions in the first array have a first length, and wherein the first array has a first area associated therewith; forming a second array of isolation regions using the first shallow trench isolation process, wherein each of the isolation regions in the second array have a second length, and wherein the second array has a second area associated therewith, and wherein the first area and second areas are different and the first and second lengths are not equal; forming a first conductive layer over the first array and a second conductive layer over the second array, wherein the first and second conductive layers are electrically isolated from one another; biasing the first conductive layer and determining a first current associated therewith; biasing the second conductive layer and determining a second current associated therewith; evaluating a variation between isolation regions in an interior portion of the first and second arrays and an outer edge of the first and second arrays using the first and second currents. 2. The method of claim 1, wherein evaluating the variation between isolation regions in the interior portion and the outer edge of the arrays comprises using the first and second currents to generate two equations which are solved concurrently to provide a numeric indication reflecting the variation.3. The method of claim 2, wherein the first array has the first area A1, and the first length is L1, and wherein a first equation of the two equations comprisesIg1=JnA1+JeL1, wherein Ig1 comprises the first current, Jn comprises a current density associated with an isolation region in an inner portion of an array fabricated by the first shallow trench isolation process, and Je represents a current per unit length associated with an isolation region on an outer edge of the array.4. The method of claim 3, wherein the second array has the second area A2, and the second length is L2, and wherein a second equation of the two equations comprisesIg2=JnA2+JeL2, wherein Ig2 comprises the second current.5. The method of claim 4, wherein the first and second equations are used to determine Jn and Je.6. The method of claim 5, further comprising repeating the steps of forming first and second arrays with a second shallow trench isolation process, forming first and second conductive layers thereover, biasing the first and second conductive layers to determine first and second currents, determining Jn and Je for the second shallow trench isolation process, and evaluating whether the first or second shallow trench isolation is preferred using Je for the first and second shallow trench isolation processes.7. The method of claim 1, wherein forming the first and second conductive layers comprises:depositing a conductive material; and etching a longitudinal space in the conductive material, thereby separating the conductive material into the first and second conductive layers, respectively. 8. The method of claim 1, wherein biasing the first conductive layer comprises coupling a DC potential of a predetermine value thereto and measuring an injection current into a substrate in which the first array resides, wherein the injection current comprises the first current.9. 
The method of claim 8, wherein biasing the second conductive layer comprises coupling a DC potential of the predetermined value thereto and measuring an injection current into a substrate in which the second array resides, wherein the current comprises the second current.10. A method of characterizing a first shallow trench isolation process, comprising:forming a first array of isolation regions using the first shallow trench isolation process in a first substrate, wherein each of the isolation regions in the first area have a first length (L1), and wherein the first area has a first area (A1) associated therewith; forming a second array of isolation regions using the first shallow trench isolation process in a second substrate, wherein each of the isolation regions in the second array having a second length (L2), wherein the second array has a second area (A2) associated therewith, and wherein L1 does not equal L2 and A1 does not equal A2; forming a first conductive layer over the first array; forming a second conductive layer over the second array; biasing the first conductive layer with a predetermine potential and measuring a first injection current (Ig1) into the first substrate in response thereto; biasing the second conductive layer with the predetermine potential and measuring a second injection current (Ig2) into the second substrate in response thereto; using Ig1 and Ig2 to determine a variation between isolation regions in an interior portion of first and second arrays and an outer edge of the first and second arrays. 11. The method of claim 10, wherein using Ig1 and Ig2 comprises: forming a first equation,Ig1=JnA1+JeL1, wherein Jn represents a current density associated with an isolation region in an array of isolation regions formed with the first shallow trench isolation process, and Je is a current per unit length associated with an outer edge isolation region of an array of isolation regions formed with the first shallow trench isolation process; forming a second equation, Ig2=JnA2+JeL2; and solving the first and second equations concurrently to determine Jn and Je. 12. The method of claim 11, further comprising:repeating the acts of claims 10 and 11 for a second shallow trench isolation process, thereby determining Jn and Je for the second shallow trench isolation process; and selecting one of the first and second shallow trench isolation process as a preferred process based on Je of the first and second shallow trench isolation processes, respectively. |
TECHNICAL FIELD OF INVENTIONThe present invention relates to the characterization of shallow trench isolation structures. In particular, the invention provides a method which can distinguish between an outer edge current component and a normal current component associated with an array of shallow trench isolation structures.BACKGROUND OF THE INVENTIONShallow trench isolation (STI) has become a common isolation method for deep submicron CMOS technologies and for some power devices. The shallow trench isolation process begins with a relatively shallow trench, which is first etched in a silicon (Si) substrate. This trench is refilled with an insulator material and the surface is planarized to complete the isolation structure. During fabrication, the shape of both the top and bottom corners of the trench are important for device performance. Sharp corners with a small bending radius or with faceting can cause high electric fields, high mechanical stress, and non uniform oxide thickness, resulting in a degradation of device performance and gate oxide integrity problems.In addition, the etching process can result in shallow trench isolation regions within an array having different geometric shapes, in particular, the sidewalls of the trenches can be different. For instance, an adjacent trench formation or etching process, can influence the manner in which a given trench forms. This is especially true for STI regions located at the ends, or edges, of an array of STI regions and will be explained in detail later.Prior art focused on trench corner characterization and developing process controls which are intended to minimize extreme bends in the shape of both the top and bottom corners of a trench. Models of trench shape and formation using experimental and numerical methods are available; concentrating on detection of sharp corners, extreme bends in trench sidewalls, and defects in the refill process. In addition, prior art modeled the shape of the trenches and the thickness of the oxide as accurately as possible. However, the prior art does not characterize differences between the shape of shallow trench isolation regions located on the edge of an STI array versus STI regions located in the center of an array. Prior art considered this issue as negligible, however as the industry proceeds with downward scaling of electronic devices, these differences need to be taken into account and characterized. This becomes apparent when considering the surface area of a transistor gate located adjacent to an STI region, where the STI region is located on an edge of an STI, and comparing it to the surface area of a transistor gate where the STI region is located within the center of such an array.SUMMARY OF THE INVENTIONThe following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended to neither identify key or critical elements of the invention, nor delineate the scope of the invention. Its primary purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.The present invention details an approach in which the properties of an STI array can be easily characterized, including differences between the edge regions and the isolation regions located in the center of an STI array. 
This approach takes advantage of an intrinsic current enhancement inherent from the STI regions located on the outer edges of an STI array. A numerical method is then employed, which can distinguish two current components and thus provides vital characteristics of the STI sidewall structure. This approach allows for easy identification of the different characteristics for an STI array.To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects and implementations of the invention. These are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a perspective view of an exemplary prior art flash memory device in which shallow trench isolators are utilized to electronically isolate source and drain regions, respectively, of memory cells along a word line thereof.FIG. 2 is a cross sectional view of the prior art flash memory device of FIG. 1 taken along line 2-2'.FIG. 3 is a perspective view of an exemplary flash memory device in which the effects of down scaling STI regions is illustrated.FIG. 4 is a cross sectional view of a flash memory device of FIG. 3 taken along line 3-3' in which the variations in STI regions is illustrated.FIGS. 5A & 5B are cross sectional views of prior art flash memory devices which compares variations in the shape of STI edge regions and the effects of down scaling the STI array.FIG. 6 is a perspective view emphasizing the effects of STI sidewall variation on edge transistor surface areas employed in an array.FIG. 7 is a cross sectional view emphasizing an STI region formed on the STI array edge versus an STI formed within center portion of an STI array.FIGS. 8A & 8B are top and sectional views, respectively, of an STI array.FIG. 9 is a flow chart illustrating a method of characterizing an STI process in accordance with the present invention.FIGS. 10A & 10B are top and cross sectional views, respectively, for an STI with a first isolation area.FIGS. 11A & 11B are top and cross sectional views respectively, for an STI with a second differing isolation area.FIG. 12 is a perspective view of two differing STI arrays having a bias applied thereto.DETAILED DESCRIPTION OF THE INVENTIONThe present invention will now be described with respect to the accompanying drawings, in which like numbered elements represent like parts. In order to facilitate an understanding of various advantageous features of the present invention, a brief discussion of the prior art will be provided. Subsequently, the various features of the present invention will be discussed in detail in conjunction with several exemplary figures.As can be seen in FIG. 1, an exemplary, simplified flash memory device comprises several transistors or stacked gate cells, separated by trench isolation regions 1. Each cell comprises a drain 2, a source 3 and a stacked gate 4. The drain 2 and source 3 of each cell typically comprise an N-type material embedded in a P-type substrate 6. The stacked gate 4 may include several layers (or stacks) of oxides and conductive materials. 
Typically the memory cells are arranged, for example, as shown in FIG. 1, with an insulating trench, or STI region 1, electrically separating groups (or banks) of cells 5 along a word line.FIG. 2 provides an exemplary cross sectional illustration of the flash memory device of FIG. 1 taken along line 2-2'. From FIG. 2 it is clear that trench isolation regions 1 can be formed in a relatively consistent manner, when devices and regions are not scaled down and therefore the group of memory cells 5 performance is likewise consistent. In other words, the side wall shape of the outer trenches 8, is generally similar to that of the inner trench 7, because the shallow trench isolators have relatively large distances between them, thus minimizing their influence on an adjacent STI etching process.As flash memory devices are scaled down, or include more devices, device isolation techniques have to be modified. Shallow trench isolation (STI) has become a common isolation method for deep submicron CMOS technologies. The shallow trench isolation process begins with a relatively shallow trench, which is first etched in a silicon (Si) substrate. This trench is refilled with an insulator material and the surface is planarized to complete the isolation structure. During fabrication, the shape of both the top and bottom corners of the trench become important for device performance. Sharp corners with a small bending radius, or with faceting, can cause high electric fields, high mechanical stress and non uniform oxide thickness, resulting in a degradation of device performance and gate oxide integrity problems. In addition, the etching processes can result in shallow trench isolators with different geometric shapes, in particular, the sidewalls of the trenches can be different. This occurs during the formation or etching process of the array of respective trenches. An adjacent trench formation, or etching, process can influence the manner in which a given trench forms. This is especially true for STI regions located at the ends or edges of an array of such regions and is illustrated, for example, in FIGS. 3 and 4.FIG. 3 illustrates an exemplary flash memory device that has been scaled down or includes more devices. This scaled down flash memory device comprises several memory cells arranged in a manner similar to that of FIG. 1, separated by shallow trench isolation regions. Each memory cell comprises a drain 2, a source 3 and a stacked gate 4. The drain 2 and source 3 of each memory typically comprises an N-type material embedded in a substrate 6. The gate 4 may include several layers (or stacks) of oxides and/or conductors such as polysilicon. The memory cells are arranged, as shown in FIG. 3, with an insulating trench, or shallow trench isolation region, electrically separating memory cells 11, 12 along a word line. As the STI regions 10 decrease in size, variations in the side portions thereof on those regions on the outside of the array have a larger influence on the operation of memory cells associated therewith. Note that such variations in STI trench profiles in FIG. 3 and other figures are not necessarily drawn to scale, but rather are illustrated as such for purposes of clarity.FIG. 4 provides an exemplary cross sectional illustration of the flash memory device of FIG. 3 taken along 3-3'. As clearly seen in FIG. 
4, the shallow trench isolation regions 10 located on the edges of the array, have a different sidewall shape 9 than the sidewalls associated with the shallow trench isolation regions located within the center of the STI array due to micro-loading in the formation thereof. This difference can cause the memory cells 11 associated with the ends of the STI array to behave differently than the memory cells 12 located in the center of the STI array. This is especially true if the surface areas 13 of source/drain regions of the edge memory cells 11 are not equal or similar to the source/drain regions of memory cells 11 in the center of the array (e.g., different surface areas 13 can lead to different memory cell injection currents causing non-uniformities in the programming and erasing of memory cells across the memory cell array).FIG. 5A provides a cross sectional illustration of the prior art device of FIG. 1 while FIG. 5B provides a cross sectional illustration of the device of FIG. 3. These figures are presented in order to emphasize the trench shape differences 8, 9 which can occur as memory cell density is increased. As can be seen, the source/drain areas 14 of the outside memory cell requires a certain degree of process control. However, as more memory cells are added to a device and STI regions become smaller, this surface area 13 becomes more difficult to control; therefore the process controls must become more stringent, including identification of key process parameters.FIG. 6 presents an exemplary three dimensional illustration emphasizing the effects of device downscaling on edge transistor surface areas. A shallow trench isolation region, formed in the center portion of a STI array has a shape similar to 20, in which the STI's top portion 23 corresponds to 26 representing an area of the STI region. The surface area of the STI's top portion 23, effectively blocks injection current from entering a portion of a transistor gate of cell adjacent the region 20. However, a STI region formed on an array's edge would include area 21, for example, resulting in STI top portions 23 and 24 corresponding to a surface area 26 and 27. Therefore an STI region on an array edge reduces a surface area of the source/drain regions 25 associated therewith. Since a current drive of a transistor or memory cell is a function of the width/length ratio (W/L) of the device, the variation in shape of STI regions between the center of an STI array and an edge thereof causes corresponding variations in transistor or memory cell behavior across the array, which in many cases is undesirable.FIG. 7 provides a cross sectional view of an STI region formed on the STI array edge 31 versus an STI formed within center portion of an array 30 (with the differences being amplified or exaggerated for purposes of illustration). Trenches formed within the center of the array are shaped similar to 30. Trenches formed on an array's edge undergo a different formation 31, for example. This formation results in a trench shape similar to 31 and occurs generally on one side of the trench (the side which does not have a trench adjacent to it). After the trench is formed, the array is filled with an insulator material 32 and the STI is complete.FIGS. 8A and 8B illustrate top and cross sectional views of an STI array, respectively. Because the outer edge STI regions 40 and 41 have a different intrinsic shape 43, memory cells associated therewith will have different electrical characteristics from the center portions of the STI array 42. 
The intrinsic shape 43 of the STI outer edges 41 causes some injection current variation for transistors or memory cells associated therewith versus devices associated with center STI shapes 42.FIG. 9 is a flow chart illustrating a method 100 of characterizing an STI process in accordance with the present invention. As discussed above, an STI process employed in generating an STI array will produce variations between STI regions in the center and on the outer edges of the array. By characterizing an STI process, various processes may be evaluated to determine which type of STI process provides optimal uniformity with respect to the position of an STI region within an STI array. While the exemplary method 100 is illustrated and described herein as a series of acts or events, it will be appreciated that the present invention is not limited by the illustrated ordering of such acts or events, as some acts or events may occur in different orders or concurrently with other acts or events apart from that shown or described herein, in accordance with the invention. In addition, not all illustrated acts or events may be required to implement a methodology in accordance with the present invention. Moreover, it will be appreciated that the method 100 may be implemented in association whit apparatus and systems illustrated and described herein and as well as in association with other systems not illustrated.In FIG. 9, the method 100 begins by forming a first array of STI regions at 102, wherein each of the STI regions within the array generally have a first length and the first array has a first area associated therewith. For example, as illustrated in FIGS. 10A and 10B, which represent a top plan view and cross section, respectively, the STI fingers 50 each have a length L1 and array width W1 such that an array area A1 is calculated by L1*W1. Continuing in FIG. 9, a second array of STI regions is formed at 104, with the same STI fabrication that was used to form the first array, wherein the second array has a second length and a second area associated, wherein the first and second lengths are different and the first and second areas are different, respectively. For example, as illustrated in FIGS. 11A and 11B, a second array of STI regions 51 have a second length L2 and a second area A2 calculated by L2*W2, wherein L1<>L2 and A1<>A2.The method 100 continues at 106, wherein first and second conductive layers are formed over the first and second STI arrays, respectively. In one example, as illustrated FIG. 12, if the first and second STI arrays are integrated onto a single substrate, the first and second conductive layers may comprise a single conductive layer such as a metal or polysilicon film separated into two sections 61 and 64 with a space or insulating region 63 disposed therebetween. As illustrated in FIG. 12, the first conductive layer 64 overlies the first STI array of regions 50 while the second conductive layer 61 overlies the second STI array of regions 51.At 108, the first conductive layer 64 is biased with a voltage 62 and a first current (Ig1) is determined, for example, using a current meter 65. At 110, the second conductive layer 61 is biased with a voltage 60, and a second current (Ig2) is determined using, for example, a current meter 66. 
The method 100 then ascertains an amount of STI region variation between STI regions on an outer edge of an array from STI regions not on the outer edges of arrays (for the STI fabrication process employed to form the first and second arrays) using the first and second currents at 112.In accordance with one aspect of the present invention, act 112 comprises using the measured currents Ig1 and Ig2, to set up two equations with two unknowns. For the first array:Ig1=JnA1+JeL1,wherein, Jn comprises the current density associated with one finger or region of the first STI array, and Je represents a current per unit length associated with an outer edge of an STI region of the outer edge of the first array. Similarly, for the second array:Ig2=JnA2+JeL2.Since A1, L1, A2 and L2 are known and Ig1 and Ig2 have been measured, we have two equations with two unknowns (Jn and Je), solving the two equations, Jn and Je are determined wherein Je provides information relating to the character of the outer edge of an STI array made by the STI fabrication process being evaluated. By repeating such analysis for additional STI arrays fabricated with different STI fabrication processes, STI fabrication processes can be characterized with respect to the variation in STI regions from outer edges to inside portions of the array.Although the invention has been shown and described with respect to a certain implementation or implementations, it will be appreciated, by those skilled in the art, that equivalent alterations and modifications will occur, to others skilled in the art, upon the reading and understanding of this specification and the annexed drawings. In particular, regard to the various functions performed by the above described components (assemblies, devices, circuits, etc.), the terms (including a reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary embodiments of the invention. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations or applications of the invention, such features may be combined with one or more other features of the other implementations, as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the term, "includes", "has", "having", and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the terms "comprises" and "comprising". |
A transformer (100, 180, 500) is described. The transformer comprises a top conductive coil (106, 206, 506), a bottom conductive coil (102), and a dielectric layer (104, 504) separating the top conductive coil from the bottom conductive coil. The top conductive coil comprises an outermost portion (110, 510) having multiple segments (142, 144, 148). The segments are configured to reduce the peak electric field in a region of the dielectric layer near the outer edge of the top conductive coil. The top conductive coil may comprise a first lateral segment (142), and a second lateral segment (144) that is laterally offset with respect to the first lateral segment. The first lateral segment may be closer to the center of the top conductive coil than the second lateral segment, and may be closer to the bottom conductive coil than the second lateral segment. The transformer may be formed using microfabrication techniques. |
CLAIMSWhat is claimed is:1. An apparatus comprising:a first conductive coil;a second conductive coil; anda dielectric layer separating the first conductive coil from the second conductive coil; wherein the first conductive coil comprises an outermost portion having a non-planar bottom surface.2. The apparatus of claim 1, wherein the outermost portion comprises a first lateral segment and a second lateral segment connected to the first lateral segment and laterally offset from the first lateral segment.3. The apparatus of claim 2, wherein at least a portion of the first lateral segment is closer to a center of the first conductive coil than the second lateral segment.4. The apparatus of claim 3, wherein the first lateral segment is closer to the second conductive coil than the second lateral segment.5. The apparatus of claim 1, wherein at least a portion of the outermost segment sits on a raised portion of the dielectric layer.6. The apparatus of claim 5, wherein the raised portion of the dielectric layer is formed on a dielectric ridge.7. The apparatus of claim 1, wherein the first conductive coil has a first radius and the second conductive coil has a second radius, and wherein the second radius is less than the first radius.8. The apparatus of claim 1, wherein the dielectric layer is a first dielectric layer and has a first dielectric constant, and wherein the second conductive coil is encased in a second dielectric5853258.1 layer having a second dielectric constant, wherein the second dielectric constant is greater than the first dielectric constant.9. The apparatus of claim 1, wherein the first conductive coil is a spiral.10. The apparatus of claim 1, wherein the first conductive coil and the second conductive coils are disposed on a semiconductor substrate.11. An apparatus comprising:a first conductive coil;a second conductive coil;a dielectric layer separating the first conductive coil from the second conductive coil; and a controller electrically coupled to the first conductive coil;wherein the first conductive coil comprises an outermost portion having a non-planar bottom surface.12. The apparatus of claim 11, wherein the outermost portion comprises a first lateral segment and a second lateral segment connected to the first lateral segment and laterally offset from the first lateral segment.13. The apparatus of claim 12, wherein at least a portion of the first lateral segment is closer to a center of the first conductive coil than the second lateral segment.14. The apparatus of claim 13, wherein the first lateral segment is closer to the second conductive coil than the second lateral segment.15. The apparatus of claim 11, wherein the controller comprises a motor driver.16. The apparatus of claim 11, wherein at least a portion of the outermost segment sits on a raised portion of the dielectric layer.17. A method for fabricating an isolator, the method comprising:forming a first metallization layer on a semiconductor substrate, and patterning the first metallization layer to obtain a first conductive coil;5853258.1 forming a dielectric layer on the semiconductor substrate to cover the first conductive coil;forming a dielectric ridge; andforming a second metallization layer on the dielectric layer, and patterning the second metallization layer to obtain a second conductive coil such that an outermost portion of the second conductive coil partially lies over the dielectric ridge.18. 
The method of claim 17, wherein patterning the second metallization layer comprises forming a first portion of the outermost portion to lie on the dielectric layer and forming a second portion of the outermost portion to lie on the dielectric ridge, such that the first portion is closer than the second portion with respect to a center of the first second conductive coil.19. The method of claim 17, wherein the dielectric ridge is formed on the dielectric layer.20. The method of claim 17, wherein the dielectric layer covers, at least partially, the dielectric ridge.5853258.1 |
ISOLATION TRANSFORMER FOR INCREASED VOLTAGE OPERATIONS AND RELATED METHODSCROSS-REFERENCE TO RELATED APPLICATIONS[0001] The present application is a continuation claiming the benefit under 35 U.S.C. § 120 of U.S. Pat. App. Serial No. 15/347,724, filed November 9, 2016 under Attorney Docket No. G0766.70126US00 and entitled "MAGNETIC ISOLATORS FOR INCREASED VOLTAGE OPERATIONS AND RELATED METHODS," which is hereby incorporated herein by reference in its entirety.FIELD OF DISCLOSURE[0002] The present application relates to microfabricated magnetic isolators.BACKGROUND[0003] Some magnetic isolators include a primary winding and a secondary winding. Typically, a signal is provided to the primary winding of the isolator, and is coupled via magnetic induction to the secondary winding.SUMMARY OF THE DISCLOSURE[0004] According to some embodiments, a magnetic isolator is described. The magnetic isolator may comprise a top conductive coil, a bottom conductive coil, and a dielectric layer separating the top conductive coil from the bottom conductive coil. The top conductive coil may comprise an outermost portion having multiple segments. The segments may be configured to reduce the peak electric field in a region of the dielectric layer near the outer edge of the top conductive coil. The top conductive coil may comprise a first lateral segment, and a second lateral segment that is laterally offset with respect to the first lateral segment. The first lateral segment may be closer to the center of the top conductive coil than the second lateral segment, and may be closer to the bottom conductive coil than the second lateral segment. The magnetic isolator may be formed using microfabrication techniques.[0005] According to one aspect of the present application, an apparatus is provided. The apparatus may comprise a first conductive coil, a second conductive coil, and a dielectric layer separating the first conductive coil from the second conductive coil, wherein the first conductive coil comprises an outermost portion having a non-planar bottom surface. [0006] According to another aspect of the present application, an apparatus is provided. The apparatus may comprise a first conductive coil, a second conductive coil, a dielectric layer separating the first conductive coil from the second conductive coil, and a controller electrically coupled to the first conductive coil, wherein the first conductive coil comprises an outermost portion having a non-planar bottom surface.[0007] According to yet another aspect of the present application, a method for fabricating an isolator is provided. The method may comprise forming a first metallization layer on a semiconductor substrate, and patterning the first metallization layer to obtain a first conductive coil, forming a dielectric layer on the semiconductor substrate to cover the first conductive coil, forming a dielectric ridge, and forming a second metallization layer on the dielectric layer, and patterning the second metallization layer to obtain a second conductive coil such that an outermost portion of the second conductive coil partially lies over the dielectric ridge.BRIEF DESCRIPTION OF DRAWINGS[0008] Various aspects and embodiments of the application will be described with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale. Items appearing in multiple figures are indicated by the same reference number in all the figures in which they appear.[0009] FIG. 
1A is a cross sectional view illustrating a magnetic isolator, according to some non- limiting embodiments.[0010] FIG. IB is a perspective cutaway view illustrating a non-planar outermost portion of a conductive coil as may be used in an isolator, according to some non-limiting embodiments.[0011] FIG. 1C is a cross sectional view illustrating an alternative implementation of a magnetic isolator, according to some non-limiting embodiments.[0012] FIG. ID is a top view illustrating a conductive coil as may be used in an isolator, according to some non-limiting embodiments.[0013] FIG. 2 is a cross sectional view illustrating an alternative magnetic isolator to that of FIG. 1A, according to some non-limiting embodiments.[0014] FIG. 3 is a flowchart illustrating a method for microfabricating a magnetic isolator, according to some non-limiting embodiments.[0015] FIGs. 4A-4G are cross sectional views collectively illustrating a method for fabricating a magnetic isolator, according to some non-limiting embodiments.[0016] FIG. 5 is a cross sectional view illustrating another magnetic isolator, according to some non-limiting embodiments.5853258.1 [0017] FIG. 6A is a top view illustrating a photomask for forming a dielectric ridge, according to some non-limiting embodiments.[0018] FIG. 6B is a top view illustrating another photomask for forming a dielectric ridge, according to some non-limiting embodiments.[0019] FIG. 7 is a block diagram illustrating a system comprising a magnetic isolator, according to some non-limiting embodiments.DETAILED DESCRIPTION[0020] Applicant has appreciated that for a microfabricated magnetic isolator having primary and secondary coils separated by a dielectric layer, the maximum voltage at which the magnetic isolator can be operated may be increased by reducing the probability of electric breakdown in the dielectric layer. Electric breakdown can occur when the local electric field within a dielectric material exceeds the material's breakdown electric field. When electric breakdown occurs, a conductive path is formed within the dielectric material. Such a conductive path may electrically short the primary and secondary coils of the isolator, thus preventing the magnetic isolator from providing the desired isolation. Applicant has appreciated that certain regions of the dielectric material are particularly susceptible to electric breakdown, due to a localized peak in the electric field. Such regions of large localized electric field may reside near the outer edge of a coil of the isolator.[0021] According to one aspect of the present application, an outermost portion of a coil of a microfabricated magnetic isolator may be shaped in a manner which reduces the peak electric field near the edge of the coil. In this way, the probability of electric breakdown in the dielectric material may be reduced. In at least some embodiments the reduction may be significant.Consequently, an isolator exhibiting such a reduced peak electric field may withstand larger voltages, compared with conventional microfabricated magnetic isolators.[0022] In some embodiments, the peak electric field may be reduced by shaping the outermost portion of a conductive coil to include first and second lateral segments coupled together but offset from one another. The structure may resemble a stair step in some embodiments. In this way, the peak electric field may be reduced and/or moved compared to conductive coils having a single, planar segment as an outermost portion. 
In some embodiments, the isolator may be used as an ISO coupler.[0023] The aspects and embodiments described above, as well as additional aspects and embodiments, are described further below. These aspects and/or embodiments may be used5853258.1 individually, all together, or in any combination of two or more, as the application is not limited in this respect.[0024] FIG. 1A illustrates a magnetic isolator, according to some non-limiting embodiments. Magnetic isolator 100, also referred to herein simply as an "isolator" or a "transformer", may comprise a substrate 101, a bottom conductive coil 102, a dielectric layer 104, and a top conductive coil 106. In some embodiments, the top conductive coil may serve as the primary winding, and the bottom conductive coil may serve as the secondary winding. In other embodiments, the opposite configuration may be used.[0025] The terms "bottom" and "top" are used herein to refer to the relative location of the conductive coils with respect to the substrate along the y-axis. In particular, the term bottom will be used to indicate the conductive coil that is closer to the substrate and the term top to indicate the conductive coil that is farther from the substrate. In some embodiments, substrate 101 may comprise a semiconductor substrate, such as a silicon substrate. However, other materials may be used.[0026] Top conductive coil 106 may be formed on dielectric layer 104. Top conductive coil 106 may comprise one or more loops, and may be shaped as a spiral or according to any other suitable configuration. In some embodiments, the loops may be connected to each other, thus forming a continuous winding. Top conductive coil 106 may comprise any suitable conductive material, such as aluminum, copper, gold, silver, or chromium. During operation of magnetic isolator 100, an alternating current (AC) signal may be applied to a conductive coil (either the top or the bottom conductive coil), and an AC electric current may flow in the conductive coil. Consequently, a magnetic field may be generated. The generated magnetic field may have a component along the y-axis, and may be coupled to the opposite conductive coil, thus giving rise to an AC electromotive force in the opposite conductive coil. In this way, the AC signal may be coupled between the conductive coils, while at the same time direct current (DC) signals may be blocked via galvanic isolation. The ability to block DC signals may be desirable in applications in which two or more electric circuits must communicate, but their grounds are at different potentials. Galvanically isolating the conductive coils may prevent accidental currents. For example, galvanic isolation may prevent current flowing through a person's body, even if the person physically contacts the secondary portion of the magnetic isolator. Magnetic isolator 100 may be configured to operate at voltages equal to or greater than 600V, equal to or greater than 900V, equal to or greater than 1200V, equal to or greater than 1500V, or equal to or greater than 1800V.5853258.1 [0027] Conductive coil 106 may comprise an outermost portion 110, which may correspond to at least a portion of the outer periphery of the top conductive coil. Conductive coil 106 may be configured to limit the magnitude of the peak electric field in a region 115 near the outer edge of the conductive coil. 
Applicant has appreciated that regions of the dielectric layer near the outer edge of a conductive coil exhibit electric fields that are greater than in other regions of a magnetic isolator. This may be due in part to the outer edge not being bounded on both sides by another conductive portion of the conductive coil at the same potential. By contrast, inner portions of the conductive coil may be shielded by neighboring portions of the conductive coil at approximately equal potential, thus preventing undesirably high electric fields near those inner portions. Thus, outer regions of the conductive coil may be particularly susceptible to electric breakdown. To limit the magnitude of the peak electric field, outermost portion 110 may comprise a stepped portion, a Z-shaped portion, an L-shaped portion, a C-shaped portion, stairlike (or stair step) shaped portion, or other configurations comprising a non-planar portion, as can be seen in FIG. 1A.[0028] FIG. IB illustrates an example of an outermost portion 110 of the conductive coil 106 of FIG. 1A, according to some non-limiting embodiments. Outermost portion 110 may comprise lateral segment 142, lateral segment 144, and segment 148. As illustrated, outermost portion 110 may be disposed on dielectric layer 104.[0029] The lateral segments may be offset with respect to each other along the x-direction. Lateral segments 142 and 144 may be connected to each other by segment 148, and may be offset from one another along the x-axis. It can be seen that while segments 142 and 144 are offset from each other along the x-axis, in some embodiments there may be some overlap of those segments in the lateral direction (the x-direction). In some embodiments, lateral segment 144 may be disposed on dielectric ridge 140. Dielectric ridge 140 may comprise the same material as dielectric layer 104, though the application is not limited in this respect. In some embodiments, outer edge 154 of lateral segment 144 may extend beyond outer edge 152 of lateral segment 142. In some embodiments, outer edge 152 may be closer to the center of conductive coil 106 than outer edge 154 along the x-axis. In some embodiments, bottom edge 164 of lateral segment 144 may extend beyond bottom edge 162 of lateral segment 142. In some embodiments, bottom edge 162 may be closer to the bottom conductive coil 102 than bottom edge 164 along the y-axis. Bottom edge 162 and bottom edge 164 may collectively form a non- planar bottom surface of outermost portion 110.[0030] While FIG. IB illustrates a dielectric portion 140 being higher (along the y-axis) than lateral segment 142, the opposite configuration is also possible. In the latter configuration,5853258.1 lateral portions 142 and 144 may partially overlap with each other along the x-axis. In some embodiments, outermost portion 110 may comprise a curved structure, such that the outer edge of the outermost portion 110 is curved. According to one aspect of the present application, the peak electric field arising in a region near outermost portion 110 may be lower than the peak electric field that would arise if outermost portion 110 was replaced with a planar portion (e.g. , a portion having a rectangular cross section just like the illustrated inner portions of the conductive coil 106).[0031] Referring back to FIG. 1A, conductive coil 106 may be connected to pad 120. Pad 120 may comprise a conductive material, and may be disposed within conductive coil 106 in some embodiments. Pad 120 may be bonded to a wire 122. 
In this way, conductive coil 106 may be electrically coupled to a device disposed outside substrate 101.[0032] Bottom conductive coil 102 may be formed as a metallization layer on a surface of the substrate. Conductive coil 102 may comprise one or more loops, and may be shaped as a spiral, or may have any other suitable configuration. In some embodiments, the loops may be connected to each other, thus forming a continuous winding. Conductive coil 102 may comprise any suitable conductive material, such as aluminum, copper, gold, silver, or chromium.Conductive coil 102 may be connected to a pad 132. Pad 132 may comprise a conductive material, and may be exposed by forming of an opening on a surface of the substrate. Pad 132 may be bonded to a wire 134. In this way, conductive coil 102 may be electrically coupled to a device disposed outside substrate 101. Conducive coil 102 may be connected to pad 132 through metal wiring (or traces) 130. Metal wiring 130 may be connected to conductive coil 102 through one or more vias.[0033] Dielectric layer 104 may be disposed on substrate 101, and may cover, at least partially, conductive coil 102. Dielectric layer 104 may comprise one or more materials having a large electric breakdown (e.g. , greater than lOOKV/mm, greater than 500KV/mm, greater than lOOOKV/mm, greater than 2000KV/mm, greater than 3000KV/mm, greater than 4000KV/mm, between 2000KV/mm and 5000KV/mm or between any suitable range within such range). In some embodiments, dielectric layer 104 may comprise polyimide. In some embodiments, dielectric layer 104 may comprise more than one layer of dielectric material. In this way, if one of the dielectric layers experiences electrical breakdown, the presence of additional layers may mitigate the probability of forming a conductive path between the top conductive coil and the bottom conductive coil. Such multiple dielectric layers may be made from the same material (e.g. , polyimide), or from different materials.5853258.1 [0034] Alternatively, the bottom conductive coil of a magnetic isolator may be formed on a surface of a dielectric layer. Magnetic isolator 180, which is illustrated in FIG. 1C, may comprise a top conductive coil, dielectric layer 104, a bottom conductive coil, dielectric layer 105, and substrate 101. In some embodiments, dielectric layers 104 and 105 may be formed from the same dielectric material, such as polyimide. However, different dielectric materials may be used in other embodiments. In some embodiments, the bottom conductive coil may be formed from two conductive layers, though any other suitable number of conductive layers may be used. In some embodiments, the lower conductive layer of the bottom conductive coil comprises titanium tungsten. In some embodiments, the upped conductive layer of the bottom conductive coil comprises gold. The outermost portion of the bottom conductive coil may comprise segments 173 and 174. Segment 173 may extend outwardly from the outer edge 175 of segment 174. The use of segment 173 may reduce the peak electric field near the outermost portion of the bottom conductive coil. In some embodiments, both the top and the bottom conductive coils may have multi segmented outermost portions as illustrated in FIG. 1C. In other embodiments, only one conductive coil (either the top or the bottom conductive coil) may have multi segmented outermost portions. 
Thus, it should be appreciated that magnetic isolators according to aspects of the present application may include a top coil with a structure of the types described herein to reduce peak electric field, a bottom coil with a structure of the types described herein to reduce peak electric field, or both top and bottom coils with structures of the types described herein to reduce peak electric field. Stated another way, the configurations of top coils described herein may be applied to bottom coils as well. In some embodiments, the top conductive coil may be formed using a bottom conductive layer 161 and a top conductive layer 162. The bottom conductive layer may comprise titanium tungsten in some embodiments. The top conductive layer may comprise gold in some embodiments.[0035] FIG. ID is a top view illustrating top conductive coil 106 according to an embodiment of the present application. As illustrated, conductive coil 106 may comprise an outermost portion 110, which may represent the outer periphery of the conductive coil. That is, in the illustrated example the conductive coil is a spiral, and the outermost portion 110 is the largest loop of the spiral. Outermost portion 110 may comprise a lateral segment (e.g. , lateral portion 144) in a different plane than the remainder of conductive coil 106. For example, a lateral segment positioned further from the underlying substrate than the other portions of the conductive coil may be provided, as shown in FIGs. 1A- 1B.[0036] According to one aspect of the present application, the probability of electric breakdown in the dielectric layer may be reduced by moving the location of the peak electric field nearer5853258.1 bottom conductive coil 102, and by shielding bottom conductive coil 102 with a material having a dielectric constant greater than that of dielectric layer 104. In this way, the peak electric field may be reduced by the high-dielectric constant material, thus resulting in an attenuation of its magnitude.[0037] FIG. 2 illustrates a magnetic isolator 200 having a dielectric layer 225, a bottom conductive coil 102, a top conductive coil 206 and a dielectric layer 104. Dielectric layer 225 may have a higher dielectric constant than dielectric layer 104, and may contain at least a portion of bottom conductive coil 102 within its boundaries. In some embodiments, dielectric layer 225 may comprise silicon nitride.[0038] To move the peak electric field away from top conductive coil 206, that coil may have a radius Ri greater than the radius R2of bottom conductive coil 102. While not shown in FIG. 2, magnetic isolator 200 may comprise an outermost portion of the type described in connection with FIGs. 1A-1C. Thus, according to an aspect of the present application a magnetic isolator includes first and second coils, where one of the two coils has a larger radius than the other coil, and wherein one of the two coils has an outermost portion that is a stair step shape, like that shows in FIGs. 1A and IB. In some embodiments, the same coil has the larger radius also has the outermost portion with a stair step shape.[0039] In some embodiments, magnetic isolators of the types described herein may be microfabricated using semiconductor fabrication techniques. FIG. 3 is a flowchart illustrating a method 300 for microfabricating a magnetic isolator of the type described herein, according to some non-limiting embodiments. At act 302, a semiconductor substrate may be obtained. In some embodiments, the semiconductor substrate may comprise silicon. 
At act 304, a bottom conductive coil may be formed on the semiconductor substrate. In some embodiments, the bottom conductive coil may be formed by creating a metallization layer on a surface of the semiconductor substrate, and by patterning the metallization layer to obtain the desired shape. At act 306, a dielectric layer may be deposited on the semiconductor substrate. The dielectric layer may comprise polyimide in some embodiments. The dielectric layer may cover, at least partially, the bottom conductive coil. At act 308, a dielectric ridge may be formed on the dielectric layer. The dielectric ridge may have a curved shaped, at least in some of its portions. In some embodiments, the dielectric ridge may be shaped to form a circle, or at least a portion of a circle. At act 310, a top conductive coil may be formed on the dielectric layer. In some embodiments, the top conductive coil may be formed by creating a metallization layer on a surface of the dielectric layer, and by patterning the metallization layer to obtain the desired shape. The top conductive coil may comprise an outermost portion lying, at least in part, on the dielectric ridge.5853258.1 [0040] FIGs. 4A-4G are cross sectional views illustrating a non-limiting example of a method for microfabricating a magnetic isolator of the type described herein. In the process step illustrated in FIG. 4A, a dielectric material 103 may be disposed on substrate 101 (not shown in FIG. 4A). Dielectric material 103 may comprise silicon oxide in some embodiments. Bottom conductive coil 102, metal wiring 130 and pad 132 may be formed using photolithographic techniques. To provide access to the pad from an external circuit, an opening may be formed in dielectric material 103 in correspondence with pad 132.[0041] In the process step illustrated in FIG. 4B, dielectric layer 104 may be formed on substrate lOlto cover, at least partially, conductive coil 102. Dielectric layer 104 may be comprise multiple dielectric materials in some embodiments, and may be formed using any suitable deposition technique, such as physical vapor deposition (PVD) or spin coating.[0042] In the process step illustrated in FIG. 4C, dielectric ridge 140 may be formed. Dielectric ridge 140 may be formed by depositing a layer of dielectric material (e.g. , polyimide), and by patterning the dielectric material to obtained the desired shape.[0043] In the process step illustrated in FIG. 4D, a layer of dielectric material 150 may be deposited on dielectric layer 104. Such a dielectric material may have a greater dielectric constant than dielectric material 104. The use of dielectric material 150 may improve the endurance of the device with respect to the application of short voltages spikes. In some embodiments, dielectric material 150 may comprise silicon nitride.[0044] In the process step illustrated in FIG. 4E, top conductive coil 106 may be formed. In some embodiments, top conductive coil 106 may be formed by electroplating. The outermost portion of top conductive coil 106 may be formed in part on dielectric ridge 140, thus forming a plurality of segments as described in connection with FIG. IB.[0045] In the process step illustrated in FIG. 4F, a dielectric material 160 may be formed to cover top dielectric coil 106. Dielectric material 160 may comprise polyimide or silicon oxide. An opening may be formed in dielectric material 160 in correspondence with pad 120.[0046] In the process step illustrated in FIG. 
4G, dielectric material 150 may be removed in regions not covered by dielectric layer 104. Dielectric material 150 may be removed using etching techniques.[0047] In the embodiments illustrated in FIGs. 1A-1C, dielectric ridge 140 may be formed on a surface of dielectric layer 104. Furthermore, the outermost portion 110 may be partially formed on dielectric ridge 140. As a result, the outermost portion of the top conductive coil may exhibit a multi-segment configuration. As described above, such a configuration may limit the peak electric field. In other embodiments, a dielectric layer may be formed between the dielectric5853258.1 ridge and the top conductive coil. In this way, the top conductive coil may be formed on a smoother surface (e.g. , a continuous surface without discontinuities in the area where the outermost portion is formed) compared with the embodiments illustrated in FIGs. 1A- 1C.[0048] FIG. 5 is a cross sectional view of a magnetic isolator, according to some non-limiting embodiments. Magnetic isolator 500 may comprise substrate 100, bottom conductive coil 102, dielectric ridge 540, dielectric layer 504, and top conductive coil 506. The dielectric ridge 540 may be formed using polyimide in some embodiments, though other materials may be used. In some embodiments, the dielectric ridge may have a curved shaped, at least in some of its portions. In some embodiments, the dielectric ridge may be shaped to form a circle, or at least a portion of a circle. In some embodiments, the dielectric ridge may be formed on a surface of substrate 100. For example, the dielectric ridge may be formed after the bottom conductive coil 102 has been formed. Dielectric layer 504 may cover, at least partially, dielectric ridge 540. Dielectric layer 504 may comprise polyimide in some embodiments, though other materials may be used. In some embodiments, dielectric layer 504 may comprise more than one layer of dielectric material. In this way, if one of the dielectric layers experiences electrical breakdown, the presence of additional layers may mitigate the probability of forming a conductive path between the top conductive coil and the bottom conductive coil. Such multiple dielectric layers may be made from the same material (e.g. , polyimide), or from different materials.[0049] Dielectric layer 504 may exhibit a raised portion 505 in the region that sits over dielectric ridge 540. Such a raised portion may exhibit a smoother profile compared to dielectric ridge 540. Top conductive coil 506 may be formed on dielectric layer 504, such that outermost portion 510 sits, at least partially, on the raised portion 505. In this way, outermost portion 510 may exhibit a non-planar bottom surface 562. Compared to a case in which the top conductive coil is formed on a planar surface, the peak electric field may be limited in this configuration.Furthermore, compared to the embodiments illustrated in FIGs. 1A-1C, weakening of the isolation strength of dielectric material 504 may be limited. Accordingly, in contrast to the embodiments of FIG. 2C, in which the dielectric layer 104 may be exposed to aphotolithographic process during the formation of dielectric ridge 140, when the outermost portion is formed using the configuration described in connection with FIG. 5, the dielectric layer 504 may not be exposed. When the dielectric layer is not exposed, its isolation strength may be preserved.[0050] Dielectric ridge 540 may be formed lithographically using a photomask. 
Accordingly, a photomask may be used in a lithographic process step to selectively illuminate a region to be removed (or to selectively illuminate a region not to be removed). In some embodiments,5853258.1 photomask 600, illustrated in FIG. 6A, may be used to form a dielectric ridge. The photomask 600 may comprise contour 602. Contour 602 may have one or more rings in some embodiments. When a lithographic process step is performed using photomask 600, regions corresponding to contour 602 may not be illuminated. As a result, the dielectric ridge may be formed in the regions defined by contour 602.[0051] Alternatively, a dielectric ridge may be formed lithographically using photomask 650, which is illustrated in FIG. 6B. Photomask 650 may comprise apertures 652. The apertures may have circular shapes in some embodiments. When a lithographic process step is performed using photomask 650, regions corresponding to aperture 652 may not be illuminated. As a result, the dielectric ridge may be formed in the regions outside apertures 652.[0052] A magnetic isolator of the types described herein may be deployed in various settings to galvanically isolate one portion of an electric circuit from another. One such setting is in industrial applications. In some embodiments, a magnetic isolator may isolate a motor driver from other portions of an electric system. The motor driver may operate at voltages equal to or greater than 600V in some embodiments, and may comprise an inverter to convert a DC signal to an AC signal. In some embodiments, the motor driver may comprise one or more insulated gate bipolar transistors (IGBT), and may drive an electric motor according to a three-phase configuration.[0053] Another such setting is in photovoltaic systems. In some embodiments, a magnetic isolator may be installed in a photovoltaic system to isolate a photovoltaic panel and/or an inverter from other parts of the system. In some embodiments, a magnetic isolator may be installed between a photovoltaic panel and an inverter.[0054] Another such setting is in electric vehicles. In some embodiments, a magnetic isolator of the type described herein may be used to isolate any suitable part of an electric vehicle, such as a battery or a motor driver, from other parts of the vehicle.[0055] FIG. 7 is a block diagram illustrating an example of a system comprising a magnetic isolator of the type described here. System 700 may comprise magnetic isolator 702, low- voltage device 704, and high-voltage device 706. In some embodiments, low-voltage device 704 may comprise a conductive part. In some embodiments, low-voltage device 704 may comprise a device operating at less than 500V. In some embodiments, high-voltage device 706 may comprise a device operating at 500V or higher.[0056] Magnetic isolator 702 may be implemented using magnetic isolator 100, 180, 200 or 500, and may be disposed between the low-voltage device and the high-voltage device. By isolating the two devices from one another, a user may be able to physically contact the low-5853258.1 voltage device without being electrically shocked or harmed. Low-voltage device 704 may comprise a user interface unit, such as a computer or other types of terminals, and/or a communication interface, such as a cable, an antenna or an electronic transceiver. High-voltage device 706 may comprise a motor driver, an inverter, a battery, a photovoltaic panel, or any other suitable device operating at 500V or higher. 
In the embodiments in which high- voltage device 706 comprises a motor driver, high- voltage device 706 may be connected to an electric motor 708.[0057] Aspects of the present application may provide one or more benefits, some of which have been previously described. Now described are some non-limiting examples of such benefits. It should be appreciated that not all aspects and embodiments necessarily provide all of the benefits now described. Further, it should be appreciated that aspects of the present application may provide additional benefits to those now described.[0058] Aspects of the present application provide a magnetic isolator capable of withstanding voltages exceeding 600V while limiting the probability of electric breakdown. As a result of such a reduction in the probability of electric breakdown, the lifetime of the magnetic isolator may be extended.[0059] Also, as described, some aspects may be embodied as one or more methods. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.[0060] The terms "approximately" and "about" may be used to mean within +20% of a target value in some embodiments, within +10% of a target value in some embodiments, within +5% of a target value in some embodiments, and yet within +2% of a target value in some embodiments. The terms "approximately" and "about" may include the target value.5853258.1 |
A method for coupling electrical contacts is provided that includes forming, over a contact of die (20), a bonded ball (30) that includes a neck (50), a collar (54), a shoulder (58) and a base (60). The shoulder has a downwardly and outwardly sloping shoulder (58) that makes an angle (58A) of 105-130 degrees with an upwardly extending vertical line (35). The neck (50) and shoulder (58) overlie substantially all of the area of the base (60) that makes contact with the die (20). |
CLAIMS 1. A method for coupling electrical contacts, comprising: providing a capillary having an inner channel and a tip, the inner channel disposed along the length of the capillary and extending toward the tip, a portion of the inner channel flaring outwardly to form an inner chamfer that defines an aperture having an inner chamfer diameter at the tip of the capillary, wherein the inner chamfer has an inner chamfer face, and a first portion of the inner chamfer face and a second portion of the inner chamfer face positioned opposite from the first portion together define an inner chamfer angle in a range between100-150 degrees; providing a wire in the inner channel, the wire having a wire tip that is accessible through the aperture; forming a free air ball using the wire tip, the free air ball having a diameter between 90 percent and 110 percent of the inner chamfer diameter; positioning the free air ball over an electrical contact of a die; after positioning the free air ball, applying, through the inner chamfer face, a bond force of 20 gram- force or less on the free air ball for a time period of ten milliseconds or less; and after the time period, lifting the capillary from the electrical contact, leaving on the electrical contact a bonded ball including a neck, a downwardly sloping shoulder extending from the neck at an angle between 105 and 130 degrees from an imaginary vertical axis, and a base, the neck and the shoulder overlying all of an area of the base that makes contact with the die.2. The method of Claim 1, wherein the area of the bonded ball that makes contact with the die does not have a physical dimension that exceeds the inner chamfer diameter.3. The method of Claim 1, wherein the bonded ball does not include a flange that extends from the base. 4. The method of Claim 1, wherein the wire comprises a wire diameter equal to or less than 25 micrometers .5. A method for coupling electrical contacts, comprising; providing an electronic component having a contact that is to be coupled to another contact; forming, over the contact, a bonded ball having a downwardly sloping shoulder that extends from a point and ends at an edge of the shoulder, the downwardly sloping shoulder having an angle between 105-130 degrees from a first imaginary vertical line that intersects the point but being absent a structure that makes contact with the electronic component and also extends in an outward direction from a second imaginary vertical line intersecting the edge; and coupling the bonded ball to the another contact using a wire. 6. The method of Claim 5, wherein the wire comprises a diameter equal to or less than 25 micrometers .7. The method of Claim 5, wherein the wire has a diameter less than 25 micrometers, and wherein forming a bonded ball comprises: providing the wire in a capillary having an inner chamfer diameter; and forming the bonded ball from a free air ball having a diameter that is different from the inner chamfer diameter by two micrometers or less. 8. The method of Claim 5, wherein forming a bonded ball comprises: providing the wire in a capillary having an inner chamfer diameter; and forming the bonded ball from a free air ball having a diameter that is 92-108 percent of the inner chamfer diameter.9. The method of Claim 5, wherein forming a bonded ball comprises forming the bonded ball using a bond force equal to or less than 20 gram-force, the bond force applied only to the shoulder.10. 
The method of Claim 5, wherein forming a bonded ball comprises forming the bonded ball by applying a bond force equal to or less than 20 gram-force for ten milliseconds or less.11. The method of Claim 5, wherein the wire is dispensed from a capillary having an inner chamfer diameter, and wherein forming a bonded ball comprises forming a free air ball at a tip of the wire, the free air ball having a diameter that is different from the inner chamfer diameter by two micrometers or less.12. The method of Claim 5, wherein the bonded ball comprises a neck, a collar, and a base, the neck connected to the collar, the collar connected to the shoulder, and the shoulder connected to the base, the collar having a first height that is shorter than a second height of the base.13. The method of Claim 5, wherein forming a bonded ball comprises forming the bonded ball using a capillary having a tip that defines an aperture, the aperture having an inner chamfer diameter, and wherein the bonded ball does not have a flange that extends outside of an imaginary vertical cylinder that is parallel to a center axis of the capillary and intersects with the boundary of the aperture .14. A system, comprising; an electronic device having a housing and a plurality of electronic components positioned in the housing, at least one of the electronic components having a contact that is coupled to another contact of another one of the electronic components; and a bonded ball coupling the contact and the another contact by a wire strand, the bonded ball having a downwardly sloping shoulder that extends from a point and ends at an edge of the shoulder, the downwardly sloping shoulder having an angle between 105-130 degrees from a first imaginary vertical line that intersects the point but being absent a structure that makes contact with the electronic component and also extends in an outward direction from a second imaginary vertical line intersecting the edge.15. The system of Claim 14, wherein the angle is between 105-115 degrees. 16. The system of Claim 14, wherein the wire strand is formed from gold and has a diameter less than 25 micrometers .17. The system of Claim 14, wherein the electronic device is a computer.18. The system of Claim 14, wherein the angle is between 125-130 degrees. 19. The system of Claim 14, wherein the bonded ball comprises a neck, a collar, and a base, the neck connected to the collar, the collar connected to the shoulder, and the shoulder connected to the base, the collar having a first height that is shorter than a second height of the base.20. The system of Claim 14, wherein the at least one of the electronic components is a die, and the another one of the electronic components is a substrate. |
METHOD AND SYSTEM FOR IMPROVED WIRE BONDING This invention relates generally to electronics and more particularly to a method and system for improved wire bonding. BACKGROUND OF THE INVENTION Miniaturization of integrated circuit (IC) chips is a challenge faced by most chip manufacturers. This trend towards miniaturization in turn pushes the limits of numerous high density packaging processes. An example of such a process is the wire bond process. The wire bond process, or "wire bonding," refers to a process of connecting electronic components and conducting tracks using a piece of wire. For example, a die may be coupled to a substrate by forming a bonded ball at each contact of the die and then looping the wire from the bonded ball to a corresponding external contact of the substrate. As the size of the contacts of the die and the substrate is reduced due to miniaturization, the size of the bonded ball may also need to be reduced. Further, wire bonding process may result in peeling and other types of pad damage because of the relatively delicate structure of miniaturized components. SUMMARY OF THE INVENTION According to one embodiment of the invention, a method for coupling electrical contacts is provided. The method includes providing an electronic component having a contact that is to be coupled to another contact. The method also includes forming, over the contact, a bonded ball having a downwardly sloping shoulder that extends from a point and ends at an edge of the shoulder. The downwardly sloping shoulder has an angle between 105-130 degrees from a first imaginary vertical line that intersects the point. The downwardly sloping shoulder does not have a structure that makes contact with the electronic component and also extends in an outward direction from a second imaginary vertical line intersecting the edge of the shoulder. The method also includes coupling the bonded ball to the another contact using a wire. Some embodiments of the invention provide numerous technical advantages. Other embodiment may realize some, none, or all of these advantages. For example, according to one embodiment, the probability of pad damage associated with wire bonding is reduced by forming a bonded ball that has a relatively flat shoulder and no flange. In another embodiment, the footprint of the bonded ball is reduced by eliminating the flange that may extend from the shoulder of the bonded ball. Other advantages may be readily ascertainable by those skilled in the art. BRIEF DESCRIPTION OF THE DRAWINGS Reference is now made to the following description taken in conjunction with the accompanying drawings, wherein like reference numbers represent like parts, in which: FIGURE 1 is a schematic diagram illustrating a wire bonding system that may benefit from the teachings of the present invention; FIGURE 2A is a diagram of a cross-sectional view of one embodiment of a bonded ball shown in FIGURE 1; FIGURES 2B and 2C are diagrams each showing a cross sectional view of one embodiment of a shoulder of the bonded ball shown in FIGURE 2A; FIGURE 2D is a diagram of a cross-sectional view of one embodiment of a capillary that may be used to form the bonded ball shown in FIGURE 2A; and FIGURE 3 is a flow chart illustrating one embodiment of a method of wire bonding. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Embodiments of the invention are described with reference to FIGURES 1 through 3 of the drawings . 
FIGURE 1 is a schematic diagram illustrating one embodiment of a wire bonding system 10 that benefits from the teachings of the present invention. System 10 includes a substrate 14 having a plurality of contacts 18, a die 20 having a plurality of contacts 24, and a capillary 34 attached to an arm 38. A particular contact 24 of die 20 is required to be electrically coupled to a particular contact 18 of substrate 14. Contacts 24 are coupled to their respectively corresponding contacts 18 using wire strands 28 and corresponding bonded balls 30. As shown in FIGURE 1, in one embodiment, bonded ball 30 has a generally circular footprint on die 20. Capillary 34 and arm 38 are coupled to a control system (not explicitly shown in FIGURE 1) . The control system is operable to manipulate arm 38 and capillary 34 to couple contacts 24 and 18 by dispensing wire strands 28. This process is referred to as wire bonding. An example of a wire bonding process is described as follows. A strand of wire is provided in capillary 34. A tip of the strand of wire is accessible through the tip of capillary 34. An electronic flame off (EFO) firing is used to form a free air ball (FAB) 32 at the tip of the wire. Capillary 34 is lowered to position FAB 32 on a particular contact 24. Then an initial ball deformation is made by applying a suitable level of bond force to form a bonded ball, such as bonded ball 30. After bonded ball 30 is formed on contact 24, capillary 34 is raised and the looping, of the wire takes place as capillary 34 travels from the position of bonded ball 30 to a particular contact 18 to which the contact 24 is to be coupled. Once capillary 34 reaches the particular contact 18, a stitch is formed at the contact 18 by deforming the wire against the contact 18 to make a wedge-shaped impression. Then this wire bonding process is repeated for other contacts 24 and 18. With the trend towards miniaturization of IC chips, the size of the contacts 18 of die 24 and substrate 14 is reduced. Thus, the size of a bonded ball may also need to be reduced. Further, the probability of damage to the pad, which is an area surrounding contact 18, may increase during the wire bonding process because smaller components tend to be more delicate. One example of damage to the pad is peeling, which refers to the removal of a layer of the pad and/or contact 18 as capillary 34 is lifted to form another wire connection. According to one embodiment of the invention, a method and system for an improved wire bonding process is provided by forming bonded ball 30 that has a downwardly- sloping shoulder at an angle of approximately 105 to 130 degrees from an imaginary vertical line and a footprint that does not extend outside of an aperture having the inner chamfer diameter of a capillary. This is advantageous in some embodiments because the probability of damage to the pad surrounding a contact is reduced. In one embodiment, this is advantageous also because the footprint of the bonded ball is reduced, which facilitates the miniaturization of electronic components. Additional details of the example embodiments of the invention are described below in greater detail in conjunction with FIGURES 2A-3. FIGURE 2A is a diagram of a side cross-sectional view of one embodiment of bonded ball 30 shown in FIGURE 1. FIGURES 2B and 2C are diagrams each showing a side cross-sectional view of a shoulder 58 of bonded ball 30 shown in FIGURE 2A. FIGURES 2A-2C are described jointly. 
Referring to FIGURE 2A, bonded ball 30 includes a neck 50, a collar 54, a base 60, and shoulder 58. Collar 54 has a height 54H, a base 60 has a diameter 60D and a side 60S, and shoulder 58 has a width 58W, an angle 58A, and an edge 58E. Neck 50 is coupled to collar 54, collar 54 is coupled to shoulder 58, and shoulder 58 is coupled to base 60. In one embodiment, as shown in FIGURE 1, bonded ball 30 has a generally circular footprint. As shown in FIGURE 2B, shoulder 58 begins at a point 58P and ends at an edge 58E of shoulder 58. Point 58P is located where collar 54 ends. In one embodiment where bonded ball 30 does not have collar 54, point 58P is located where neck 50 ends. Angle 58A is defined by shoulder 58 and an imaginary vertical line 33 that intersects with point 58P. Referring back to FIGURE 2A, in one embodiment, angle 58A of shoulder 58 (also referred to as shoulder angle 58) is in a range of 105 to 130 degrees, and bonded ball 30 does not have a flange 61. As shown in FIGURE 2A, the absence of flange 61 is indicated by the use of phantom lines to outline flange 61. Flange 61 is described using FIGURE 2B. Referring to FIGURE 2B, a "flange" refers to any structure that makes contact with die 20 and also extends outwardly from an imaginary vertical line 35 that intersects with edge 58E of shoulder 58. For example, as shown in FIGURE 2A, flange 61 would extend outwardly from an imaginary vertical line intersecting edge 58E and also make contact with die 20. Because flange 61 is absent, in one embodiment, side 60S of base 60 is approximately flat and does not cross imaginary vertical line 35, as shown in FIGURE 2B. In one embodiment, side 60S may extend outwardly from imaginary vertical line 35 but does not make contact with die 20, as shown in FIGURE 2C. Such a structure shown in FIGURE 2C is not considered to be a flange. Referring back to FIGURE 2A, bonded ball 30 having shoulder 58 at angle 58A of 105-130 degrees but does not have flange 61 is advantageous in some embodiments because such a bonded ball has a smaller footprint on die 20, which promotes the miniaturization of electronic components and lowers the probability of pad damage. In more specific embodiments, shoulder angle 58A is in a range of 105 to 115 degrees, or between 125 to 130 degrees. Forming a bonded ball using any of the above- described ranges of shoulder angle 58A, in conjunction with the elimination of a flange that extends outwardly from imaginary vertical line 35, results in a formation of a "short" shoulder 58 that reduces the probability of damage to pad area surrounding a contact, such as contact 24 shown in FIGURE 1. In one embodiment, a range of 105- 115 degrees for shoulder angle 58A is particularly advantageous because the range is associated with a reduced shoulder width 58W, which further reduces the probability of pad damage. In one embodiment, shoulder 58 is the only portion of bonded ball 30 that is operable to receive bond force and/or ultrasonic vibration from capillary 34 during bonded ball formation. In one embodiment, height 54H of collar 54 is shorter than height 60H of base 60. (Height 60H is also referred to as bonded ball thickness 60H) . This is advantageous in some embodiments because having collar height 54H that is shorter than bonded ball thickness 60H reduces the probability of pad damage. FIGURE 2D is a diagram of a side cross-sectional view of a tip region 34T of capillary 34 that may be used to form some embodiments of bonded ball 30 shown in FIGURE 2A. 
As shown in FIGURE 2A, capillary 34 is approximately cylindrical and has varying diameters along the length. As shown in FIGURE 2D, tip region 34T of capillary 34 includes a tip 74, an inner channel 78 (also referred to as a hold 78) that extends along the length of capillary 34, a chamfer 80 that extends outwardly from the end of hole 78 to tip 74, and an aperture 70 defined at tip 74 by chamfer 80. Tip 74 has a tip diameter 74D. Hole 78 has a hole diameter 78D, and chamfer 80 has a chamfer face width 80W. Aperture 70 has a diameter 70D. Diameter 70D is also referred to as an inner chamfer diameter 70D. As shown in FIGURE 2D, inner chamfer diameter 70D is greater than hole diameter 78D because chamfer 80 extends outwardly from hole 78. Chamfer 80 extends outwardly from hole 78 at an inner chamfer angle 84. Inner chamfer angle 84 is measured between opposing portions of chamfer 80, as shown using phantom lines in FIGURE 2D. In one embodiment, inner chamfer angle 84 is in a range of 100 to 150 degrees, with particularly suitable ranges being between 100 to 110 degrees and 130 to 150 degrees. The above-identified ranges are advantageous in some embodiments because a bonded ball resulting from using a capillary having such inner chamfer angles will have a "short" shoulder that reduces the possibility of pad damage. In one embodiment, chamfer diameter 70D may be 42 micrometers or less, and hole diameter 78D may be 32 micrometers or less. Referring to FIGURES 2A and 2D, in one embodiment, angle 58A of shoulder 58 is in a range of 105 to 130 degrees (which results from forming shoulder 58 with capillary 34 having inner chamfer angle 84 in the range of 100-150 degrees) , and diameter 60D of base 60 is equal to or less than chamfer diameter 70D of capillary 34 used to form bonded ball 30. In another embodiment where side 60S of base 60 is not flat, as shown in FIGURE 2C, diameter 60D of base 60 measured at the surface that makes physical contact with die 20 is equal to or less than chamfer diameter 70D. In some embodiments, one skilled in the art may form a bonded ball having a shoulder at an angle between 105- 130 degrees and no flange, such as bonded ball 30 shown in FIGURE 2A, by providing a wire having a suitable diameter in a capillary, such as capillary 34, forming a free air ball having a suitable diameter, and applying a suitable level and type of bond force for a suitable length of time. The type/diameter of the wire, the diameter of the free air ball, the level of bond force, and the bond force time period may be optimized based on particular design specifications by one skilled in- the art to form different embodiments of a bonded ball of the present invention. One example process of forming one embodiment of bonded ball 30 is described below in conjunction with FIGURE 3. FIGURE 3 is a flow chart illustrating one embodiment of a method 100 for improved wire bonding. One embodiment of method 100 is described using capillary 34 shown in FIGURE 2D and bonded ball 30 shown in FIGURE 2A. However, any suitable device or a combination of devices may be used to implement method 100. Method 100 starts at step 104. At step 108, a capillary having a chamfer'angle of approximately 100 to 150 degrees is provided. An example of the capillary of step 108 is capillary 34; however, any suitable device may be used. At step 110, wire is provided in capillary 34. The wire may be formed from gold, aluminum, copper, or any other suitable material. 
In one embodiment, the diameter of the wire is less than 25 micrometers. However, other wire diameters may be used depending on the particular requirements for the wire bonding process and hole diameter 78D of capillary 34. At step 114, a free air ball is formed at the tip of the wire provided at step 110. Example diameters of the free air ball include, but are not limited to, 85-115 percent, 90-110 percent, and 92-108 percent of chamfer diameter 70D (shown in FIGURE 2D) . In some embodiments, the diameter of the free air ball is different from chamfer diameter 70 by 2 micrometers or less. Any suitable diameter for forming a bonded ball having no flange may be used as a free air ball diameter of step 114. At step 118, bond force is applied on free air ball for a predetermined period of time. In one embodiment, a bond force of 20 gram-force is applied for 10 milliseconds or less; however, any suitable level of bond force may be applied for any suitable length of time, depending on the design specifications of bonded ball 30. In conjunction with the applied bond force, ultrasonic vibration may also be provided to enhance the formation of bonded ball 30. In one embodiment, bond force and/or ultrasonic vibration is applied only through chamfer face 80. At step 120, capillary is lifted for the looping of the wire. Method 100 stops at step 124. Although some embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations can be made hereto. |
A wireless programmable logic device (102) contains a wireless component (110) and a programmable logic component (106). A remote wireless host (132) can be used to program the programmable logic device (102). Some product designs require multiple programmable logic devices. When wireless programmable logic devices are used in the design, all of them can receive data and commands from the host. As a result, the wireless host can control the order of configuration and the start time of these logic devices. There is no need to build glue logic for this purpose. Consequently, the efficiency in product design is improved. If there are problems in programming a programmable logic device, the host can log the failed operation in its memory. This information could be used to improve production flow. |
CLAIMS 1. An integrated circuit communicating digital data with a remote host (132), said integrated circuit comprising: a wireless transceiver (110) for receiving said digital data from said remote host; a base band unit (108) connected to said wireless transceiver (110) to perform data processing operations on said digital data; and a programmable logic component (106) connected to said base band unit (108) using said digital data to configure said programmable logic component.2. The integrated circuit of claim 1 wherein said integrated circuit is a FPGA and said digital data is configuration bitstream data.3. The integrated circuit of claim 1 wherein wireless transceiver (110) is a radio frequency transceiver, and said wireless transceiver (110) and said base band unit (108) conform to Bluetooth protocol.4. The integrated circuit of claim 1 wherein said base band unit (108) and said wireless transceiver (110) further transmit a reply to said remote host.5. A system comprising: a remote host (132) comprising a wireless circuit for communicating digital data; and at least two programmable logic devices (139,140) connected in a circuit, each of said at least two programmable logic devices comprising: a wireless transceiver (110) for receiving said digital data from said remote host (132); a base band unit (108) connected to said wireless transceiver (110) to perform data processing operations on said digital data; and a programmable logic component (106) connected to said base band unit (108) using said digital data to configure said programmable logic component.6. The system of claim 5 wherein said at least two programmable logic devices (139,140) are FPGAs and said digital data is configuration bitstream data.7. The system of claim 5 wherein at least one of said at least two programmable logic devices (139,140) transmits a reply to said remote host.8. The system of claim 5 wherein said digital data is generated by an external source, and wherein said wireless circuit further comprises: an interface (154) for receiving said digital data from said external source; a processor (152) for processing said digital data; and a transceiver (162) for transmitting said digital data to said at least two programmable logic devices.9. The system of claim 5 wherein said at least two programmable logic devices (139,140) have different start times, and wherein said host (132) transmits start commands to said at least two programmable logic devices at different times.10. The system of claim 5 further comprising at least one slave programmable logic device, wherein at least one of said at least two programmable logic devices is a master device delivering said digital data to said at least one slave programmable logic device.11. A method for wireless communication between a remote host (132) and a programmable logic device, comprising the steps of: receiving, by a target programmable logic device (139), a query transmitted by said host (132); receiving, by said target programmable logic device (139), a set of digital data transmitted by said host; and configuring said target programmable logic device (139) using at least a portion of said set of digital data.12. The method of claim 11 further comprising the steps of: receiving, by said host (132), a signal from said target programmable logic device (139) indicating a status of said configuring step; and logging said status by said host (132).13. 
The method of claim 11 further comprising a step of sending, by said programmable logic device (139) to said host (132), a request for reconfiguration. |
WIRELESS PROGRAMMABLE LOGIC DEVICESFIELD OF THE INVENTIONThis invention relates to programmable logic devices, and more specification to programmable logic devices that can interface with a remote host using wireless communication.BACKGROUND OF THE INVENTIONProgrammable logic devices exist as a well-known type of integrated circuit (IC) that may be programmed by a user to perform specified logic functions. There are different types of programmable logic devices, such as programmable logic arrays (PLAs) and complex programmable logic devices (CPLDs). One type of programmable logic devices, called the field programmable gate array (FPGA), is very popular because of a superior combination of capacity, flexibility and cost. A FPGA typically includes an array of configurable logic blocks (CLBs) surrounded by a ring of programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable interconnect structure. The CLBs, IOBs, and interconnect structure are typically programmed by loading a stream of configuration data (bitstream) into internal configuration memory cells that define how the CLBs, IOBs, and interconnect structure are configured. The configuration bitstream may be read from an external memory (e. g., an external PROM). The collective states of the individual memory cells then determine the function of the FPGA. Due to advances in semiconductor processing technology, more and more transistors can be fabricated onto the same area in an IC. This leads to more functionality. As a result, pin counts of the devices need to be increased to support the functionality. Recently, some of the FPGAs have around one thousand pins. Because these FPGAs can be programmed to perform many functions, they are used in more and more product designs.In some complex product designs, more than one FPGA is used in a product. Some of these FPGAs need to start operation at different times after configuration. In the past, engineers have to design glue logic to handle the configuration and start time of these FPGAs. In many cases, this glue logic takes up valuable real estate on a circuit board. In addition, the glue logic is typically custom designed for each product. Consequently, it is a time consuming and inefficient process. The large number of pins on a FPGA also means that the circuit board is more congested because many of the pins are connected to other ICs. Thus, it is increasing difficult to find space on a circuit board to place the above-mentioned glue logic. Therefore, it is desirable to reduce unnecessary circuits on a circuit board. It is also desirable to improve efficiency in using FPGAs.SUMMARY OF THE INVENTIONThe programmable logic device of the present invention is a single IC that contains a wireless component connected to a conventional programmable logic component. The wireless component can receive and process wireless data from a remote wireless host. The data is delivered to the programmable logic component for programming the same. One advantage of this invention is that the programming data is stored remotely and all the programming circuitry is located on the IC. Thus, minimum real estate on a circuit board is used for programming purpose. Some product designs require multiple programmable logic devices. When wireless programmable logic devices are used, all of them can receive data and commands from a remote wireless host. As a result, the wireless host can control the order of configuration and the start time of these logic devices. 
There is no need to build glue logic for this purpose. Consequently, the efficiency in product design is improved. If there are problems in programming a programmable logic device, the host can log the failed operation in its memory. The logged information may include the identification of the programmable logic device, the time of communication, etc. This information could be used to improve production flow. The above summary of the present invention is not intended to describe each disclosed embodiment of the present invention. The figures and detailed description that follow provide additional example embodiments and aspects of the present invention.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example, and not by way of limitation, in the detailed description and the following figures, in which like reference numerals refer to similar elements. FIG. 1 is a block diagram showing a wireless programmable logic device of the present invention. FIG. 2 is a block diagram of a wireless configuration system of the present invention. FIG. 3 is a block diagram of a configuration host of the present invention. FIG. 4 is a flow chart of a configuration process of the present invention. FIGS. 5A and 5B shows the steps of configuring multiple wireless FPGAs of the present invention. FIG. 6 shows a combination of conventional and wireless FPGAs of the present invention.DETAILED DESCRIPTION OF THE INVENTIONThe present invention relates to wireless communication with programmable logic devices. In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known features have not been described in detail in order to avoid obscuring the present invention. Fig. 1 is a block diagram showing a wireless programmable logic device 102 of the present invention connected to an antenna 104. Wireless programmable logic device 102 contains a programmable logic device die 106, a base band unit 108, a radio frequency (RF) transceiver 110, and an optional power amplifier 112. Programmable logic device die 106 could be a FPGA, PLA, CPLD, or PPROM die.Base band unit 108 and transceiver 110 may be fabricated into one RF die 114. In one embodiment, dies 106 and 114 and power amplifier 112 are combined in a multi-chip module (MCM). In another embodiment, CMOS process is used.Currently, both the programmable logic device die and base band unit 108 can be implemented using CMOS process.Recently, there are tremendous advances in implementing RF circuit using CMOS process. For example, a new IC built on 0.18 pm CMOS process, called the TC2000 and is marketed by Zeevo Inc., contains the radio, base band unit and interfaces. In this embodiment of wireless programmable logic devices, CMOS process is used to integrate as many functional blocks as possible into a single IC. It should be noted that the word"wireless"is not limited to RF. It includes optical, audio and other means of communication without the use of wired connection. Base band unit 108 performs data processing of wireless data sent and received by wireless programmable logic device 102. Examples of some of the operations performed by base band unit 108 are: error correction, data communication link control, digital offset cancellation and symbol synchronization, encryption, data buffering, etc. 
RF transceiver 110 preferably contains a voltage-controlled oscillator, a low noise amplifier, a modulator, a demodulator, filters, etc. Antenna 104 may be fabricated on the MCM package itself. Alternatively, it may be externally provided (e. g., in the form of a metal strip on a circuit board)The present invention can be used with different wireless communication protocols. An exemplary protocol isBluetooth. This protocol uses spread spectrum frequency hopping signals in the unlicensed 2.4 GHz ISM (Industrial,Science and Medical) band. The current specification defines a range of around 100 meters supporting data rate of up to 720 kb/s per channel. Other wireless communication protocols may provide for longer ranges and/or higher data rate. If wireless programmable logic device 102 is a FPGA, it needs to be configured by a configuration bitstream after power is turned on. In a conventional system, an external nonvolatile memory (not shown), such as a PROM (programmable read-only memory), is used to store the bitstream. The stored bitstream is transmitted to a configuration memory in the FPGA via dedicated pins on theFPGA. In one embodiment, this bitstream can be transmitted to a configuration memory 116 of device 102 using wireless means. As a result, there is no need to have dedicated pins for configuration. Further, there is no need to place an external nonvolatile memory on the circuit board. As a result, real estate on the circuit board can be better utilized. FIG. 2 shows a wireless based configuration system 130 of the present invention. It contains a configuration host 132 and a circuit board 136 having a plurality of ICs, such as ICs 139-143. Some of the ICs may be programmable logic devices, such as FPGAs 142 and 143. Host 132 contains memory (not shown) that stores the configuration bitstreams of FPGAs 142 and 143. The bitstreams are delivered to FPGAs 142 and 143 via an antenna 134. FIG. 3 is a block diagram of one embodiment of a configuration host 150 of the present invention. It comprises a processor 152 that controls its operation. Host 150 contains a configuration data input interface 154 that receives configuration bitstream from an external source (not shown). Processor 152 stores the bitstream in a memory 156. Whenever there is a need to configure a FPGA, processor 152 retrieves the bitstream from memory 156 and delivers the data to a serial interface 160. The serialized data is deliver to antenna 134 by a transceiver 162. An optional amplifier may be inserted between transceiver 162 and antenna 134. Memory 156 is preferably, but not necessarily, nonvolatile. In another embodiment, host 150 can be designed as a self-contained state machine. The interaction between host 132 and a single FPGA is now described. FIG. 4 shows a flow chart 170 of the interaction. In step 172, host 132 sends a query to search for a recognizable FPGA. This query is preferably a digital pattern encoded on an electromagnetic wave of a predetermined frequency and duration. An FPGA responds to the query by sending its identification to host 132. In step 174, host 132 determines whether the responding FPGA is a target FPGA. If no target is found, host 132 continues to search for a recognizable FPGA. If a target is found, host 132 performs two types of operations at the same time: (1) sending out configuration bitstream data and (2) determining whether the target FPGA is working properly. In step 176, host 132 determines whether the FPGA can continue to accept configuration data. 
In one embodiment, the FPGA sends a predetermined signal to host 132 if it cannot accept configuration data. If no such signal is received, host 132 assumes that it can continue to send configuration signal. If such a signal is received, host 132 sends a command to reset the target FPGA (step 178). In step 180, host 132 logs this failed operation. The information may be stored in nonvolatile memory 156 for later retrieval by a user who needs to know the status of the configuration.Additional information related to the failure (e. g., the time of failure) may also be logged. Flow chart 170 then stops (step 182). As mentioned above, host 132 sends out configuration data unless requested not to do so. In step 186, host 154 determines whether all configuration data stored in nonvolatile memory 156 has been sent. If not all the data has been sent, host 132 continues to send the data (step 188). If all the data has been sent, host 132 sends a command to configure the target FPGA (step 189). Host 132 waits for the FPGA to complete the configuration (step 190). If configuration is successful, host 132 logs a successful configuration operation in its nonvolatile memory 156 (step 192). Host 132 then sends a start command to the target FPGA to start normal operation (step 194).Flow chart 170 then ends (step 182). If configuration fails, host 132 logs a failed operation (step 202). It then sends a command to reset the target FPGA (step 204). The flow chart then terminates (step 182). It can be seen from the above that the FPGA does not need to have wired contact with a nonvolatile memory on the same circuit board. Further, it is possible to log more information using the system of the present invention. The information could be used to improve product manufacturing. The present invention can be extended to configure multiple programmable logic devices on the same circuit board. FIGS. 5A and 5B, combined, is a flow chart 230 showing the interaction between host 132 and two or moreFPGAs. In step 232, host 132 sends query to the FPGAs. In step 234, each FPGA delivers its ID to host 132. In step 236, host 132 compares the received ID with a list previously stored in its memory. If IDs match, flow chart 230 proceeds to the steps shown in FIG. 5B (delivering bitstream and configure the FPGAs). If there is no match, host 132 determines whether it needs to configure another set of FPGAs (step 238). If there is no need to do so, flow chart 230 terminates. If there is a need to do so, flow chart 230 branches back to step 232. In one embodiment, the ID could be used to uniquely identify a single programmable logic device. In this case, the ID serves to ensure that only the correct device is configured. In another embodiment, the ID could be a generic identification of a type of devices. One example of an ID is the IDCODE used in the so-called Boundary ScanDescription Language. This is a unique identification encoded in every FPGA of certain vendors, and is used to identify family members of products. An example of anIDCODE is shown below:Bits Description0 either 1 or 01-11 manufacturer ID12-27 part number28-31 revision This type of ID is preferably used in production situation when the same host is used to program a large number of identical circuit boards. The ID can be used to identify the different FPGAs on the circuit boards. 
After host 132 determines that the correct FPGAs are present, it performs the following operations at the same time: (1) sending out configuration data to each FPGA and (2) determining whether the target FPGAs are working properly. Turning now to FIG. 5B, host 132 determines whether the FPGAs can continue to accept configuration data (step 244). In one embodiment of the present invention, aFPGA sends a predetermined signal to host 132 if it cannot accept configuration data. If no such signal is received, host 132 assumes that it can continue to send configuration data. If such a signal is received, host sends a reset command to that particular FPGA (step 246). In step 248, host 132 logs this failed operation. The ID of the FPGA is preferably logged so that a user can identify the failedFPGA. Other information may also be logged. Flow chart 230 then terminates (step 250). Host 132 also monitors the bitstream to determine whether all the data for the current FPGA has been sent (step 252). If not all the data has been sent, host 132 continues to send data (step 254). If all the data has been sent, host 132 transmits a configuration command to the current FPGA (step 256). Host 132 waits for a reply from the FPGA to determine if there is a successful configuration (step 258). If configuration is successful, host 132 determines whether this FPGA should be started at this time or need to wait until another FPGA completes configuration (step 260). If configuration is not successful, host 132 sends a command to the FGPA requesting it to stop configuration (step 262). Host 132 then logs the failed operation (step 264). Flow chart 230 stops.Host 132 continues to check if all the data for all theFPGAs has been sent (step 270). If some of the data has yet to be sent, and the remaining FPGAs continue to indicate they would accept data, host 132 sends data to the appropriate FPGA (step 272). If all the data has been sent, host 132 determines whether all the'FPGAs indicate that configuration has been completed (step 274). If configuration has been completed, host 132 sends start commands to the FPGAs (step 276). In the case where different FPGAs need to start at different times, host 132 sends commands at appropriate times. At step 278, host 132 logs a successful operation. Flow chart 230 then terminates. If one or more FPGAs indicate problems in configuration, host 132 sends a command to stop configuration (step 262). Host 132 then logs the failed operation (step 264). The above-described invention may be modified to include a combination of wireless and regular FPGAs on a single circuit board. FIG. 6 shows such a combination 300.It contains a wireless FPGA 302 that functions as a master.A plurality of FPGAs, such as 304 and 306, are connected to wireless FPGA 302. Wireless FPGA 302 receives configuration data in the same way shown in FIG. 4. The configuration data is passed to the slave FPGAs 304 and 306. As a result, a single wireless FPGA can be used to configure a plurality of FPGAs. In a further embodiment, a target can send a request to a host to load a different set of configuration data into the target. An example is a handheld unit used to handle several jobs. The handheld unit contains a programmable logic device. A user can key in a job number, press a button, and the unit sends the job number to a host. The host then sends new data to reconfigures the programmable logic device inside the unit. 
In another embodiment, the programmable logic device may erase the information therein if it is not in wireless contact with a host for more than a predetermined time. This embodiment is useful to protect confidential data in the programmable logic device. It can be seen from the above description that a novel wireless programmable logic device and methods for using the same have been disclosed. Those having skill in the relevant arts of the invention will now perceive various modifications and additions which may be made as a result of the disclosure herein. Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents. |
Embodiments disclosed herein include electronic packages and methods of assembling such electronic packages. In an embodiment, an electronic package comprises a first layer comprising glass. In an embodiment, conductive pillars are formed through the first layer, and a buildup layer stack is on the first layer. In an embodiment, conductive routing is provided through the buildup layer stack. In an embodiment, a second layer is over a surface of the buildup layer stack opposite from the glass layer. |
An electronic package, comprising:a first layer comprising glass;conductive pillars through the first layer;a buildup layer stack on the first layer, wherein conductive routing is through the buildup layer stack; anda second layer over a surface of the buildup layer stack opposite from the first layer.The electronic package of claim 1, wherein the conductive routing comprises at least one via.The electronic package of claim 2, wherein the via is a tapered via.The electronic package of claim 3, wherein the tapered via has a first end with a first width and a second end with a second width that is smaller than the first width, and wherein a distance between the second end and the first layer is smaller than a distance between the first end and the first layer.The electronic package of claim 1, 2, 3 or 4, further comprising:a plurality of solder balls, wherein individual ones of the plurality of solder balls are provided over corresponding ones of the conductive pillars.The electronic package of claim 1, 2, 3, 4 or 5, wherein the conductive pillars have non-vertical sidewalls.The electronic package of claim 6, wherein the conductive pillars have an hourglass shaped cross-section.The electronic package of claim 1, 2, 3, 4, 5, 6 or 7, wherein a pitch of the conductive pillars is approximately 25µm or smaller.The electronic package of claim 1, 2, 3, 4, 5, 6, 7 or 8, wherein the first layer has a thickness that is approximately 200µm or smaller.A method of forming an electronic package, comprising:forming openings through a glass layer;attaching the glass layer to a carrier;filling the openings with a conductive material to form conductive pillars;forming a buildup layer stack with conductive routing over the glass layer; andremoving the carrier.The method of claim 10, further comprising:forming a solder resist layer over the buildup layer stack prior to removing the carrier.The method of claim 10 or 11, wherein the conductive routing comprises a via with a taper, wherein a first end of the via closest to the glass layer is narrower than a second end of the via.The method of claim 10, 11 or 12, wherein the glass layer has a thickness of approximately 200µm or less, and wherein a pitch of the conductive pillars is approximately 25µm or smaller. |
TECHNICAL FIELDEmbodiments of the present disclosure relate to electronic packages, and more particularly to package substrates with hybrid bonding contacts or solder bonding contacts embedded in a glass interposer.BACKGROUNDThe demand for miniaturization of form factor and increased levels of integration for high performance are driving sophisticated packaging approaches in the semiconductor industry. Die partitioning enables miniaturization of small form factor and high performance without yield issues seen with other methods, but needs fine die to die interconnects. Embedded multi-die interconnect bridges (EMIB) enabled a lower cost and simpler 2.5D packaging approach for very high-density interconnects between heterogeneous dies on a single package. Instead of an expensive silicon interposer with through silicon vias (TSVs), a small silicon bridge chip is embedded in the package, enabling very high density die to die connections only where needed. Standard flip-chip assembly is used for robust power delivery and to connect high-speed signals directly from chip to the package substrate.However, EMIB approaches suffer from a high cumulative bump thickness variation (BTV). Additionally, current bump-to-bump true position is challenging due to the poor dimensional stability of the organic core. A variety of solutions have been proposed including incorporating an organic patch on a temporary, rigid, glass carrier or permanent glass interposer embedded into the core of the substrate to reduce the total thickness variation (TTV) and reduce true position error to enable fine bump pitch connections.BRIEF DESCRIPTION OF THE DRAWINGSFigure 1A is a cross-sectional illustration of an electronic package with hybrid bonding between a die and a package substrate with a glass layer on the package substrate side of the hybrid bond, in accordance with an embodiment.Figure IB is a cross-sectional illustration of an electronic package with a die coupled to a package substrate with a glass layer at the first level interconnect (FLI) location.Figure 1C is a cross-sectional illustration of an electronic package with hybrid bonding between a die and a package substrate with pillars through a glass layer that have non-vertical sidewalls, in accordance with an embodiment.Figure 2A is a cross-sectional illustration of a patterned glass layer over a carrier, in accordance with an embodiment.Figure 2B is a cross-sectional illustration of the glass layer after a conductive layer is disposed over the patterned glass layer and into the openings, in accordance with an embodiment.Figure 2C is a cross-sectional illustration of the glass layer after the conductive layer is planarized with a top surface of the glass layer to define conductive pillars in the glass layer, in accordance with an embodiment.Figure 2D is a cross-sectional illustration of the glass layer after pads are formed over the conductive pillars, in accordance with an embodiment.Figure 2E is a cross-sectional illustration of the glass layer after a buildup layer is formed over the glass layer, in accordance with an embodiment.Figure 2F is a cross-sectional illustration of the glass layer after additional buildup layers are formed over the glass layer, in accordance with an embodiment.Figure 2G is a cross-sectional illustration of the glass layer after a solder resist is disposed over the buildup layers, in accordance with an embodiment.Figure 2H is a cross-sectional illustration of the glass layer after the carrier is removed and the structure is flipped 
over, in accordance with an embodiment.Figure 2I is a cross-sectional illustration of the glass layer after being hybrid bonded to a die, in accordance with an embodiment.Figure 3A is a cross-sectional illustration of a package substrate with solder over a FLI layer that comprises a glass layer and conductive pillars, in accordance with an embodiment.Figure 3B is a cross-sectional illustration of the package substrate coupled to a die by the solder, in accordance with an embodiment.Figure 4 is a cross-sectional illustration of an electronic system that comprises a package substrate with an FLI layer that comprises a glass layer and conductive pillars that is hybrid bonded to a die, in accordance with an embodiment.Figure 5 is a schematic of a computing device built in accordance with an embodiment.EMBODIMENTS OF THE PRESENT DISCLOSUREDescribed herein are package substrates with hybrid bonding contacts embedded in a glass interposer, in accordance with various embodiments. In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention, however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.As noted above, embedded multi-die interconnect (EMIB) architectures have allowed for some high density interconnect architectures for heterogeneous die integration in electronic packages. However, EMIB architectures may no longer be adequate as devices continue to scale to smaller and more dense interconnects. Hybrid bonding architectures may allow for further reduction in interconnect pitch. Generally, hybrid bonding includes a bonding layer that comprises a conductive pad that is coplanar with a dielectric layer. The opposing device also has a similar bonding layer. The two devices (e.g., a package substrate and a die) are brought into contact with each other. At room temperature, the two dielectric layers begin to bond together. At elevated temperatures, the opposing pads undergo interdiffusion and permanently bond to each other. However, hybrid bonding has its own limitations as well. Particularly, tight control of the planarity between the pad and the dielectric layer are needed. As such, thickness variations attributable to organic packaging can make hybrid bonding difficult to implement.One approach to improve hybrid bonding effectiveness is to use a first level interconnect (FLI) first assembly process. In such embodiments, the FLI layer is formed before the organic buildup layers. The FLI layer may be formed on a carrier. 
The buildup layers (including conductive routing) may then be built up from the FLI layer. However, when the carrier is ultimately removed, warpage may occur that negatively impacts the hybrid bonding.Accordingly, embodiments disclosed herein include a hybrid bonding process that utilizes a reinforced hybrid bonding layer on the package substrate. Particularly, the hybrid bonding layer includes a glass layer with conductive pillars through the glass layer. The top surfaces of the conductive pillars are substantially coplanar with the top surface of the glass layer. The use of a glass layer provides mechanical support to the package substrate and mitigates warpage, even after the carrier is removed. As such, fine pitch interconnects can be made with FLI first hybrid bonding approaches.In an embodiment, the glass layer is patterned before being attached to a carrier. It has been shown that laser assisted etching processes can be used to form high aspect ratio holes through the glass layer. The ability to form high aspect ratio features allows for thicker glass layers to be used. Using thicker glass increases the mechanical reinforcement of the package and improves the planarity of the hybrid bonding layer. For example, small pitch features (e.g., pitches of approximately 25µm or smaller) can be formed in thick glass layers (e.g., with thicknesses up to approximately 200µm). As used herein, "approximately" refers to a range that is within 10% of the stated value. For example "approximately 200µm" may refer to a range between 180µm and 220µm.Referring now to Figures 1A-1C , cross-sectional illustrations of electronic packages 100 are shown, in accordance with various embodiments. In Figure 1A a hybrid bonding approach is used. In Figure 1B , the conductive pillars are coupled to the die by a solder. In Figure 1C , the conductive pillars are shown with an hourglass shaped cross-section.Referring now to Figure 1A , a cross-sectional illustration of an electronic package 100 is shown, in accordance with an embodiment. In an embodiment, the electronic package 100 comprises a first hybrid bonding layer 101. The first hybrid bonding layer 101 may be over buildup layers 110 of the package substrate. The first hybrid bonding layer 101 may comprise a glass layer 105. The glass layer 105 may be any suitable glass formulation. In an embodiment, the glass layer 105 has a thickness that is up to approximately 200µm thick. However, it is to be appreciated that the glass layer 105 may be even thicker in some embodiments.The first hybrid bonding layer 101 may further comprise conductive pillars 106. For example, the conductive pillars 106 may be copper. In an embodiment, the conductive pillars 106 may extend substantially through an entire thickness of the glass layer 105. That is, a top surface of the conductive pillars 106 may be substantially coplanar with a top surface of the glass layer 105. As used herein, "substantially coplanar" may refer to two surfaces being within 5µm of being perfectly coplanar. In an embodiment, the conductive pillars 106 may have a pitch that is approximately 25 µm or smaller. In a particular embodiment, the pitch of the conductive pillars 106 may be approximately 10µm or smaller. While primarily directed to small pitch architectures, it is to be appreciated that embodiments also include pitches that are greater than 25µm.In an embodiment, a second hybrid bonding layer 125 of a die 120 is bonded to the first hybrid bonding layer 101. 
The second hybrid bonding layer 125 may comprise a dielectric layer 121 and conductive pads 122. The dielectric layer 121 may comprise a dielectric such as a silicon oxide (e.g., SiO2). During the hybrid bonding process, the dielectric layer 121 bonds with the glass layer 105. In an embodiment, the conductive pads 122 may pass through the dielectric layer 121. The conductive pads 122 (e.g., copper pads 122) may have a bottom surface that is substantially coplanar with a bottom surface of the dielectric layer 121. During the hybrid bonding process the conductive pads 122 bond with the conductive pillars 106 through interdiffusion bonding.In an embodiment, successful hybrid bonding between the first hybrid bonding layer 101 and the second hybrid bonding layer 125 is made possible due, at least in part, to the mechanical rigidity provided by the glass layer 105. The glass layer 105 serves as a package stiffener that counteracts any warpage that may be induced by the underlying buildup layers 110. As such, a highly planar interface is provided, which is a requirement of hybrid bonding architectures.In an embodiment, the glass layer 105 may be provided over a stack of one or more buildup layers 110. The buildup layers 110 may be dielectric layers typical of electronics packaging architectures. In an embodiment, conductive features (e.g., traces 111, vias 112, pads, and the like) may be fabricated in the buildup layers 110. The conductive features may electrically couple conductive pillars 106 to pads 116 on an opposite side of the buildup layers 110. The pads 116 may be covered by a solder resist 115 with openings 117 to expose portions of the pads 116.It is to be appreciated that the orientation of the conductive features in the buildup layers 110 are flipped 180 degrees relative to traditional orientations. That is, in a traditional package, the structures are fabricated from a bottom up process starting with the bottom second level interconnects and progressing up to the FLIs. However, in the electronic package 100, the structure is fabricated with an FLI first process. As such, the first hybrid bonding layer 101 is formed first and the buildup layers are formed over the first hybrid bonding layer 101. This results in via structures being flipped. As used herein a flipped via structure may refer to a via 112 that has a first end 113 that is closer to the glass layer 105 than a second end 114. The first end 113 has a width that is smaller than a width of the second end 114. In typical package structures, the wider end (i.e., the second end 114) would be closer to the FLI layer (e.g., the glass layer 105).Referring now to Figure 1B , a cross-sectional illustration of an electronic package 100 is shown, in accordance with an additional embodiment. In an embodiment, the electronic package 100 in Figure 1B is substantially similar to the electronic package 100 in Figure 1A , with the exception of the bonding architecture between the die 120 and the first hybrid bonding layer 101. Whereas the embodiment shown in Figure 1A is a hybrid bonding architecture, the embodiment shown in Figure 1B is a solder bonding architecture.As shown, solder bumps 131 may be provided over the conductive pillars 106. The solder bumps 131 may be coupled to the pads 122 on the die 120. Such an embodiment may sometimes be referred to as a flip-chip bonding architecture. However, due to the fine pitch of the conductive pillars 106, denser interconnect architectures than traditional flip-chip bonding can be achieved. 
As will be described in greater detail below, the solder bumps 131 may be fabricated with plating processes over the conductive pillars 106.Referring now to Figure 1C , a cross-sectional illustration of an electronic package 100 is shown, in accordance with an additional embodiment. In an embodiment, the electronic package 100 is substantially similar to the electronic package 100 in Figure 1A , with the exception of the structure of the conductive pillars 106. In Figure 1A , the conductive pillars 106 have substantially vertical sidewalls. In the embodiment shown in Figure 1C , the conductive pillars 106 have sloped sidewalls 107. As used herein, "substantially vertical" sidewalls may refer to sidewalls that are within 10° of being perfectly orthogonal relative to an underlying surfaces.The sloped sidewalls 107 may be the result of the laser assisted etching process used to pattern the glass layer 105. In the particular embodiment shown in Figure 1C , the sloped sidewalls 107 form an hourglass shaped cross-section. That is, a width of the conductive pillars 106 decreases towards the middle of the conductive pillars 106. Such an hourglass shaped cross-section may be formed when laser exposure is provided on both surfaces of the glass layer 105. Dual sided patterning may be useful to increase the attainable aspect ratio of the patterned features in the glass layer 105. For example, aspect ratios of approximately 10:1 or greater, or even 50:1 or greater are possible. Such high aspect ratios allow for low pitch (e.g., 25µm or smaller) features to be fabricated in thick glass layers 105 (e.g., up to approximately 200µm). As such, package substrates with improved planarity can be provided. In other embodiments, the laser exposure may be on a single surface of the glass layer 105, and the sidewall 107 may have a single slope through the height of the conductive pillar 106.Referring now to Figures 2A-2I , a series of cross-sectional illustrations depicting a process for assembling an electronic package is shown, in accordance with an embodiment. The electronic package assembled in Figures 2A-2I may be substantially similar to the electronic package 100 that is shown in Figure 1A .Referring now to Figure 2A , a cross-sectional illustration of a glass layer 205 is shown, in accordance with an embodiment. In an embodiment, the glass layer 205 may be secured to a carrier 240 by an adhesive 241. The adhesive 241 may be a temporary adhesive, such as a laser releasable bond film. As such, when the carrier 240 needs to be removed, a laser exposure through the carrier 240 can be used to release the glass layer 205. The carrier 240 may be a glass carrier in some embodiments.In an embodiment, the glass layer 205 may be patterned before being attached to the carrier 240. For example, holes 203 may be formed through the glass layer 205. The holes 203 may have a pitch P. In an embodiment, the pitch P may be approximately 25µm or less. The holes 203 may be high aspect ratio holes 203. For example, an aspect ratio (depth:width) maybe approximately 10:1 or greater, or approximately 50:1 or greater. The high aspect ratio holes may be provided using a laser assisted etching process. While shown as having substantially vertical sidewalls, it is to be appreciated that the holes 203 may have sloped sidewalls. For example, the sidewalls may form an hourglass shaped hole 203, similar to the embodiment shown in Figure 1C . In other embodiments, the glass layer 205 may be patterned after being attached to the carrier 240. 
In other embodiments, the pattern in the glass layer 205 may be formed without an additional carrier 240.Referring now to Figure 2B , a cross-sectional illustration of the glass layer 205 after deposition of a conductive layer 207 is shown, in accordance with an embodiment. In an embodiment, the conductive layer 207 fills the holes 203 and covers a top surface of the glass layer 205. In an embodiment, the conductive layer 207 may be any conductive material. For example, the conductive layer 207 may be a copper layer.Referring now to Figure 2C , a cross-sectional illustration of the glass layer 205 after the conductive layer 207 is recessed is shown, in accordance with an embodiment. In an embodiment, the conductive layer 207 is recessed with a planarizing process, such as chemical mechanical planarization (CMP) or the like. The planarizing process results in the formation of the conductive pillars 206 within the holes 203. As such, the conductive pillars 206 have the same pitch P as the holes 203. In an embodiment, the conductive pillars 206 have top surfaces that are substantially coplanar with the top surface of the glass layer 205 and bottom surfaces that are substantially coplanar with the bottom surface of the glass layer 205.Referring now to Figure 2D , a cross-sectional illustration of the glass layer after formation of pads 218 is shown, in accordance with an embodiment. In an embodiment, the pads 218 may be positioned over the conductive pillars 206. The pads 218 may be formed with a deposition and patterning process, or any other patterning process typical of electronic packaging process flows. The pads 218 may be copper pads or another conductive material.Referring now to Figure 2E , a cross-sectional illustration of the glass layer 205 after a buildup layer 210 is provided over the glass layer 205 is shown, in accordance with an embodiment. In an embodiment, a via 219 may be formed through the buildup layer 210 to provide a vertical connection to one of the pads 218. The via 219 may be formed with a lithographic process or a laser drilling process. In the illustrated embodiment, a lithographic process is shown, as indicated by the substantially vertical sidewalls of the via 219. The buildup layer 210 may be any suitable dielectric material typical of electronic packaging processes. For example, the buildup layer 210 may be a buildup film (BF), a photoimageable dielectric (PID), or the like. The via 219 may be a conductive material. For example, the via 219 may comprise copper.Referring now to Figure 2F , a cross-sectional illustration of the glass layer 205 after more buildup layers 210 are formed is shown, in accordance with an embodiment. As shown, a trace 211 may provide lateral translation of the connection to the conductive pillar 206. An additional vertical connection is provided by a via 212. The via 212 may be formed with a laser patterning process. As a result of the laser patterning process, the via 212 may have a first end 213 with a first width and a second end 214 with a second width. The first width of the first end 213 may be smaller than the second width of the second end 214. The first end 213 may be closer to the glass layer 205 than the second end 214. Such an arrangement is atypical of existing electronic packages. That is, having the narrow end of the via 212 being closest to the FLI layer (i.e., the glass layer 205), is the result of the FLI first patterning process. 
In typical electronic packages, the FLI layer is formed last, and the underlying vias 212 in the buildup layers 210 have the wider end closest to the FLI layer. In an embodiment, the via 212 may be a conductive material, such as copper or the like.In an embodiment, pads 216 may be provided over the topside surface of the buildup layers 210. The pads 216 may be used for second level interconnect (SLI) architectures. For example, the pads 216 may be suitable for solder ball interconnects, or the like. In an embodiment, the pads 216 have a pitch that is greater than the pitch P of the conductive pillars 206. The pads 216 may be a conductive material, such as copper or the like.While shown with several vertical vias 219 and 212, it is to be appreciated that any number of vertical vias, traces, etc. may be provided between the pads 216 and the conductive pillars 206. That is, the stack of buildup layers 210 may include any number of layers and routing. Additionally, it is to be appreciated that other components may be embedded within the buildup layers 210. For example, bridge dies or other features may be embedded in the buildup layers and electrically coupled to one or more of the conductive pillars 206.Referring now to Figure 2G , a cross-sectional illustration of the structure after a solder resist 215 is disposed over the buildup layers 210 is shown, in accordance with an embodiment. The solder resist 215 may include openings 217 that expose portions of the pads 216. In some embodiments, the pads 216 may have barrier layers (not shown) or the like provided over the exposed portions of the pads 216. In an embodiment, the openings 217 may have sloped sidewalls as is typical of laser drilled openings.Referring now to Figure 2H , a cross-sectional illustration of the structure after the carrier 240 is removed is shown, in accordance with an embodiment. In an embodiment, the carrier 240 may be removed by exposing the adhesive 241 to a laser through the back of the carrier 240. Removal of the carrier 240 results in the exposure of surfaces of the first hybrid bonding layer 201. As shown, first surfaces 252 of the conductive pillars 206 are substantially coplanar with a first surface 251 of the glass layer 205. In some embodiments, further polishing (e.g., CMP) or the like may be used to further modify the positioning of surfaces 252 with the surface 251. For example, in some embodiments, the first surfaces 252 of the conductive pillars 206 may be slightly recessed from the first surface 251 of the glass layer 205. For example, the recess may be on the order of one to several nanometers.It is to be appreciated that even after removal of the carrier 240, planarity of the structure is substantially maintained. This is because the glass layer 205 serves as a stiffener that prevents warpage of the buildup layers 210 from negatively impacting the planarity of the device. The thickness of the glass layer 205 may be increased to provide improved mechanical rigidity. For example, the glass layer 205 may have a thickness of up to approximately 200µm in some embodiments.Referring now to Figure 2I , a cross-sectional illustration of the structure after a die 220 is adhered to the first hybrid bonding layer 201 is shown, in accordance with an embodiment. In an embodiment, the die 220 comprises a second hybrid bonding layer 225. The second hybrid bonding layer 225 comprises a dielectric layer 221 and pads 222. 
In an embodiment, the dielectric layer 221 is a silicon oxide, and the pads 222 are a conductive material, such as copper. At substantially room temperature, the dielectric layer 221 begins to bond with the glass layer 205. At elevated temperatures, the pads 222 and the conductive pillars 206 begin to undergo interdiffusion bonding. In some embodiments, the interdiffusion bonding is such that there may not be a visible seam at the interface between the pads 222 and the conductive pillars 206.Referring now to Figures 3A and 3B , cross-sectional illustrations depicting a process flow for assembling an electronic package using solder at the FLI is shown, in accordance with an additional embodiment. The processing operations implemented up to Figure 3A are the same processing operations described with respect to Figures 2A-2H , and will not be repeated here in the interest of brevity.Referring now to Figure 3A , a cross-sectional illustration of an electronic package after the removal of the carrier and deposition of solder 355 is shown, in accordance with an embodiment. In an embodiment, the electronic package comprises an FLI layer that comprises a glass layer 305 with conductive pillars 306 through the glass layer 305. The glass layer 305 and the conductive pillars 306 may be substantially similar to the glass layer 205 and the conductive pillars 206 described above. In an embodiment, the solder 355 may be deposited with a plating and patterning process. For example, the process may include, a seed deposition, resist patterning, copper deposition, nickel deposition, and tin deposition. After depositing the tin, the resist may be stripped, and the seed layer etched. In some embodiments, a diffusion barrier layer (not shown) may also be provided between the pillars 306 and the solder 355.In an embodiment, a stack of buildup layers 310 are provided below the glass layer 305. Similar to above, the FLI first assembly process results in the narrow end of the via 312 being closer to the glass layer 305 than the wide end of the via 312. The via 312 may be coupled to a pad 316 that is exposed by an opening 317 through a solder resist 315.Referring now to Figure 3B , a cross-sectional illustration of the structure after a die 320 is attached to the solder 355 is shown, in accordance with an embodiment. The solder 355 may couple the pads 322 on the die 320 with the conductive pillars 306 in the glass layer 305. That is, embodiments disclosed herein are not limited to hybrid bonding processes, and may also be used to provide flip-chip bonding as well.Referring now to Figure 4 , a cross-sectional illustration of an electronic system 490 is shown, in accordance with an embodiment. In an embodiment, the electronic system 490 comprises a board 491, such as a printed circuit board (PCB) or the like. In an embodiment, the board 491 may be coupled to a package substrate by interconnects 492. The interconnects may be solder balls, sockets, or any other second level interconnect architecture.In an embodiment, the package substrate comprises a first hybrid bonding layer 401. The first hybrid bonding layer 401 comprises a glass layer 405 and conductive pillars 406. The conductive pillars 406 may be coupled to the interconnects 492 through conductive routing through buildup layers 410 in the package substrate. For example, conductive routing may include a via 412. 
As shown, a narrow end of the via 412 may be closer to the glass layer 405 than a wide end of the via 412.A die 420 may be bonded to the first hybrid bonding layer 401 by a second hybrid bonding layer 425. The second hybrid bonding layer 425 may include pads 422 that are bonded to the conductive pillars 406 by interdiffusion bonding. The second hybrid bonding layer 425 may also include a dielectric layer 425 that is bonded to the glass layer 405.Figure 5 illustrates a computing device 500 in accordance with one implementation of the invention. The computing device 500 houses a board 502. The board 502 may include a number of components, including but not limited to a processor 504 and at least one communication chip 506. The processor 504 is physically and electrically coupled to the board 502. In some implementations the at least one communication chip 506 is also physically and electrically coupled to the board 502. In further implementations, the communication chip 506 is part of the processor 504.These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).The communication chip 506 enables wireless communications for the transfer of data to and from the computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 500 may include a plurality of communication chips 506. For instance, a first communication chip 506 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 506 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.The processor 504 of the computing device 500 includes an integrated circuit die packaged within the processor 504. In some implementations of the invention, the integrated circuit die of the processor may be part of an electronic package that comprises an FLI first package substrate that is hybrid bonded to the integrated circuit die, in accordance with embodiments described herein. 
The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.The communication chip 506 also includes an integrated circuit die packaged within the communication chip 506. In accordance with another implementation of the invention, the integrated circuit die of the communication chip may be part of an electronic package that comprises an FLI first package substrate that is hybrid bonded to the integrated circuit die, in accordance with embodiments described herein.The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.These modifications may be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.Example 1: an electronic package, comprising: a first layer comprising glass; conductive pillars through the first layer; a buildup layer stack on the first layer, wherein conductive routing is through the buildup layer stack; and a second layer over a surface of the buildup layer stack opposite from the first layer.Example 2: electronic package of Example 1, wherein the conductive routing comprises at least one via.Example 3: the electronic package of Example 2, wherein the via is a tapered via.Example 4: the electronic package of Example 3, wherein the tapered via has a first end with a first width and a second end with a second width that is smaller than the first width, and wherein a distance between the second end and the first layer is smaller than a distance between the first end and the first layer.Example 5: the electronic package of Examples 1-4, further comprising: a plurality of solder balls, wherein individual ones of the plurality of solder balls are provided over corresponding ones of the conductive pillars.Example 6: the electronic package of Examples 1-5, wherein the conductive pillars have non-vertical sidewalls.Example 7: the electronic package of Example 6, wherein the conductive pillars have an hourglass shaped cross-section.Example 8: the electronic package of Examples 1-7, wherein a pitch of the conductive pillars is approximately 25µm or smaller.Example 9: the electronic package of Examples 1-8, wherein the first layer has a thickness that is approximately 200µm or smaller.Example 10: an electronic package, comprising: a die, wherein the die comprises: a first hybrid bonding layer; and a package substrate, wherein the package substrate comprises: a second hybrid bonding layer, comprising: a third layer comprising glass; and conductive pillars through the third layer, wherein the first hybrid bonding layer is coupled to the second hybrid bonding layer.Example 11: the electronic package of Example 10, wherein the package substrate further comprises: a buildup 
layer stack on the third layer, wherein conductive routing is through the buildup layer stack.Example 12: the electronic package of Example 11, wherein the conductive routing includes a via.Example 13: the electronic package of Example 12, wherein the via has a first end with a first width and a second end with a second width that is smaller than the first width, and wherein the second end is closer to the third layer than the first end.Example 14: the electronic package of Examples 10-13, wherein the conductive pillars have non-vertical sidewalls.Example 15: the electronic package of Example 14, wherein the conductive pillars have an hourglass shaped cross-section.Example 16: the electronic package of Examples 10-15, wherein a pitch of the conductive pillars is approximately 25µm or smaller.Example 17: the electronic package of Examples 10-16, wherein the third layer has a thickness that is approximately 200µm or smaller.Example 18: the electronic package of Examples 10-17, wherein the first hybrid bonding layer comprises: conductive pads; and a dielectric layer around the conductive pads.Example 19: the electronic package of Example 18, wherein the dielectric layer is a silicon oxide.Example 20: a method of forming an electronic package, comprising: forming openings through a glass layer; attaching the glass layer to a carrier; filling the openings with a conductive material to form conductive pillars; forming a buildup layer stack with conductive routing over the glass layer; and removing the carrier.Example 21: the method of Example 20, further comprising: forming a solder resist layer over the buildup layer stack prior to removing the carrier.Example 22: the method of Example 20 or Example 21, wherein the conductive routing comprises a via with a taper, wherein a first end of the via closest to the glass layer is narrower than a second end of the via.Example 23: the method of Examples 20-22, wherein the glass layer has a thickness of approximately 200µm or less, and wherein a pitch of the conductive pillars is approximately 25µm or smaller.Example 24: an electronic system, comprising: a board; a package substrate coupled to the board, wherein the package substrate comprises a first hybrid bonding layer with a glass layer and conductive pillars; and a die coupled to the package substrate, wherein the die comprises a second hybrid bonding layer, wherein the first hybrid bonding layer is connected to the second hybrid bonding layer.Example 25: the electronic system of Example 24, wherein conductive routing in the package substrate comprises a via with a taper, wherein a first end of the via closest to the glass layer is narrower than a second end of the via. |
A non-volatile memory is described having memory cells with a gate dielectric. The gate dielectric is a multilayer charge trapping dielectric between a control gate and a channel region of a transistor to trap positively charged holes. The multilayer charge trapping dielectric comprises at least one layer of high-K. |
CLAIMS 1. A non- volatile memory, comprising: source and drain regions located in a transistor body region, the source and drain regions are laterally spaced apart to form a channel region therebetween; a control gate isolated from and located vertically above the channel region; a multilayer charge trapping dielectric between the control gate and the channel region to trap positively charged holes, wherein the multilayer charge trapping dielectric comprises at least one layer of high-K dielectric having a dielectric constant (K) greater than seven. 2. The non- volatile memory of claim 1, comprising program circuitry to program the multilayer charge trapping dielectric by injecting holes onto the layer of high-K dielectric. 3. The non- volatile memory of claim 2, wherein the multilayer charge trapping dielectric comprises the layer of high-K dielectric located between first and second layers of oxide. 5. The non- volatile memory of claim 1, wherein the multilayer charge trapping dielectric comprises an oxide layer, a nitride layer and the layer of high-K dielectric. 6. The non- volatile memory of claim 5, wherein the layer of high-K dielectric is selected from Al2O3, HfO2 or ZrO2. 7. The non- volatile memory of claim 6, wherein the layer of high-K dielectric is formed using an atomic layer deposition process. 8. The non- volatile memory of claim 1, wherein the multilayer charge trapping dielectric comprises first, second and third layers of high-K dielectric. 9. The non- volatile memory of claim 8, wherein the multilayer charge trapping dielectric comprises a layer OfTa2O5 located between first and second layers of HfO2. 10. The non- volatile memory of claim 8, wherein the multilayer charge trapping dielectric comprises a layer OfHfO2 located between first and second layers of La2O3. 11. The non- volatile memory of claim 8, wherein the multilayer charge trapping dielectric comprises a layer of ZrO2 located between first and second layers of HfO2. 12. The non- volatile memory of claim 8, wherein the multilayer charge trapping dielectric comprises a layer of ZrO2 located between first and second layers of Lanthanide Oxide. 13. The non- volatile memory of claim 8, wherein the multilayer charge trapping dielectric comprises a layer OfHfO2 located between first and second layers of Lanthanide Oxide. 14. The non- volatile memory of any of the preceding claims, further comprising a discrete bi-polar junction located below the channel region. 15. The non- volatile memory of any of the preceding claims, further comprising: an array of positive charge hole trapping memories; and write circuitry to write data to the memory cells during a write operation. 16. The non- volatile memory of any of the preceding claims, wherein injecting holes into the layer of high-K dielectric comprises hot hole injection from the channel region. 17. The non-volatile memory of any one of claims 1-16, wherein injecting holes into the layer of liigh-K dielectric comprises light generated holes accelerated in an electric field. 18. The non- volatile memory of any one of claims 1-16, wherein inj ecting holes into the layer of high-K dielectric comprises holes injected via a p-n junction located below the transistor channel region. 19. The non- volatile memory of any one of claims 1-16, wherein the layer of high-K dielectric comprises holes generated at an interface of the multi-layer dielectric and the control gate by electrons tunneling off of the control gate. 20. 
The non- volatile memory of any of the preceding claims, wherein the layer of high-K dielectric is selected from the group HfO2, ZrO2, ZrSnTiO, ZrON, ZrAlO, ZrTiO4, Al2O3, La2O3, LaAlO3, HfAlO3, HfSiON, Y2O3, Gd2O3, Ta2O5, TiO2, Pr2O3, CrTiO3 and YSiO. 21. A method of programming a non- volatile memory transistor comprising: injecting positively charged holes into a multilayer dielectric located between a control gate and a channel region of the transistor, the multilayer dielectric comprising at least one layer of high-K dielectric having a dielectric constant (K) greater than seven; and trapping the positively charged holes in the layer of high-K dielectric. 22. The method of claim 21, wherein the injecting positively charged holes comprises hot hole injection from the channel region. 23. The method of claim 21, wherein the injecting positively charged holes comprises light generated holes accelerated in an electric field. 24. The method of claim 21, wherein the injecting positively charged holes comprises injecting holes via a p-n junction located below the transistor channel region. 25. The method of claim 21, wherein the injecting positively charged holes comprises generating holes at an interface of the multi-layer dielectric and the control gate by electrons tunneling off of the control gate. |
MEMORY USING HOLE TRAPPING IN HIGH-K DIELECTRICSField of the InventionThe present invention relates to non- volatile memory devices, and more particularly to hole trapping memory devices.Background Flash memory is non- volatile, which means that it stores information on a semiconductor in a way that does not need power to maintain the information in the chip. Flash memory is based on the Floating-Gate Avalanche-Injection Metal Oxide Semiconductor (FAMOS transistor), which is essentially a Complimentary Metal Oxide Semiconductor (CMOS) Field Effect Transistor (FET) with an additional conductor suspended between the gate and source/drain terminals. Current flash memory devices are made in two forms: NOR flash and NAND flash. The names refer to the type of logic used in the storage cell array. Further, flash memory stores information in an array of transistors, called "cells," each of which traditionally stores one or more bits of information. A flash cell is similar to a standard Metal Oxide Semi-conductor FieldEffect Transistor (MOSFET) transistor, except that it has two gates instead of just one. One gate is the control gate (CG) like in other MOS transistors, but the second is a floating gate (FG) that is insulated all around by an oxide layer. The FG is between the CG and the substrate. Because the FG is isolated by its insulating oxide layer, any electrons placed on it get trapped there and thus store the information.When electrons are trapped on the FG, they modify (partially cancel out) an electric field coming from the CG, which modifies the threshold voltage (Vt) of the cell. Thus, when the cell is "read" by placing a specific voltage on the CG, electrical current will either flow or not flow between the cell's source and drain connections, depending on the Vt of the cell. This presence or absence of current is sensed and translated into l's and O's, reproducing the stored data.A different non- volatile memory, Nitrided Read Only Memory (NROM), utilizes inherent physical features of an oxide-nitride-oxide (ONO) gate dielectric and known mechanisms of program and erase operations to create two separate physical bits per cell. The NROM cell is based on localized negative charge trapping. The cell is an n-channel MOSFET device where the gate dielectric is replaced by an ONO stack. Two spatially separated narrow charge distributions are stored in the nitride layer above junction edges. The NROM cell is programmed by channel hot electron injection.The NROM memory devices have attracted much attention due to their advantages over the traditional floating-gate flash device, including lower programming voltage, better scalability, and improved cycling endurance. An advantage of the NROM cell is the negligible vertical retention loss due to inhibition of direct tunneling. Further, in floating gate technology the electron charge is stored in a conductive layer, and any minor oxide defect or oxide trapped charge under the gate might cause leakage and loss of all the stored charge. NROM technology, however, uses a nitride insulator as a retaining material, hence only a large defect in the oxide (comparable to the cell size) could degrade retention.Brief Description of the DrawingsFigure 1 is a block diagram of a memory according to one embodiment of the invention.Figure 2 is a cross-section of a prior art transistor. 
Figure 3 is a cross-section of a transistor of one embodiment with a buried P-N junction.Figure 4 is a cross-section of a transistor of one embodiment with a multi-layered dielectric.DescriptionIn the following detailed description of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown, by way of illustration, different embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention.The terms wafer and substrate used in the following description include any structure having an exposed surface onto which a layer is deposited according to the present invention, for example to form the integrated circuit (IC) structure. The term substrate is understood to include semiconductor wafers. The term substrate is also used to refer to semiconductor structures during processing, and may include other layers that have been fabricated thereupon. Both wafer and substrate include doped and undoped semiconductors, epitaxial semiconductor layers supported by a base semiconductor or insulator, as well as other semiconductor structures. The term conductor is understood to include semiconductors, and the term insulator is defined to include any material that is less electrically conductive than the materials referred to as conductors. As recognized by those skilled in the art, memory devices of the type described herein are generally fabricated as an integrated circuit containing a variety of semiconductor devices. The integrated circuit is supported by a substrate. Integrated circuits are typically repeated multiple times on each substrate. The substrate is further processed to separate the integrated circuits into dice as is well known in the art.Relative terms such as above, below, lateral and adjacent are not limited to a specific coordinate system. These terms are used to describe relative positions between components and are not intended to be limitations. As such, additional components can be positioned between components that are above, below, lateral and adjacent to each other. Further, the figures are provided to help facilitate an understanding of the detailed description, are not intended to be accurate in scale, and have been simplified.The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.Figure 1 is a simplified block diagram of an integrated circuit memory device 100 in accordance with an embodiment of the invention. The memory device 100 includes an array of non- volatile memory cells 102, address circuitry 104, control circuitry 110, and Input/Output (I/O) circuitry 114.The memory device 100 can be coupled to a processor 120 or other memory controller for accessing the memory array 102. The memory device 100 coupled to a processor 120 forms part of an electronic system. 
Some examples of electronic systems include personal computers, peripheral devices, wireless devices, digital cameras, personal digital assistants (PDA's) and audio recorders.The memory device 100 receives control signals across control lines 122 from the processor 120 to control access to the memory array 102 via control circuitry 110. Access to the memory array 102 is directed to one or more target memory cells in response to address signals received across address lines 124. Once the array is accessed in response to the control signals and the address signals, data is written to or read from the memory cells across data, DQ, lines 126.It will be appreciated by those skilled in the art that additional circuitry and control signals can be provided, and that the memory device of Figure 1 has been simplified to help focus on the invention. It will be understood that the above description of a memory device is intended to provide a general understanding of the memory and is not a complete description of all the elements and features of a typical memory device. hi embodiments of the invention, a p-channel MOSFET with a high dielectric constant, high-K, gate insulator with hole trapping in the gate insulator is provided as a memory device. Programming can be achieved by hot hole injection from a transistor channel, light generated holes accelerated in an electric field, holes injected into the device by a buried p-n junction, or holes generated at the gate insulator-substrate interface by highly energetic electrons tunneling off of the gate. Data can be read by operating the transistor in the forward direction, or if holes are injected only near the drain by operating the transistor in the reverse direction.Different methods of programming holes in the high-K dielectric can be employed in the present invention. Many of the available programming techniques are well known in the art and briefly explained below. For purposes of simplicity, control circuitry 110 is referred to herein as encompassing program circuitry to program a multilayer charge trapping dielectric by injecting holes onto the at least one layer of high-K dielectric.Flash memories based on p-channel MOSFETs using hole trapping in gate oxides as a memory technique and hot hole injection are known. Further, hole trapping has been described for use in fuses and anti-fuse devices. In such memories and structures, holes from a silicon substrate are generated by large negative gate voltages, hot hole injection from the channel, or by light.Figure 2 depicts a simplified cross-section of a prior art metal oxide semiconductor field effect transistor (MOSFET) in a substrate 200. The 'MOSFET includes a source region 202, a drain region 204, and a channel region 206 in the substrate 200 between the source region 202 and the drain region 204. A gate 208 is separated from the channel region 206 by a gate oxide 210. A source line 212 is coupled to the source region 202. hi a memory device, a bitline conductor 214 is coupled to the drain region 204. A wordline conductor 216 is coupled to the gate 208. In conventional operation, a drain to source voltage potential (Vds) is set up between the drain region 204 and the source region 202. A negative voltage potential is then applied to the gate 208 via the wordline 216. Once the negative voltage potential applied to the gate exceeds the characteristic voltage threshold (Vt) of the MOSFET, the channel 206 forms in the substrate 200 between the drain region 204 and the source region 202. 
Formation of the channel 206 permits conduction between the drain region 204 and the source region 202, and a current (Ids) can be detected at the drain region 204. During operation of the conventional MOSFET of Figure 2, some change in the device drain current can be programmed for MOSFETs operated in the forward direction due to holes being trapped in the gate oxide 210 near the drain region 204. This can be accomplished by hot hole injection when the transistor is operated with a drain voltage, Vds, near the gate voltage, Vgs. Since in this case the holes are trapped near the drain region 204, however, they are not very effective in changing the characteristics of the MOSFET. They are only effective if the transistor is operated in the reverse direction during the read cycle as in reading an NROM device. As such, hot hole injection of the prior art can be used with embodiments of the present invention.Alternatively, a sufficiently large negative gate bias voltage can be applied to cause tunnel electrons from the gate to gain enough energy to exceed the band gap energy of the gate insulator. As a result, energetic hole-electron pairs are generated in the silicon substrate and the holes have enough energy to overcome the barrier at the insulator and substrate interface.The holes are then injected from the substrate into the gate dielectric, where they remain trapped. A large shift in the threshold voltage of the p- channel MOSFET results. The device can subsequently be reset by applying a positive gate bias voltage. It is known in the art that the positive charge generated in gate oxides by hot hole injection can be erased by avalanche electron injection.Another prior art method to inject holes is to generate electron hole pairs by providing incident light. The holes are accelerated towards the gate insulator or oxide and trapped in the gate insulator. Trapped positive charge results in a change in the device drain current and can be used as a memory effect or memory device. This is accomplished by hot hole injection when the transistor is operated with a drain voltage near Vgs. Erasure is achieved by hot electron injection by operation with a drain voltage, Vds, much larger than the gate voltage, Vgs.Figure 3 depicts a semiconductor device having a bipolar (pnp) transistor-like structure, according to one embodiment of the invention, which allows uniform injection of holes. The device includes a source region 302, a drain region 304, a back gate region 306, and a channel region 308 in the substrate 300 between the source region 302 and the drain region 304. A gate 310 is separated from the channel region 308 by a multi layer gate dielectric 312. The gate dielectric contains at least one layer of a high-K dielectric, as explained below. A source line 314 is coupled to the source region 302. A bitline conductor 316 is coupled to the drain region 304. A wordline conductor 318 is coupled to the gate 310. A terminal 320 is coupled to the back gate region 306. The back gate 306 forms a p-n junction with substrate 300. When a positive voltage Veb is applied to the back gate region 306 via the terminal 320 and a negative voltage is applied to the gate 310 via the wordline 318, holes are injected from the p-n junction in the back gate region into the gate insulator 312. 
This effect is depicted in Figure 3 and results in a change in the device threshold voltage.Regardless of the programming method employed, embodiments of the present invention use a high-K (high dielectric constant) dielectric in the gate dielectric to trap positive charged holes. For the present embodiments, high-K dielectrics are defined as those with a dielectric constant greater than that of silicon nitride (i.e., > k = 7).Figure 4 depicts a simplified cross-section of a metal oxide semiconductor field effect transistor (MOSFET) memory cell of the present invention. The memory cell is formed in a substrate 400. The cell includes a source region 402, a drain region 404, and a channel region 406 in the substrate 400 between the source region 402 and the drain region 404. A gate 408 is separated from the channel region 406 by a multi layer gate dielectric 410. The dielectric layers include one or more layers of high-K dielectric material.A source line 412 is coupled to the source region 402. hi a memory device, a bitline conductor 414 is coupled to the drain region 404. A wordline conductor 416 is coupled to the gate 408.High-K dielectrics have smaller bandgap energies, and less voltage is required to inject holes into the gate insulator 410. These high-K dielectrics can be composite layers, or nanolaminates, formed by oxidation, chemical vapor deposition (CVD), evaporation, or atomic layer deposition (ALD), depending on the material used. The band gap energy of high-K dielectrics becomes smaller as the dielectric constant increases.Example high-K dielectrics of the present invention gate dielectric include a high-K dielectric between two layers of an oxide. The high-K dielectric layer in the composite gate insulator can be selected from Table 1 and the associated fabrication techniques: TABLE lFurther examples of the present invention gate dielectric include an oxide- nitride - high-K dielectric composite layered gate insulator. The high-K dielectric layer in the composite gate insulator can be selected from ALD formed Al2O33 HfO2 OrZrO2.Further examples of the present invention gate dielectric include three stacked layers of high-K dielectrics. The high-K dielectric layers in the composite gate insulator can be selected from dielectrics of Table 2 formed by ALD. TABLE 2A further example of the present invention gate dielectric includes a high-K - high-K - high-K dielectric composite layered gate insulator formed comprising evaporated HfO2 between two layers of ALD formed Lanthanide (Pr, Ne, Sm, Gd and Dy) Oxide. |
An apparatus and method for closed loop dynamic resource allocation. For example, one embodiment of a method comprises: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priority workloads associated with one or more guaranteed performance levels and best effort workloads not associated with guaranteed performance levels; analyzing the data to identify resource reallocations from one or more of the priority workloads to one or more of the best effort workloads in one or more subsequent time periods while still maintaining the guaranteed performance levels; reallocating the resources from the priority workloads to the best effort workloads for the subsequent time periods; monitoring execution of the priority workloads with respect to the guaranteed performance level during the subsequent time periods; and preemptively reallocating resources from the best effort workloads to the priority workloads during the subsequent time periods to ensure compliance with the guaranteed performance level and responsive to detecting that the guaranteed performance level is in danger of being breached. |
1.A method that includes:Collect data related to the usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including a priority workload associated with one or more guaranteed performance levels, and Guaranteed performance levels are not associated with best-effort workloads;analyzing the data to identify resource reallocations from one or more priority workloads to one or more best-effort workloads over one or more subsequent time periods while still maintaining guaranteed performance levels;Reallocate resources from priority workloads to best-effort workloads over subsequent time periods;monitor the execution of priority workloads with respect to guaranteed performance levels during subsequent time periods; andResponsive to detecting that the guaranteed performance level is in danger of being violated, resources are preemptively reallocated from the best-effort workload to the priority workload during a subsequent time period.2.The method of claim 1, wherein the guaranteed performance level includes a guaranteed latency and/or a guaranteed throughput.3.The method of claim 2, wherein the guaranteed performance level is specified as a key performance indicator (KPI) of a service level agreement (SLA).4.The method of any one of claims 1 to 3, wherein analyzing comprises:Reinforcement learning is performed using the data to identify resource reallocations while still maintaining guaranteed performance levels.5.The method of claim 4, wherein performing reinforcement learning further comprises:generating a first one or more reward values associated with resource allocation to the best-effort workload;generating a second one or more reward values and/or one or more penalty values in response to detecting the specified performance metric associated with the priority workload;adding the reward value and the penalty value to generate the final reward value; andReallocate resources to try to maximize the final reward value.6.5. The method of claim 5, wherein resource allocation includes cache allocation to best-effort workloads, wherein the increase in cache allocation is used to generate an increased reward value.7.6. The method of claim 6, wherein the second one or more reward values comprise performance reward values for maintaining consistency with one or more guaranteed performance levels.8.7. The method of any of claims 1 to 7, wherein the first set of resource allocations are to be performed by a resource management circuit of the processor.9.9. 
The method of claim 8, wherein the first set of resource allocations includes cache occupancy levels and cache or memory bandwidth.10.A device comprising:A telemetry data collector to collect data related to the usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priorities associated with one or more guaranteed performance levels Class workloads, and best-effort workloads not associated with guaranteed performance levels;A resource allocation controller to analyze the data to identify resources to be reallocated from one or more priority workloads to one or more best-effort workloads in one or more subsequent time periods, while still maintaining all A guaranteed level of performance that the resource allocation controller uses to reallocate resources from priority workloads to best-effort workloads over subsequent time periods;a telemetry data collector to monitor the execution of priority workloads with respect to guaranteed performance levels during subsequent time periods; andA resource allocation controller to preemptively reallocate resources from the best-effort workload to the priority workload during a subsequent time period in response to detecting that the guaranteed performance level is in danger of being violated.11.11. The apparatus of claim 10, wherein the guaranteed performance level includes a guaranteed latency and/or a guaranteed throughput.12.11. The apparatus of claim 11, wherein the guaranteed performance level is specified as a key performance indicator (KPI) of a service level agreement (SLA).13.The apparatus of any one of claims 10 to 12, wherein the resource allocation controller comprises:A machine learning engine to perform reinforcement learning using the data to identify resource reallocations while still maintaining guaranteed performance levels.14.14. The apparatus of claim 13, wherein performing reinforcement learning further comprises:generating a first one or more reward values associated with resource allocation to the best-effort workload;generating a second one or more reward values and/or one or more penalty values in response to detecting the specified performance metric associated with the priority workload;adding the reward value and the penalty value to generate the final reward value; andReallocate resources to try to maximize the final reward value.15.15. The apparatus of claim 14, wherein resource allocation includes cache allocation to best-effort workloads, wherein the increase in cache allocation is used to generate an increased reward value. |
Apparatus and method for closed-loop dynamic resource allocation control frameworkBackground technique.technical fieldEmbodiments of the invention relate generally to the field of computer processors. More particularly, embodiments relate to apparatus and methods for a closed-loop dynamic resource allocation control framework.Description of Related ArtQuality of service is an important mechanism for implementing priority-based fairness in computer systems, and can be achieved by allocating dedicated paths or slots in shared buffers and queues, packet-based virtual channels, and the like. Quality of service hooks (hooks) are used today in caches, memory subsystem queues, memory controllers, and switch cards.Intel® Resource Director Technology® (RDT) provides the ability to control how applications, virtual machines (VMs) and containers use shared resources such as last level cache (LLC) and memory bandwidth. RDT facilitates workload consolidation, performance consistency, and dynamic service delivery to help improve efficiency and flexibility across data center and networking domains while reducing overall total cost of ownership (TCO).Description of drawingsA better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, wherein:1A and 1B are block diagrams illustrating a generic vector friendly instruction format and an instruction template thereof according to an embodiment of the present invention;2A-C are block diagrams illustrating exemplary VEX instruction formats according to embodiments of the present invention;Figure 3 is a block diagram of a register architecture according to one embodiment of the invention; and4A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with an embodiment of the present invention;4B is a block diagram illustrating both in-order fetch, decode, rollback cores and exemplary register renaming, out-of-order issue/execution architecture cores to be included in a processor in accordance with an embodiment of the present invention;5A is a block diagram of a single processor core along with its connections to an on-die interconnect network;5B illustrates an expanded view of a portion of the processor core in FIG. 
5A according to an embodiment of the present invention;6 is a block diagram of a single-core processor and a multi-core processor with an integrated memory controller and graphics controller according to an embodiment of the present invention;Figure 7 illustrates a block diagram of a system according to one embodiment of the invention;8 illustrates a block diagram of a second system according to an embodiment of the present invention;Figure 9 illustrates a block diagram of a third system according to an embodiment of the present invention;10 illustrates a block diagram of a system-on-chip (SoC) according to an embodiment of the present invention;11 illustrates a block diagram comparing the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set in accordance with an embodiment of the present invention;12A-B illustrate an embodiment in which a set of rules specifies allocation of a first resource based on usage of a second resource;Figure 13 illustrates one embodiment including a resource monitor and implementation circuitry;Figure 14 illustrates a method according to one embodiment of the present invention;15A-B illustrate potential resource allocation to best-effort workloads;Figure 16 illustrates periodic throughput for high priority workloads;Figures 17A-C illustrate instructions/sec for a best-effort workload under different conditions including an implementation of one embodiment of the present invention (Figure 17B);18 illustrates an architecture for using machine learning for resource allocation in processors and/or systems;19 illustrates another architecture in which telemetry data is collected and evaluated for resource allocation optimization;20 illustrates a specific implementation in which reinforcement learning is performed based on packet loss data;Figure 21 illustrates a method according to one embodiment of the present invention;Figure 22 illustrates an example of a facial recognition pipeline;23 illustrates an architecture for performing facial recognition on a distributed node architecture;Figure 24 illustrates one embodiment in which resource management is performed based on performance markers;Figure 25 illustrates one embodiment of a scheduler/resource manager for allocating resources based on expected state and current state;Figure 26 illustrates a method according to one embodiment of the present invention; andFigure 27 illustrates a method according to one embodiment of the present invention.detailed descriptionIn the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the basic principles of embodiments of the present invention.Example Processor Architectures, Instruction Formats, and Data TypesAn instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, bit positions) to specify, among other things, the operation to be performed (opcode) and the operand(s) on which the operation is to be performed. 
Some instruction formats are further broken down by the definition of instruction templates (or sub-formats). For example, an instruction template for a given instruction format can be defined to have different subsets of the fields of the instruction format (the fields included are usually in the same order, but at least some have different bit positions because fewer fields are included) and/or is defined as a given field with a different interpretation. Thus, each instruction of the ISA is expressed using a given instruction format (and, if defined, in a given instruction template of that instruction format), and includes fields for specifying operations and operands. For example, an exemplary ADD instruction has a specific opcode and instruction format that includes an opcode field to specify that the opcode and operand field select operands (source 1/destination and source 2); and the ADD instruction is in Occurrences in the instruction stream will have specific content in the operand field that selects specific operands.Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be implemented in such systems, architectures and pipelines, but are not limited to those detailed.Generic Vector Friendly Instruction FormatA vector friendly instruction format is an instruction format suitable for vector instructions (eg, there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through a vector friendly instruction format, alternative embodiments use only the vector friendly instruction format for vector operations.1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention. 1A is a block diagram illustrating a generic vector friendly instruction format and its category A instruction template according to an embodiment of the present invention; and FIG. 1B is a block diagram illustrating a generic vector friendly instruction format and its category B according to an embodiment of the present invention Block diagram of the directive template. Specifically, the generic vector friendly instruction format 100 defines for it Class A and Class B instruction templates, both of which include no memory access 105 instruction templates and memory access 120 instruction templates. 
The term generic in the context of a vector friendly instruction format means that the instruction format is not tied to any particular instruction set.Although an embodiment of the invention will be described in which the vector friendly instruction format supports the following: 64-byte vector operand length ( or size) (and thus, a 64-byte vector consists of 16 doubleword-sized elements or, alternatively, 8 quadword-sized elements); has 16 bits (2 bytes) or 8 bits (1 byte) 64-byte vector operand length (or size) of data element width (or size); with 32 bits (4 bytes), 64 bits (8 bytes), 16 bits (2 bytes), or 8 bits (1 word) section) 32-byte vector operand length (or size) of data element width (or size); and 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1 byte) data element width (or size) of a 16-byte vector operand length (or size), but alternative embodiments may support having more, less, or different data element widths (e.g., 128 bits ( 16 bytes) data element width) more, less, and/or different vector operand sizes (eg, 256-byte vector operands).The category A instruction templates in FIG. 1A include: 1) within the no memory access 105 instruction template, the no memory access, full rounding control type operation 110 instruction template and the no memory access, data transformation type operation 115 instruction template are shown; and 2) within the memory access 120 instruction template, the memory access, transient 725 instruction template and the memory access, non-transient 730 instruction template are shown. The Class B instruction templates in Figure 1B include: 1) Within the no memory access 105 instruction template, the no memory access, write mask control, partial round control type operation 112 instruction template and the no memory access, write are shown mask control, vsize type operation 117 instruction template; and 2) within the memory access 120 instruction template, the memory access, write mask control 127 instruction template is shown.The generic vector friendly instruction format 100 includes the following fields listed below in the order illustrated in FIGS. 1A-1B .Format field 140 - A specific value in this field (instruction format identifier value) uniquely identifies the vector friendly instruction format, and thus uniquely identifies the occurrence of an instruction in the vector friendly instruction format in the instruction stream. As such, this field is optional in the sense that it is not required for instruction sets that only have the generic vector friendly instruction format.Basic operation field 142 - its content distinguishes different basic operations.Register Index Field 144 - whose content specifies the location of the source and destination operands, either directly or through address generation, whether they are in registers or in memory. These include a sufficient number of bits to select N registers from the PxQ (eg 32x512, 16x128, 32x1024, 64x1024) register file. 
Although in one embodiment, N may be up to three source and one destination registers, alternative embodiments may support more or fewer source and destination registers (eg, may support up to two sources, where these One of the sources also acts as a destination, which can support up to three sources, where one of these sources also acts as a destination, which can support up to two sources and one destination).Modifier field 146 - whose content distinguishes the presence of instructions in the generic vector instruction format that specify memory accesses from those that do not; that is, between the no memory access 105 instruction template and the memory access 120 instruction template. distinguish between. Memory access operations read and/or write the memory hierarchy (in some cases using values in registers to specify source and/or destination addresses), while non-memory access operations do not (for example, the source and destination are register). Although in one embodiment, this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.Extended operation field 150 - whose content, in addition to the basic operation, distinguishes which of a number of different operations is to be performed. This field is context specific. In one embodiment of the invention, this field is divided into a category field 168 , an alpha field 152 and a beta field 154 . The extended operation field 150 allows a common group of operations to be performed not in 2, 3 or 4 instructions but in a single instruction.Scale field 160 - whose contents allow scaling of the contents of the index field for memory address generation (eg, for address generation using 2scale*index+base).Displacement field 162A - whose content is used as part of memory address generation (eg, for address generation using 2scale*index+base+displacement).Displacement factor field 162B (note that the concatenation of displacement field 162A directly above displacement factor field 162B indicates that one or the other is used) - its content is used as part of address generation; it specifies the The access size (N) - where N is the number of bytes in the memory access - is scaled by the displacement factor (eg, for address generation using 2scale*index + base + scale displacement). Redundant low-order bits are ignored, and thus the contents of the displacement factor field are multiplied by the total memory operand size (N) to generate the final displacement used in computing the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are optional in the sense that they are not used for the no memory access 105 instruction template and/or different embodiments may implement only one of the two or neither.Data Element Width Field 164 - its content distinguishes which of multiple data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some instructions). 
This field is optional in the sense that it is not required if only one data element width is supported and/or the data element width is supported using some aspect of the opcode.Write Mask Field 170 - whose content controls on a per data element position basis whether that data element position in the destination vector operand reflects the results of the base and augmentation operations. Class A instruction templates support merge-write masks, while class B instruction templates support both merge-write masks and zero-write masks. When merging, a vector mask allows to protect any set of elements in the destination from updating during the execution of any operation (specified by the base and augment operations); The old value of each element of the destination of 0. In contrast, when zeroed, a vector mask allows any set of elements in the destination to be zeroed during the execution of any operation (specified by the base and augment operations); in one embodiment, when the corresponding mask When the bit has a value of 0, the element of the destination is set to 0. A subset of this functionality is the ability to control the vector length of the operation being performed (ie, the range of elements being modified, from the first to the last); however, the elements being modified need not be contiguous. Thus, write mask field 770 allows partial vector operations, including loads, stores, arithmetic, logic, and the like. Although described where the content of the write mask field 170 selects one of a number of write mask registers that contains the write mask to be used (and thus the content of the write mask field 170 indirectly identifies the mask), but alternative embodiments instead or in addition allow the contents of the mask write field 170 to directly specify the mask to be performed.Immediate field 172 - whose content allows specification of immediates. This field is optional in the sense that it is absent in implementations of the generic vector-friendly format that do not support immediates and it is absent in instructions that do not use immediates.Category field 168 - whose content differentiates between different categories of instructions. Referring to Figures 1A-B, the content of this field selects between Category A and Category B instructions. In FIGS. 1A-B, rounded squares are used to indicate the presence of a particular value in a field (eg, category A 168A and category B 168B for category field 168 in FIGS. 1A-B, respectively).Instruction Templates for Category AIn the case of a class A non-memory access 105 instruction template, the ɑ field 152 is interpreted as the RS field 152A, the content of which distinguishes which of the different types of augmentation operations (eg, rounding 152A.1 and data transform 152A) are to be performed .2 are specified for the no memory access round type operation 110 and the no memory access data transform type operation 115 instruction template respectively), while the beta field 154 distinguishes which operation of the specified type is to be performed. In the no memory access 105 instruction template, scale field 160, displacement field 162A, and displacement scale field 162B are absent.No memory access instruction template - full rounding control type operationsIn the no memory access full rounding control type operation 110 instruction template, beta field 154 is interpreted as rounding control field 154A, the content of which provides static rounding. 
Although in the described embodiment of the invention the rounding control field 154A includes the suppress all floating point exception (SAE) field 156 and the rounding operation control field 158, alternative embodiments may support that these concepts may all be encoded into the same field or only one or the other of these concepts/fields (eg, may only have round operation control field 158).SAE field 156 - its content distinguishes whether exception event reporting is disabled; when the content of SAE field 156 indicates that suppression is enabled, a given instruction does not report any type of floating-point exception flag and does not raise any floating-point exception handler.Round operation control field 158 - its content distinguishes which of the group of round operations to perform (eg, round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 158 allows the rounding mode to be changed on a per instruction basis. In one embodiment of the invention in which the processor includes a control register for specifying the rounding mode, the contents of the round operation control field 150 override the register value.No memory access instruction template - data transformation type operationIn the no memory access data transformation type operation 115 instruction template, beta field 154 is interpreted as data transformation field 154B, the content of which distinguishes which of multiple data transformations is to be performed (eg, no data transformation, deployment, broadcast).In the case of a class A memory access 120 instruction template, the alpha field 152 is interpreted as an eviction hint field 152B, the content of which distinguishes which of the eviction hints is to be used (in Figure 1A, transient 152B. Transient 152B.2 is designated for memory access transient 125 instruction template and memory access non-transient 130 instruction template respectively), while beta field 154 is interpreted as data manipulation field 154C, the content of which distinguishes multiple data manipulation operations to be performed (also called primitives) which of (eg, no manipulation; broadcast; up-conversion to source; and down-conversion to destination). The memory access 120 instruction template includes a scale field 160 and optionally a displacement field 162A or a displacement scale field 162B.Vector memory instructions utilize translation support to perform vector loads from and store to memory. As with conventional vector instructions, vector memory instructions transfer data from/to memory on a data-element-by-data-element basis, where the actual elements transferred are specified by the contents of the vector mask selected as the write mask.Memory Access Instruction Templates - TransientTransient data is data that may be reused quickly enough to benefit from caching. However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Memory Access Instruction Templates - Non-transientNon-transitory data is data that is unlikely to be reused soon enough to benefit from caching in the first level cache, and should be given priority for eviction. 
However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Instruction Templates for Category BIn the case of a class B instruction template, the alpha field 152 is interpreted as a write mask control (Z) field 152C, the content of which distinguishes whether the write mask controlled by the write mask field 170 should be merged or zeroed.In the case of a class B non-memory access 105 instruction template, the portion of the beta field 154 is interpreted as the RL field 157A, the contents of which distinguish which of the different types of augmentation operations are to be performed (eg, round 157A.1 and vector length (VSIZE) 157A.2 is designated for no memory access, write mask control, partial round control type operation 112 instruction template and no memory access, write mask control, VSIZE type operation 117 instruction template, respectively), and The remainder of the beta field 154 distinguishes which of the specified types of operations are to be performed. In the no memory access 105 instruction template, scale field 160, displacement field 162A, and displacement scale field 162B are absent.In the no memory access, write mask control, partial round control type operation 110 instruction template, the remainder of beta field 154 is interpreted as round operation field 159A, and exception reporting is disabled (the given instruction does not report any kind of floating-point exception flags and does not raise any floating-point exception handlers).Round operation control field 159A - like round operation control field 158, its content distinguishes which of the group of round operations to perform (eg, round up, round down, round towards zero, and round to the closest). Thus, the round operation control field 159A allows the rounding mode to be changed on a per instruction basis. In one embodiment of the invention in which the processor includes a control register for specifying the rounding mode, the contents of the round operation control field 150 override the register value.In the no memory access, write mask control, VSIZE type operation 117 instruction template, the remainder of the beta field 154 is interpreted as a vector length field 159B, the content of which distinguishes which of the multiple data vector lengths to be Execute (for example, 128, 256, or 512 bytes).In the case of the class B memory access 120 instruction template, the portion of the beta field 154 is interpreted as a broadcast field 157B, the content of which distinguishes whether a broadcast type data manipulation operation is to be performed, and the remainder of the beta field 154 is interpreted as a vector length field 159B . The memory access 120 instruction template includes a scale field 160 and optionally a displacement field 162A or a displacement scale field 162B.With respect to the generic vector friendly instruction format 100, a full opcode field 174 including a format field 140, a base operation field 142, and a data element width field 164 is shown. Although one embodiment is shown in which full opcode field 174 includes all of these fields, in embodiments where all of these fields are not supported, full opcode field 174 includes less than all of these fields. 
The full opcode field 174 provides an operation code (opcode).The extended operation field 150, data element width field 164, and write mask field 170 allow these characteristics to be specified on a per-instruction basis under the generic vector friendly instruction format.The combination of the write mask field and the data element width field creates a type of instruction because they allow masks to be applied based on different data element widths.The various instruction templates found within Category A and Category B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both. For example, a high-performance general-purpose out-of-order core intended for general-purpose computing may only support class B, and a core primarily intended for graphics and/or scientific (throughput) computing may only support class A, and intended for both The author's cores may support both classes (of course, cores with some mix of templates and instructions from both classes, but not all templates and instructions from both classes are within the scope of the present invention). Additionally, a single processor may include multiple cores, all of which support the same class, or where different cores support different classes. For example, in a processor with separate graphics and general-purpose cores, one of the graphics cores primarily intended for graphics and/or scientific computing may only support category A, while one or more of the general-purpose cores may be A high-performance general-purpose core that supports out-of-order execution and register renaming for class B general-purpose computations only. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class can also be implemented in other classes in different embodiments of the invention. Programs written in a high-level language will be placed (eg, just-in-time or statically compiled) into a number of different executable forms, including: 1) having only the class(s) of instructions supported by the target processor to use or 2) have alternative routines written using different combinations of all classes of instructions and have a form of control flow code that selects the routine to execute based on the instructions supported by the processor currently executing the code .VEX instruction formatVEX encoding allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 28 bits. The use of the VEX prefix provides three (or more) operand syntax. For example, previous two-operand instructions performed operations such as A=A+B, which overwrite the source operand. The use of the VEX prefix enables operands to perform non-destructive operations such as A=B+C.FIG. 2A illustrates an exemplary AVX instruction format that includes a VEX prefix 202 , a real opcode field 230 , a Mod R/M byte 240 , a SIB byte 250 , a displacement field 262 , and an IMM8 272 . FIG. 2B illustrates which fields from FIG. 2A make up the full opcode field 274 and the base operation field 241 . FIG. 2C illustrates which fields from FIG. 2A make up the register index field 244 .The VEX prefix (bytes 0-2) 202 is encoded in three-byte form. 
The first byte is the format field 290 (VEX byte 0, bits [7:0]), which contains the explicit C4 byte value (a unique value used to distinguish C4 instruction formats). The second-third bytes (VEX bytes 1-2) include a number of bit fields that provide specific capabilities. Specifically, the REX field 205 (VEX byte 1, bits [7-5]) includes the VEX.R bit field (VEX byte 1, bits [7]-R), the VEX.X bit field (VEX byte 1 , bits[6]-X) and the VEX.B bit field (VEX byte 1, bits[5]-B). The other fields of the instruction encode the three lower bits of the register index (rrr, xxx and bbb) as known in the art so that Rrrr, Xxxx and VEX.B can be formed by adding VEX.R, VEX.X and VEX.B Bbbb. The opcode map field 215 (VEX byte 1, bits [4:0]-mmmmm) includes the content to encode the implied leading opcode bytes. The W field 264 (VEX byte 2, bits [7]-W) is represented by the tag VEX.W and provides different functions depending on the instruction. The effects of VEX.vvvv 220 (VEX byte 2, bits[6:3]-vvvv) may include the following: 1) VEX.vvvv encodes the first source register operand, which is in reversed (1's complement) form specified, and valid for instructions with two or more source operands; 2) VEX.vvvv encodes the destination register operand, which is specified in 1's complement for some vector shifts; Or 3) VEX.vvvv does not encode any operands, this field is reserved and should contain 1111b. If the VEX.L 268 size field (VEX byte 2, bit[2]-L)=0, it indicates a 28-bit vector; if VEX.L=1, it indicates a 256-bit vector. Prefix encoding field 225 (VEX byte 2, bits[1:0]-pp) provides additional bits for the basic operation field.The real opcode field 230 (byte 3) is also referred to as the opcode byte. Specify the part of the opcode in this field.MOD R/M field 240 (byte 4) includes MOD field 242 (bits [7-6]), Reg field 244 (bits [5-3]), and R/M field 246 (bits [2-0]). The role of the Reg field 244 may include the following: encode the destination register operand or the source register operand (rrr in Rrrr), or be treated as an opcode extension and not used to encode any instruction operands. The role of the R/M field 246 may include encoding an instruction operand referencing a memory address or encoding a destination register operand or a source register operand.Scale, Index, Base (SIB) - The content of the scale field 250 (byte 5) includes SS252 (bits [7-6]), which is used for memory address generation. The contents of SIB.xxx 254 (bits [5-3]) and SIB.bbb 256 (bits [2-0]) have been previously quoted with respect to register indices Xxxx and Bbbb.Offset field 262 and immediate field (IMM8) 272 contain data.Exemplary Register ArchitectureFIG. 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the illustrated embodiment, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. Overlay the lower order 256 bits of the lower 6 zmm registers on registers ymm0-15. Overlay the lower order 128 bits of the lower 6 zmm registers (the lower order 128 bits of the ymm registers) on registers xmm0-15.General Purpose Registers 325 - In the illustrated embodiment, there are sixteen 64-bit general purpose registers that are used in conjunction with existing x86 addressing modes to address memory operands. 
These registers are labeled by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 to R15.A scalar floating point stack register file (x87 stack) 345 on which is aliased the MMX packed integer plane register file 350 - in the illustrated embodiment, the x87 stack is used to use the x87 instruction set extensions to 32/64 /80-bit floating-point data to perform eight-element stacks of scalar floating-point operations; while MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between MMX and XMM registers.Alternative embodiments of the present invention may use wider or narrower registers. Additionally, alternative embodiments of the present invention may use more, fewer or different register files and registers.Exemplary Core Architecture, Processor and Computer ArchitectureProcessor cores may be implemented in different ways, for different purposes, and in different processors. For example, implementations of such cores may include: 1) general purpose in-order cores intended for general purpose computing; 2) high performance general purpose out-of-order cores designed for general purpose computing; 3) primarily intended for graphics and and/or special purpose cores for scientific (throughput) computing. Implementations of different processors may include: 1) CPUs including one or more general purpose in-order cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2 ) includes a coprocessor of one or more special purpose cores primarily intended for graphics and/or science (throughput). Such different processors lead to different computer system architectures, which may include: 1) a coprocessor on a chip separate from the CPU; 2) a coprocessor on a separate die in the same package as the CPU; 3) A coprocessor on the same die as the CPU (in this case, such a coprocessor is sometimes called special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or special and 4) a system-on-a-chip, which may include the described CPU (sometimes referred to as application core(s) or application processor(s)) as described, on the same die, processor and additional functionality. An exemplary core architecture is described next, followed by an exemplary processor and computer architecture. Circuits (units) including exemplary cores, processors, etc. are detailed herein.Exemplary Core Architecture4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with an embodiment of the present invention. 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor in accordance with an embodiment of the present invention. The solid-line boxes in Figures 4A-B illustrate in-order pipelines and in-order cores, while optional additions of dashed-line boxes illustrate register renaming, out-of-order issue/execution pipelines and cores. 
Considering that the ordered aspect is a subset of the unordered aspect, the unordered aspect will be described.In Figure 4A, the processor pipeline 400 includes an instruction fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a rename stage 410, a scheduling (also known as dispatch or issue) stage 412, a register read/ Memory read stage 414 , execute stage 416 , write back/memory write stage 418 , exception handling stage 422 and commit stage 424 .FIG. 4B illustrates processor core 490 including front end unit 430 coupled to execution engine unit 450 , and both execution engine unit 450 and front end unit 430 are coupled to memory unit 470 . Cores 490 may be reduced instruction set computing (RISC) cores, complex instruction set computing (CISC) cores, very long instruction word (VLIW) cores, or mixed or alternative core types. As yet another option, the cores 490 may be special purpose cores such as, for example, network or communication cores, compression engines, coprocessor cores, general purpose computing graphics processing unit (GPGPU) cores, graphics cores, and the like.Front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434 that is coupled to an instruction translation lookaside buffer (TLB) 436 that is coupled to an instruction translation lookaside buffer (TLB) 436 Instruction fetch unit 438 , which is coupled to decode unit 440 . Decode unit 440 (or decoder) may decode the instruction and generate as output one or more micro-ops, microcode entry points, micro-instructions, other instructions or other control signals that are decoded from the original instruction, or otherwise reflect or originate from the original instruction. The decoding unit 440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), and the like. In one embodiment, core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (eg, in decode unit 440 or otherwise within front end unit 430). Decode unit 440 is coupled to rename/distributor unit 452 in execution engine unit 450 .Execution engine unit 450 includes a rename/distributor unit 452 coupled to a fallback unit 454 and a set of one or more scheduler units 456 . Scheduler unit(s) 456 represents any number of different schedulers, including reservation stations, central instruction windows, and the like. Scheduler unit(s) 456 are coupled to physical register file unit(s) 458 . Each of the physical register file unit(s) 458 represents one or more physical register files, wherein different physical register files store one or more different data types, such as scalar integer, scalar floating point, packed integer , packed floating point, vector integer, vector floating point, state (eg, instruction pointer which is the address of the next instruction to be executed), etc. In one embodiment, physical register file unit(s) 458 includes vector register units and scalar register units. These register units can provide architectural vector registers, vector mask registers, and general purpose registers. 
Physical register file unit(s) 458 is overlaid by fallback unit 454 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (eg, using reorder buffer(s) and ( one or more) fallback register files; use of future file(s), history buffer(s), and fallback register file(s); use of register maps and register pools, etc. ). Fallback unit 454 and physical register file unit(s) 458 are coupled to execution cluster(s) 460 . Execution cluster(s) 460 includes a set 462 of one or more execution units and a set 464 of one or more memory access units. Execution unit 462 may perform various operations (eg, shift, add, subtract, multiply) and on various types of data (eg, scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include multiple execution units dedicated to a particular function or set of functions, other embodiments may include only one execution unit or multiple execution units all of which perform all functions. Scheduler unit(s) 456, physical register file unit(s) 458, and execution cluster(s) 460 are shown as possibly plural as some embodiments create Separate pipelines for certain types of data/operations (e.g. scalar integer pipeline, scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline and/or memory access pipeline, each with its own Scheduler unit, physical register file unit(s) and/or execution cluster - and in the case of a separate memory access pipeline, the execution cluster implementing where only that pipeline has memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be issued/executed out-of-order and the rest issued/executed in-order.The set of memory access units 464 is coupled to a memory unit 470 that includes a data TLB unit 472 coupled to a data cache unit 474 , which is coupled to a level 2 (L2) cache unit 476 . In one exemplary embodiment, memory access unit 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to data TLB unit 472 in memory unit 470 . Instruction cache unit 434 is further coupled to level 2 (L2) cache unit 476 in memory unit 470 . L2 cache unit 476 is coupled to one or more other levels of cache and ultimately to main memory.As an example, an exemplary register renaming, out-of-order issue/execution core architecture may implement pipeline 400 as follows: 1) instruction fetch 438 performs instruction fetch and length decode stages 402 and 404; 2) decode unit 440 performs decode stage 406; 3 ) rename/allocator unit 452 performs allocation phase 408 and rename phase 410; 4) scheduler unit(s) 456 performs scheduling phase 412; 5) physical register file unit(s) 458 and memory Unit 470 performs register read/memory read phase 414; execution cluster 460 performs execution phase 416; 6) memory unit 470 and physical register file unit(s) 458 perform write back/memory write phase 418; 7) Various units may be involved in the exception handling phase 422; and 8) the rollback unit 454 and the physical register file unit(s) 458 perform the commit phase 424.The core 490 may support one or more instruction sets (eg, the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies, Sunnyvale, Calif.; Sunnyvale, Calif. 
The ARM instruction set of ARM Holdings (with optional additional extensions such as NEON), including the instruction(s) described in this article. In one embodiment, core 490 includes logic to support packed data instruction set extensions (eg, AVX1, AVX2), thereby allowing packed data to be used to perform operations used by many multimedia applications.It should be understood that cores may support multithreading (performing two or more parallel collections of operations or threads) and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (in a single A physical core provides a logical core for each of the threads, the physical core is simultaneously multi-threaded) or a combination thereof (eg, thereafter such as in Intel® Hyper-Threading Technology for time-sliced fetch and decode and simultaneous multi-threading). thread).Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instruction and data, such as For example, a level 1 (L1) internal cache, or a multi-level internal cache. In some embodiments, the system may include a combination of internal cache and external cache external to the core and/or processor. Alternatively, all caches can be external to the core and/or processor.Specific Exemplary In-Order Core Architecture5A-B illustrate block diagrams of a more specific example in-order core architecture, which would be one of several logic blocks in a chip (including other cores of the same type and/or different types). Depending on the application, the logic blocks communicate with some fixed function logic, memory I/O interfaces, and other necessary I/O logic through a high bandwidth interconnect network (eg, a ring network).5A is a block diagram of a single processor core along with its connection to the on-die interconnect network 502 and its local subset 504 of level 2 (L2) cache, according to an embodiment of the present invention. In one embodiment, instruction decoder 500 supports the x86 instruction set with packed data instruction set extensions. L1 cache 506 allows low latency access to cache memory into scalar and vector units. Although in one embodiment (to simplify the design), scalar unit 508 and vector unit 510 use separate sets of registers (scalar registers 512 and vector registers 514, respectively), and the data transferred between them is written to memory, and is then read back from the level 1 (L1) cache 506, but alternative embodiments of the present invention may use a different approach (eg, use a single register set or include allowing data to be transferred between two register files without being communication path for writing and reading back).The local subset 504 of the L2 cache is part of the global L2 cache, which is divided into separate local subsets, one for each processor core. Each processor core has a direct access path to its own local subset 504 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 504 and can be quickly accessed in parallel with other processor cores accessing their own local L2 cache subset. Data written by a processor core is stored in its own subset of L2 cache 504 and flushed from other subsets if necessary. 
The ring network ensures consistency for shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. In some embodiments, each ring data path is 1024 bits wide in each direction.5B is an expanded view of a portion of the processor core in FIG. 5A, according to an embodiment of the present invention. FIG. 5B includes the L1 data cache 506A portion of the L1 cache 504 and more details about the vector unit 510 and the vector registers 514 . Specifically, vector unit 510 is a 16-wide vector processing unit (VPU) (see 16-wide ALU 528) that executes one or more of integer, single-precision floating-point, and double-precision floating-point instructions. The VPU supports provisioning of register inputs with provisioning unit 520 , digital conversion by digital conversion units 522A-B, and duplication for memory inputs by duplication unit 524 .Processor with integrated memory controller and graphics controller6 is a block diagram of a processor 600, which may have more than one core, may have an integrated memory controller, and may have an integrated graphics controller, according to an embodiment of the present invention. The solid-line box in FIG. 6 illustrates the processor 600 having a single core 602A, a system agent 610, a set of one or more bus controller units 616, while the optional addition of dashed boxes illustrates having multiple cores 602A -N, a set of one or more integrated memory controller units 614 in the system agent unit 610 and a replacement processor 600 for the special purpose logic 608.Thus, different implementations of the processor 600 may include: 1) a CPU with special purpose logic 608 that is integrated graphics and/or scientific (throughput) logic (which may include one or more cores) , and the cores 602A-N are one or more general-purpose cores (eg, general-purpose in-order cores, general-purpose out-of-order cores, a combination of the two); 2) a coprocessor having cores 602A-N, the cores 602A -N is a large number of special purpose cores primarily intended for graphics and/or science (throughput); and 3) a coprocessor with cores 602A-N, which are a large number of general purpose in-order cores. Thus, processor 600 may be a general purpose processor, coprocessor or special purpose processor such as, for example, a network or communications processor, a compression engine, a graphics processor, a GPGPU (General Purpose Graphics Processing Unit), a high throughput multi-integrated core ( MIC) coprocessors (including 30 or more cores), embedded processors, etc. A processor may be implemented on one or more chips. Processor 600 may be part of and/or may be implemented on one or more substrates using any of a variety of processing technologies such as, for example, BiCMOS, CMOS, or NMOS .The memory hierarchy includes one or more levels of cache within cores 604A-N, a set of one or more shared cache units 606 , and external memory (not shown) coupled to a set of integrated memory controller units 614 . . The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4) or other level caches, last level caches (LLC) and / or a combination thereof. 
Although in one embodiment the ring-based interconnect unit 612 interconnects the integrated graphics logic 608, the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit(s) 614, alternative implementations Examples may use any number of known techniques for interconnecting such cells. In one embodiment, coherency is maintained between one or more cache units 606 and cores 602-A-N.In some embodiments, one or more of the cores 602A-N are capable of multithreading. System agent 610 includes those components that coordinate and operate cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include the logic and components required to regulate the power states of the cores 602A-N and integrated graphics logic 608 . The display unit is used to drive one or more externally connected displays.The cores 602A-N may be homogeneous or heterogeneous in terms of architectural instruction sets; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of only Execute a subset of this instruction set or a different instruction set.Exemplary Computer Architecture7-10 are block diagrams of exemplary computer architectures. For laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network equipment, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics equipment, video game equipment Other system designs and configurations known in the art of , set-top boxes, microcontrollers, cellular telephones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices having the ability to incorporate processors and/or other execution logic as disclosed herein are generally suitable.Referring now to FIG. 7, shown is a block diagram of a system 700 according to one embodiment of the present invention. System 700 may include one or more processors 710 , 715 coupled to controller hub 720 . In one embodiment, controller hub 720 includes graphics memory controller hub (GMCH) 790 and input/output hub (IOH) 750 (which may be on separate chips); GMCH 790 includes memory 740 and coprocessor 745 Coupled memory and graphics controller; IOH 750 couples input/output (I/O) devices 760 to GMCH 790. Alternatively, one or both of the memory and graphics controller are integrated into the processor (as described herein), the memory 740 and coprocessor 745 are directly coupled to the processor 710, and the controller hub 720 is coupled to IOH 750 in a single chip.Optional properties of the additional processor 715 are indicated in FIG. 7 with dashed lines. Each processor 710 , 715 may include one or more of the processing cores described herein, and may be some version of processor 600 .Memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of both. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multidrop bus, such as a front side bus (FSB), point-to-point interface, or similar connection 795.In one embodiment, coprocessor 745 is a special purpose processor such as, for example, a high throughput MIC processor, network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, and the like. 
In one embodiment, the controller hub 720 may include an integrated graphics accelerator.There may be many differences between physical resources 710, 7155 in terms of a range of value metrics including architectural characteristics, micro-architectural characteristics, thermal characteristics, power consumption characteristics, and the like.In one embodiment, processor 710 executes instructions that control general types of data processing operations. Embedded within the instructions may be coprocessor instructions. The processor 710 identifies these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745 . Accordingly, processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) to coprocessor 745 over a coprocessor bus or other interconnect. The coprocessor(s) 745 accept and execute the received coprocessor instructions.Referring now to FIG. 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in FIG. 8 , the multiprocessor system 800 is a point-to-point interconnect system and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850 . Each of processors 870 and 880 may be some version of processor 600 . In one embodiment of the invention, processors 870 and 880 are processors 710 and 715, respectively, while coprocessor 838 is coprocessor 745. In another embodiment, processors 870 and 880 are processor 710, coprocessor 745, respectively.Processors 870 and 880 are shown to include integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes point-to-point (P-P) interfaces 876 and 878 as part of its bus controller unit; similarly, second processor 880 includes P-P interfaces 886 and 888 . The processors 870 , 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878 , 888 . As shown in Figure 8, IMCs 872 and 882 couple the processors to respective memories, namely memory 832 and memory 834, which may be portions of main memory locally attached to the respective processors.Processors 870, 880 may each exchange information with chipset 890 via separate P-P interfaces 852, 854 using point-to-point interface circuits 876, 894, 886, 898. Chipset 890 may optionally exchange information with coprocessor 838 via high performance interface 892 . In one embodiment, coprocessor 838 is a special purpose processor such as, for example, a high throughput MIC processor, network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.A shared cache (not shown) can be included in either processor or in addition to both processors, and can also be connected to the processors via a PP interconnect so that if the processor is placed in a low power mode, then Local cache information for either or both processors can be stored in the shared cache.Chipset 890 may be coupled to first bus 816 via interface 896 . In one embodiment, the first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another I/O interconnect bus, although the scope of the invention is not so limited.As shown in FIG. 8 , various I/O devices 814 may be coupled to the first bus 816 along with a bus bridge 818 that couples the first bus 816 to the second bus 820 . 
In one embodiment, a processor such as a co-processor, a high-throughput MIC processor, a GPGPU, an accelerator (such as, for example, a graphics accelerator or a digital signal processing (DSP) unit), a field programmable gate array, or any other processor One or more additional processors 815 are coupled to the first bus 816 . In one embodiment, the second bus 820 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 820 including, for example, a keyboard and/or mouse 822 , communication devices 827 , and other large devices such as disk drives or other large devices that may include instructions/code and data 830 . Storage unit 828 such as a capacity storage device. Additionally, audio I/O 824 may be coupled to second bus 816 . Note that other architectures are also possible. For example, instead of the point-to-point architecture of FIG. 8, the system may implement a multi-drop bus or other such architecture.Referring now to FIG. 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in FIGS. 8 and 9 bear like reference numerals, and certain aspects of FIG. 8 have been omitted from FIG. 9 to avoid obscuring other aspects of FIG. 9 .9 illustrates that processors 870, 880 may include integrated memory and I/O control logic ("CL") 972 and 982, respectively. Thus, the CLs 972, 982 include integrated memory controller units and include I/O control logic. 9 illustrates that not only memory 832, 834 is coupled to CL 872, 882, but I/O device 914 is also coupled to control logic 872, 882. Conventional I/O devices 915 are coupled to chipset 890 .Referring now to FIG. 10, shown is a block diagram of an SoC 1000 in accordance with an embodiment of the present invention. Similar elements in Figure 6 bear similar reference numerals. Also, the dotted box is an optional feature on more advanced SoCs. In Figure 10, interconnect unit(s) 1002 are coupled to: an application processor 1010, which includes one or more cores 102A-N, cache units 604A-N, and shared cache unit(s) set of cache units 606; system proxy unit 610; bus controller unit(s) 616; integrated memory controller unit(s) 614; set of one or more coprocessors 1020, the one The coprocessor(s) 1020 may include integrated graphics logic, image processors, audio processors, and video processors; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; Display unit 1040 for one or more external displays. In one embodiment, coprocessor(s) 1020 comprise special purpose processors such as, for example, network or communication processors, compression engines, GPGPUs, high throughput MIC processors, embedded processors, and the like.Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation methods. Embodiments of the present invention can be implemented as a computer program or program code executing on a programmable system including at least one processor, a storage system (including volatile and nonvolatile memory and/or storage elements) ), at least one input device, and at least one output device.Program code, such as code 830 illustrated in Figure 8, may be applied to input instructions to perform the functions described herein and to generate output information. The output information can be applied to one or more output devices in a known manner. 
For the purposes of this application, a processing system includes any system having a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.The program code may be implemented in a high-level procedural programming language or an object-oriented programming language to communicate with the processing system. The program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language can be a compiled or interpreted language.One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium representing various logic within a processor that, when read by a machine, cause the machine to be fabricated to perform the techniques described herein logic. Such representations, referred to as "IP cores," can be stored on tangible machine-readable media and supplied to various customers or manufacturing facilities for loading into the manufacturing machines that actually make the logic or processors.Such machine-readable storage media may include, without limitation: non-transitory tangible devices of articles manufactured or formed by machines or equipment, including storage media, such as hard disks, any other type of magnetic disks (including floppy disks, optical disks) , compact disk read only memory (CD-ROM), compact disk rewritable (CD-RW) and magneto-optical disk), semiconductor devices, such as read only memory (ROM), random access memory (RAM) (such as dynamic random access memory) Access Memory (DRAM), Static Random Access Memory (SRAM)), Erasable Programmable Read Only Memory (EPROM), Flash Memory, Electrically Erasable Programmable Read Only Memory (EEPROM), Phase Change Memory ( PCM), magnetic or optical cards, or any other type of medium suitable for storing electronic instructions.Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as a hardware description language (HDL), that define the structures, Circuit, device, processor and/or system features. Such an embodiment may also be referred to as a program product.Simulation (including binary translation, code warping, etc.)In some cases, an instruction translator may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction translator may translate the instruction (eg, using static binary translation, dynamic binary translation including dynamic compilation), warp, emulate, or otherwise convert the instruction into one or more other instructions to be processed by the core. Instruction translators can be implemented in software, hardware, firmware, or a combination thereof. The instruction translator may be on-processor, off-processor, or partially on-processor and partially off-processor.11 is a block diagram comparing the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to an embodiment of the present invention. In the illustrated embodiment, the instruction translator is a software instruction translator, although the instruction translator may alternatively be implemented in software, firmware, hardware, or various combinations thereof. 
11 shows that a program in a high-level language 1102 can be compiled using a first compiler 1104 to generate a first binary code (eg, x86) 1106 that can Natively executed by the processor 1116 having at least one first instruction set core. In some embodiments, a processor 1116 having at least one first instruction set core represents substantially the same performance as an Intel processor having at least one x86 instruction set core by mutually compatible execution or otherwise processing Any processor capable of: (1) a substantial portion of the instruction set of an Intel x86 instruction set core or (2) an object code version of an application or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve basically the same result as an Intel processor with at least one x86 instruction set core. The first compiler 1104 represents a compiler operable to generate binary code 1106 (eg, object code) of the first instruction set that can be processed with or without additional linking In the case of processing is executed on processor 1116 having at least one first instruction set core. Similarly, FIG. 11 shows that a program in the form of a high-level language 1102 can be compiled using an alternative instruction set compiler 1108 to generate an alternative instruction set binary 1110 that can be generated natively by not having at least one first An instruction set core 1114 processor to execute (eg, a processor with a core executing the MIPS instruction set of MIPS Technologies, Sunnyvale, CA and/or a core executing the ARM instruction set of ARM Holdings, Sunnyvale, CA) . An instruction converter 1112 is used to convert the first binary code 1106 into code that can be natively executed by the processor 1114 without the first instruction set core. This translated code is unlikely to be the same as the alternate instruction set binary code 1110, since instruction converters capable of doing so are difficult to manufacture; however, the translated code will perform general operations and be composed of instructions from the alternate instruction set. Thus, instruction translator 1112 represents software, firmware, hardware, or other electronic device that allows a processor or other electronic device that does not have a first instruction set processor or core to execute first binary code 1106 through emulation, emulation, or any other process. combination.Apparatus and method for conditional quality of service in a processorQuality of service is an important mechanism for implementing priority-based fairness in computer systems, and can be achieved by allocating dedicated paths or slots in shared buffers and queues, packet-based virtual channels, and the like. Quality of service hooks (hooks) are used today in caches, memory subsystem queues, memory controllers, and switch cards. However, with a few notable exceptions, quality of service as a feature finds little traction when it comes to real-world deployments in the industry.One of the reasons for this is that in real world deployments it is often challenging to know, for example, that process X will always require dedicated bandwidth, or that thread Y will always require a dedicated set of queue slots in controller queue Z. This is because these processes or threads often rely on external events to trigger the need for dedicated resources.For example, process X may be implementing functionality that involves receiving high-priority client data streams from a network card. 
When this data arrives, it is critical to process the data with the highest priority and add it to the database. However, such high-priority bursts of customer data may arrive only once an hour. During the rest of the time, process X works in the background to add more data from disk and build some indexes against the same dataset. In this case, the process of doing all of the above is the same, but the key insight is that only data from the NIC has high priority and only arrives once an hour for a few minutes. A static QoS assignment for process X to have dedicated paths and queue slots in the system would be wasteful, inefficient, and prevent other processes from doing useful work.Embodiments of the present invention include conditional quality of service (QoS) techniques that alleviate this problem, wherein quality of service rules are applied only when a certain condition or set of conditions is met. In this case, the rule could be a burst of network traffic above 1 GB/s for X (ie, events per hour). When this happens, apply QoS rules to Process X. Thus, using an intelligently tailored set of rules, the detected use of a first processing resource by a process may alter the allocation of a second resource to that process. Dynamically adjusting processing resources in this manner reduces the likelihood that one particular resource will become a performance bottleneck for the overall process.12A-B illustrate a processor 1250 for implementing the techniques described herein. In one embodiment, the condition monitoring circuitry/logic 1200 of the processor 1250 performs conditional QoS operations according to the set of QoS rules 1205 . The conditional QoS circuitry/logic may be implemented in hardware (eg, control registers/MSR sets) or using a combination of hardware and software/firmware.The illustrated conditional QoS circuitry/logic 1200 includes condition monitoring circuitry/logic 1201 for monitoring resource usage for multiple processes, threads, logical processors, and/or other logical groupings of instructions. Thus, although the following discussion focuses on the allocation of resources to "processes," the underlying principles of the present invention may be applied to any logical grouping of program code. For illustrative purposes, two processes are shown in Figures 12A-B: Process A and Process B.Monitoring circuitry/logic 1201 may include, for example, one or more registers to specify the resources to be monitored for each process. These resources may include, but are not limited to, memory bandwidth usage, cache footprint (eg, within LLC, L2 cache, etc.), and network bandwidth. Various other metrics can be tracked while still complying with the basic principles of the present invention.In one embodiment, a process identifier, such as a process address space ID (PASID), is associated with each process and is used to track resource usage for that process. Additionally, a class of service (CLOS) value can be associated with a particular process (eg, Process A) by mapping the process's PASID value to a CLOS value. In some embodiments, process groups may be assigned a Resource Monitoring ID (RMID) value that may be associated with a particular class of service that is conditionally applied to each process group.One embodiment of conditional QoS circuitry/logic 1200 also includes conditional QoS enforcement circuitry/logic 1202 for enforcing resource usage constraints established by QoS rules 1205 and designated service classes. 
In particular, enforcement circuitry/logic 1202 may evaluate resource usage data tracked by monitoring circuitry/logic 1201 to present enforcement decisions in conjunction with QoS rules 1205 (and potentially other values, such as class of service). For example, if Process A has reached the maximum L3 cache occupancy (eg, 1/3 of the cache), the QoS enforcement circuitry/logic 1202 will require Process A to evict the L3 cache entry before adding a new cache entry ( For example, based on least recently used (LRU) or other cache management strategies).In one embodiment, conditional QoS circuitry/logic 1200 operates according to a set of conditional QoS rules 1205 that describe quality of service monitoring for specific process IDs and enforcement of specific resources (eg, cache, memory bandwidth, etc.) . In this case, a number of rules may be defined to identify when certain service level agreements (SLAs) for other processor resources must be changed based on telemetry data obtained from other processor resources (eg, obtained by condition monitoring circuitry 1201 ). In some embodiments, SLA is just another term for a class of service (CLOS) associated with a particular process. Alternatively, the SLA can be different from the defined service class and/or can be assigned to a specific CLOS. In one embodiment, a first QoS rule 1205 that matches current performance data associated with a set of rules (for a particular resource and PASID) is used to identify the SLA to be enforced.In one embodiment, quality of service enforcement circuitry/logic 1202 interacts with condition monitoring circuitry/logic 1201 to apply a specific SLA for a particular resource and PASID when a particular rule is triggered from the set of QoS rules 1205. One embodiment of QoS enforcement circuitry/logic generates an interrupt (eg, a software interrupt to the software stack) if a particular SLA cannot be enforced.The illustrated processor 1250 includes a network interface controller 1230 having different amounts of network bandwidth allocated to Process A (10 MBps in Figure 12A) and Process B (100 MBps in Figure 12A). Additionally, an indication of the number of LLC paths 1220 allocated to Process A and Process B is shown. Both the network interface controller bandwidth and the number of allocated cache ways can be dynamically adjusted for each process based on currently monitored conditions and QoS rules 1205 being implemented. Two cores 1210-1211 are also shown in Figures 12A-B, although the basic principles of the invention may be implemented on a processor with any number of cores.As indicated by the different modes in Figure 12A, process A executing on core 1211 consumes 100 Mbps of bandwidth on network interface controller 1230 and is allocated three LLC cache paths. Process B executing on core 1210 consumes 10MBps of NIC 1230 and has two LLC cache paths allocated. In this example, the QoS rules 1205 specify that the number of LLC paths 1220 allocated to a process depends on the traffic volume of the process measured at the integrated network controller 1230. For example, conditional telemetry data collected by QoS monitoring circuit 1201 may indicate that traffic used by Process B jumps from 10MBps in Figure 12A to 70MBps in Figure 12B. 
The specific rules 1209 shown in these two figures specify a threshold bandwidth value of 50 MBps above which a third cache way will be allocated to Process B, as indicated in Figure 12B.Similarly, multiple rules for other resources may be triggered by increasing network traffic (and/or other types of traffic) to a specified threshold (or multiple different thresholds). Thus, with these embodiments, different resources may be dynamically adapted to Process B's new SLA depending on detected changes to Process B's resource consumption.Figure 13 illustrates additional details of one embodiment of processor 1250, including new conditional circuitry/logic for registering and implementing quality of service rules 1320 including telemetry-based rules. The various components illustrated in Figure 13 may be implemented as circuits or using a combination of circuits and software/firmware.Resource monitor 1305 (which is an SLA-based monitor in one embodiment) monitors various processor resources as specified by current rule set 1320, including IO resources 1331 (eg, IO bandwidth consumed by each process), memory Resources 1332 (eg, memory bandwidth and/or space consumed by each process), and accelerator resources 1333 (eg, a fraction of the accelerator consumed by the process). These specific processor resources are highlighted for illustration purposes; various other forms of processor resources can also be monitored.In one embodiment, the resource monitor 1305 may focus its monitoring on a specific set of resources according to the current set of rules 1320 (eg, monitor memory bandwidth when a specific rule is based on memory bandwidth). Although illustrated as a single element in FIG. 13 , resource monitor 1305 may include differential circuits for monitoring different sets of resources, some of which may be implemented in different areas of processor 13 .Resource monitor 1305 may trigger a response from implementing circuit/logic 1302 when a rule-based threshold has been reached. For example, resource monitor 1305 may notify implementing circuitry/logic 1302 when resource usage for a particular process exceeds or falls below a threshold specified in rules 1320. Alternatively or additionally, implementing circuitry/logic 1302 may operate independently or semi-independently from resource monitor 1305 to dynamically adjust resource allocation based on current set of rules 1320 (e.g., when a process's network bandwidth exceeds a threshold) , increasing the number of L3 cache ways). For example, implementation circuitry 1302 may read various processor counters that reflect the current resource usage of a process, and implement any corrective actions to comply with current set of rules 1320 . For example, in Figure 13, implementing circuitry/logic 1302 may adjust one or more of compute engines 1341, accelerators, accelerators based on notifications received from resource monitor 1305 and/or resource monitoring data read directly from registers of the processor 1342 and the process utilization of the network interface controller 1343.In one embodiment, interface manager 1315 provides access to rulesets 1320 , resource monitors 1305 , and/or implementation circuits/logic 1302 . For example, privileged software (eg, an OS or hypervisor) can update the rule set 1320 to monitor/enforce new resources and/or new resource variables or thresholds. Interface manager 1315 may expose a programming interface to software to perform these updates.An example set of rules 1321 is shown in FIG. 
13 and includes a table data structure with a separate row associated with each rule and a value to specify a procedure address space ID (PASID) (identifying a procedure), a rule And a separate column for the SLA definition associated with the rule.In one implementation, each rule includes one or more of the following elements:a. The Process Address Space ID (PASID) associated with the application or service to which the conditional SLA is attached.b. Unique Rule Identifier (RID) used to identify the rule.c. A description of the rule. In one embodiment, a rule may refer to any processor facility used to monitor processor activity, including but not limited to sets of performance counters (eg, to count LLC misses, NIC bandwidth, memory bandwidth, etc.). Rules can specify specific resource thresholds and/or more complex relationships between resources. For example, a rule may specify allocation or deallocation (or vice versa) that is performed when network interface bandwidth drops below a specified multiple of memory bandwidth. The rules may also include Boolean expressions defined using performance counters (eg, X=0 if the NIC bandwidth is less than 50MBps; and X=1 if the NIC bandwidth is greater than or equal to 50MBps).d. Service Level Agreement definitions associated with the rules. The service level agreement may include a specific resource with a specific resource ID, and the amount of that resource to be allocated to the corresponding PASID when the rule is triggered and selected. In one embodiment, the SLA definition may also include a priority for that particular rule.In one embodiment, resource monitor 1305 is responsible for enforcing rules 1320 and identifying those rules that need to be implemented by implementing circuit/logic 1302. In one implementation, every N time units (N may be configured using control registers such as MSR or any other CPU interface), resource monitor 1305 and/or implementing circuit/logic 1302 perform the following operations:a. Execute each of the rules 1320 to collect the required data from the relevant performance counters. Then, based on the results, a Boolean formula is executed to generate a Boolean value associated with each rule.b. Search the rule set 1321 and select the first rule or the first rule set with a Boolean value set to 1. A priority scheme may be implemented to determine the order of searches (eg, search rules associated with higher priority PASID values take precedence over search rules with lower priority PASID values). If priority is not used, registration order (eg, the order in which each rule is inserted into rule set 1321) may be followed.c. Per each of the selected Boolean rules, the resource monitor 1305 and/or the implementing circuit/logic 1302 configure the SLAs registered for the particular PASID and selected resource. For example, implementing circuitry/logic 1302 may configure relevant resources to ensure compliance with the rule. In one embodiment, if the SLA cannot be achieved, the implementing circuitry/logic 1302 may generate a software interrupt to notify that a particular SLA has not been configured due to lack of resources. For example, an interrupt may be generated if there is an insufficient number of LLC lanes to be allocated to the PASID or if there is insufficient I/O bandwidth.A method according to one embodiment of the present invention is illustrated in FIG. 14 . 
The method can be implemented on the above architectures, but is not limited to any particular architecture.At 1401, for each process, one or more rules are registered within a rule set. At least some of the rules specify conditional resource allocations for associated processes. As described above, for example, a conditional resource allocation may specify one or more thresholds to trigger a new resource allocation.At 1402, resource usage values are monitored for each process as specified in the rule set. For example, the set of counters may be configured to count based on the resource usage of each respective process. In one embodiment, telemetry data from remote devices may be collected and used in conjunction with local resource usage values as input to a set of rules.At 1403, a determination is made as to whether a rule-based threshold has been reached. For example, one or more counter values may be compared to one or more threshold values. When the threshold is reached, at 1404, one or more resource allocations associated with the process can be adjusted according to associated rules. For example, for a given process or group of processes, if the resource usage associated with the first resource exceeds a threshold, the resource allocation for the second resource is updated for that process or group of processes.Apparatus and method for conditional quality of service in a processorQuality of service is an important mechanism for implementing priority-based fairness in computer systems, and can be achieved by allocating dedicated paths or slots in shared buffers and queues, packet-based virtual channels, and the like. Quality of service hooks (hooks) are used today in caches, memory subsystem queues, memory controllers, and switch cards. However, with a few notable exceptions, quality of service as a feature finds little traction when it comes to real-world deployments in the industry.One of the reasons for this is that in real world deployments it is often challenging to know, for example, that process X will always require dedicated bandwidth, or that thread Y will always require a dedicated set of queue slots in controller queue Z. This is because these processes or threads often rely on external events to trigger the need for dedicated resources.For example, process X may be implementing functionality that involves receiving high-priority client data streams from a network card. When this data arrives, it is critical to process the data with the highest priority and add it to the database. However, such high-priority bursts of customer data may arrive only once an hour. During the rest of the time, process X works in the background to add more data from disk and build some indexes against the same dataset. In this case, the process of doing all of the above is the same, but the key insight is that only data from the NIC has high priority and only arrives once an hour for a few minutes. A static QoS assignment for process X to have dedicated paths and queue slots in the system would be wasteful, inefficient, and prevent other processes from doing useful work.Embodiments of the present invention include conditional quality of service (QoS) techniques that alleviate this problem, wherein quality of service rules are applied only when a certain condition or set of conditions is met. In this case, the rule could be a burst of network traffic above 1 GB/s for X (ie, events per hour). When this happens, apply QoS rules to Process X. 
Thus, using an intelligently tailored set of rules, the detected use of a first processing resource by a process may alter the allocation of a second resource to that process. Dynamically adjusting processing resources in this manner reduces the likelihood that one particular resource will become a performance bottleneck for the overall process.12A-B illustrate a processor 1250 for implementing the techniques described herein. In one embodiment, the condition monitoring circuitry/logic 1200 of the processor 1250 performs conditional QoS operations according to the set of QoS rules 1205 . The conditional QoS circuitry/logic may be implemented in hardware (eg, control registers/MSR sets) or using a combination of hardware and software/firmware.The illustrated conditional QoS circuitry/logic 1200 includes condition monitoring circuitry/logic 1201 for monitoring resource usage for multiple processes, threads, logical processors, and/or other logical groupings of instructions. Thus, although the following discussion focuses on the allocation of resources to "processes," the underlying principles of the present invention may be applied to any logical grouping of program code. For illustrative purposes, two processes are shown in Figures 12A-B: Process A and Process B.Monitoring circuitry/logic 1201 may include, for example, one or more registers to specify the resources to be monitored for each process. These resources may include, but are not limited to, memory bandwidth usage, cache footprint (eg, within LLC, L2 cache, etc.), and network bandwidth. Various other metrics can be tracked while still complying with the basic principles of the present invention.In one embodiment, a process identifier, such as a process address space ID (PASID), is associated with each process and is used to track resource usage for that process. Additionally, a class of service (CLOS) value can be associated with a particular process (eg, Process A) by mapping the process's PASID value to a CLOS value. In some embodiments, process groups may be assigned a Resource Monitoring ID (RMID) value that may be associated with a particular class of service that is conditionally applied to each process group.One embodiment of conditional QoS circuitry/logic 1200 also includes conditional QoS enforcement circuitry/logic 1202 for enforcing resource usage constraints established by QoS rules 1205 and designated service classes. In particular, enforcement circuitry/logic 1202 may evaluate resource usage data tracked by monitoring circuitry/logic 1201 to present enforcement decisions in conjunction with QoS rules 1205 (and potentially other values, such as class of service). For example, if Process A has reached the maximum L3 cache occupancy (eg, 1/3 of the cache), the QoS enforcement circuitry/logic 1202 will require Process A to evict the L3 cache entry before adding a new cache entry ( For example, based on least recently used (LRU) or other cache management strategies).In one embodiment, conditional QoS circuitry/logic 1200 operates according to a set of conditional QoS rules 1205 that describe quality of service monitoring for specific process IDs and enforcement of specific resources (eg, cache, memory bandwidth, etc.) . In this case, a number of rules may be defined to identify when certain service level agreements (SLAs) for other processor resources must be changed based on telemetry data obtained from other processor resources (eg, obtained by condition monitoring circuitry 1201 ). 
In some embodiments, SLA is just another term for a class of service (CLOS) associated with a particular process. Alternatively, the SLA can be different from the defined service class and/or can be assigned to a specific CLOS. In one embodiment, a first QoS rule 1205 that matches current performance data associated with a set of rules (for a particular resource and PASID) is used to identify the SLA to be enforced.In one embodiment, quality of service enforcement circuitry/logic 1202 interacts with condition monitoring circuitry/logic 1201 to apply a specific SLA for a particular resource and PASID when a particular rule is triggered from the set of QoS rules 1205. One embodiment of QoS enforcement circuitry/logic generates an interrupt (eg, a software interrupt to the software stack) if a particular SLA cannot be enforced.The illustrated processor 1250 includes a network interface controller 1230 having different amounts of network bandwidth allocated to Process A (10 MBps in Figure 12A) and Process B (100 MBps in Figure 12A). Additionally, an indication of the number of LLC paths 1220 allocated to Process A and Process B is shown. Both the network interface controller bandwidth and the number of allocated cache ways can be dynamically adjusted for each process based on currently monitored conditions and QoS rules 1205 being implemented. Two cores 1210-1211 are also shown in Figures 12A-B, although the basic principles of the invention may be implemented on a processor with any number of cores.As indicated by the different modes in Figure 12A, process A executing on core 1211 consumes 100 Mbps of bandwidth on network interface controller 1230 and is allocated three LLC cache paths. Process B executing on core 1210 consumes 10MBps of NIC 1230 and has two LLC cache paths allocated. In this example, the QoS rules 1205 specify that the number of LLC paths 1220 allocated to a process depends on the traffic volume of the process measured at the integrated network controller 1230. For example, conditional telemetry data collected by QoS monitoring circuit 1201 may indicate that traffic used by Process B jumps from 10MBps in Figure 12A to 70MBps in Figure 12B. The specific rules 1209 shown in these two figures specify a threshold bandwidth value of 50 MBps above which a third cache way will be allocated to Process B, as indicated in Figure 12B.Similarly, multiple rules for other resources may be triggered by increasing network traffic (and/or other types of traffic) to a specified threshold (or multiple different thresholds). Thus, with these embodiments, different resources may be dynamically adapted to Process B's new SLA depending on detected changes to Process B's resource consumption.Figure 13 illustrates additional details of one embodiment of processor 1250, including new conditional circuitry/logic for registering and implementing quality of service rules 1320 including telemetry-based rules. The various components illustrated in Figure 13 may be implemented as circuits or using a combination of circuits and software/firmware.Resource monitor 1305 (which is an SLA-based monitor in one embodiment) monitors various processor resources as specified by current rule set 1320, including IO resources 1331 (eg, IO bandwidth consumed by each process), memory Resources 1332 (eg, memory bandwidth and/or space consumed by each process), and accelerator resources 1333 (eg, a fraction of the accelerator consumed by the process). 
These specific processor resources are highlighted for illustration purposes; various other forms of processor resources can also be monitored.In one embodiment, the resource monitor 1305 may focus its monitoring on a specific set of resources according to the current set of rules 1320 (eg, monitor memory bandwidth when a specific rule is based on memory bandwidth). Although illustrated as a single element in FIG. 13 , resource monitor 1305 may include differential circuits for monitoring different sets of resources, some of which may be implemented in different areas of processor 13 .Resource monitor 1305 may trigger a response from implementing circuit/logic 1302 when a rule-based threshold has been reached. For example, resource monitor 1305 may notify implementing circuitry/logic 1302 when resource usage for a particular process exceeds or falls below a threshold specified in rules 1320. Alternatively or additionally, implementing circuitry/logic 1302 may operate independently or semi-independently from resource monitor 1305 to dynamically adjust resource allocation based on current set of rules 1320 (e.g., when a process's network bandwidth exceeds a threshold) , increasing the number of L3 cache ways). For example, implementation circuitry 1302 may read various processor counters that reflect the current resource usage of a process, and implement any corrective actions to comply with current set of rules 1320 . For example, in Figure 13, implementing circuitry/logic 1302 may adjust one or more of compute engines 1341, accelerators, accelerators based on notifications received from resource monitor 1305 and/or resource monitoring data read directly from registers of the processor 1342 and the process utilization of the network interface controller 1343.In one embodiment, interface manager 1315 provides access to rulesets 1320 , resource monitors 1305 , and/or implementation circuits/logic 1302 . For example, privileged software (eg, an OS or hypervisor) can update the rule set 1320 to monitor/enforce new resources and/or new resource variables or thresholds. Interface manager 1315 may expose a programming interface to software to perform these updates.An example set of rules 1321 is shown in FIG. 13 and includes a table data structure with a separate row associated with each rule and a value to specify a procedure address space ID (PASID) (identifying a procedure), a rule And a separate column for the SLA definition associated with the rule.In one implementation, each rule includes one or more of the following elements:a. The Process Address Space ID (PASID) associated with the application or service to which the conditional SLA is attached.b. Unique Rule Identifier (RID) used to identify the rule.c. A description of the rule. In one embodiment, a rule may refer to any processor facility used to monitor processor activity, including but not limited to sets of performance counters (eg, to count LLC misses, NIC bandwidth, memory bandwidth, etc.). Rules can specify specific resource thresholds and/or more complex relationships between resources. For example, a rule may specify allocation or deallocation (or vice versa) that is performed when network interface bandwidth drops below a specified multiple of memory bandwidth. The rules may also include Boolean expressions defined using performance counters (eg, X=0 if the NIC bandwidth is less than 50MBps; and X=1 if the NIC bandwidth is greater than or equal to 50MBps).d. Service Level Agreement definitions associated with the rules. 
The service level agreement may include a specific resource with a specific resource ID, and the amount of that resource to be allocated to the corresponding PASID when the rule is triggered and selected. In one embodiment, the SLA definition may also include a priority for that particular rule.In one embodiment, resource monitor 1305 is responsible for enforcing rules 1320 and identifying those rules that need to be implemented by implementing circuit/logic 1302. In one implementation, every N time units (N may be configured using control registers such as MSR or any other CPU interface), resource monitor 1305 and/or implementing circuit/logic 1302 perform the following operations:a. Execute each of the rules 1320 to collect the required data from the relevant performance counters. Then, based on the results, a Boolean formula is executed to generate a Boolean value associated with each rule.b. Search the rule set 1321 and select the first rule or the first rule set with a Boolean value set to 1. A priority scheme may be implemented to determine the order of searches (eg, search rules associated with higher priority PASID values take precedence over search rules with lower priority PASID values). If priority is not used, registration order (eg, the order in which each rule is inserted into rule set 1321) may be followed.c. Per each of the selected Boolean rules, the resource monitor 1305 and/or the implementing circuit/logic 1302 configure the SLAs registered for the particular PASID and selected resource. For example, implementing circuitry/logic 1302 may configure relevant resources to ensure compliance with the rule. In one embodiment, if the SLA cannot be achieved, the implementing circuitry/logic 1302 may generate a software interrupt to notify that a particular SLA has not been configured due to lack of resources. For example, an interrupt may be generated if there is an insufficient number of LLC lanes to be allocated to the PASID or if there is insufficient I/O bandwidth.A method according to one embodiment of the present invention is illustrated in FIG. 14 . The method can be implemented on the above architectures, but is not limited to any particular architecture.At 1401, for each process, one or more rules are registered within a rule set. At least some of the rules specify conditional resource allocations for associated processes. As described above, for example, a conditional resource allocation may specify one or more thresholds to trigger a new resource allocation.At 1402, resource usage values are monitored for each process as specified in the rule set. For example, the set of counters may be configured to count based on the resource usage of each respective process. In one embodiment, telemetry data from remote devices may be collected and used in conjunction with local resource usage values as input to a set of rules.At 1403, a determination is made as to whether a rule-based threshold has been reached. For example, one or more counter values may be compared to one or more threshold values. When the threshold is reached, at 1404, one or more resource allocations associated with the process can be adjusted according to associated rules. 
For example, for a given process or group of processes, if the resource usage associated with the first resource exceeds a threshold, the resource allocation for the second resource is updated for that process or group of processes.Apparatus and method for closed-loop dynamic resource allocation control frameworkHigh-priority and latency-sensitive applications, such as packet processing and web searches, do not fully utilize all available resources of the processor or system. For example, in an online environment, to optimize total cost of ownership (TCO), service providers typically launch best-effort (BE) workloads on the same server(s) running high-priority applications so that they can more fully Use resources (and profit) efficiently.One challenge when this mix occurs is to implement dynamic adaptive sharing, in which high-priority workloads do not miss their SLAs due to resource shortages, while at the same time not targeting high-priority and latency-sensitive workloads. Sensitive applications isolate excessive resources to ensure that servers are more fully utilized.Embodiments of the present invention automatically, efficiently, and transparently balance resources between these best-effort and guaranteed performance categories. A particular embodiment includes a framework to dynamically control Resource Director Technology (RDT) resource allocation at fine granularity in an Intel Architecture (IA) server platform based on business load. However, it is to be noted that the underlying principles of the present invention are not limited to RDT or any specific resource management architecture. The underlying goal is to ensure that high-priority workloads meet latency/throughput or other user-defined key performance indicator (KPI) goals, while maximizing the performance of BE tasks, thereby further improving server utilization and reducing costs.By way of overview, embodiments of the present invention may include one or more of the following frameworks and methods for performing fine-grained dynamic resource allocation:a) Dominant factor using . In addition to using business models and other dynamic factors, one embodiment of the framework identifies and uses one or more "dominant factors" to adjust decisions. Dominance factors can be determined for compute, memory, network load, and other resources so that actions can be taken periodically and proactively to prevent regressions on non-discretionary key performance indicators while dedicating more resources toward best-effort tasks . Thus, for example, this embodiment can take action to prevent potential packet loss from occurring, for example, in contrast to existing schemes that only react after the fact.b) Take Preemptive Action. Even with periodic preemptive action as described above, if hardware-based detection identifies regressions on important goals (eg, SLAs for high-priority workloads), it preemptively allocates resources from best-effort tasks. Therefore, this embodiment may not wait for the start of the next time window to be reallocated by the agent of the resource controller.c) Adjust in combination with software strategy. In one embodiment, the software policy and hardware controller work in conjunction to prevent packet loss or violation of SLA key performance indicators for high priority workloads, while maximizing performance for best effort workloads. One embodiment of the platform utilizes "progress markers" (also known as "mile markers") provided by the software stack for different stages of the application. 
Using mile markers, you can specify your current progress and what goals the app has for the current period. The hardware utilizes these mile markers in order to correlate with current resource usage and resource configuration parameters. Additional embodiments described below enable mile markers to work with improved granularity for all the techniques described herein.d) Utilize Graduated SLA. Typically, SLA key performance indicators (KPIs) for high-priority tasks are multi-level, such that while strict SLA KPIs apply under normal conditions, moderately degraded SLA KPIs appear when there is a momentary spike in demand from high-priority tasks conditions apply. In one embodiment of the invention, this condition is explicitly modeled such that the assignment of high priority tasks is designed/trained to fit SLAs specific to demand patterns.Initially, one embodiment of the invention will be described via an experimentally verified example. Figure 15A shows how a certain amount of preferred resource allocation 1501 is traditionally provided to high priority workloads. Give the rest to any best-effort "BE" workload. The curve 1502 for the high priority workload shows the actual portion of the allocated resources being utilized.FIG. 15B shows an allocation 1510 of time interval-based variation (ie, the upper boundary of the curve) that can be safely provided to the BE workload. One embodiment of the present invention dynamically tracks resource utilization and provides this additional resource allocation to the BE workload, allowing the BE workload to utilize available resources that would otherwise be left unused.The controlled resources represented by the curves in Figures 15A-B can be any shared component within a processor or computing system, including but not limited to: CPU cycles, LLC utilization, memory bandwidth, memory utilization, socket-level power consumption, IO Bandwidth and page cache capacity, to name a few. Furthermore, the curves do not need to represent a single resource. Rather, the area in which the dynamic resource capacity can be redirected may be a multi-dimensional volume that represents a collection of related and/or unrelated resources. This is the wasted capacity that embodiments of the present invention restore in order to improve performance, efficiency, TCO and power consumption. The challenge is to maintain desired quality of service metrics for high-priority workloads whose demands on resource(s) are dynamic and may not be anticipated in advance.Embodiments of the present invention include novel techniques associated with dynamic closed-loop resource control. In one embodiment, prior training is used, embodied as a trained model that continuously steers resource allocation towards complex satisfiability regions (indicated by line 1511 below in Figure 15B ) area representation). In doing so, this embodiment includes techniques to identify forward satisfiability regions at each point in time based on one or more leading indicators.One implementation of this embodiment is built with an Intel Dual Socket Xeon Platinum 8176 CPU Server (SKX) running IPv4 forwarding from DPDK as a high priority workload and running omnetpp from SPEC CPU 2006 as best effort workload. This configuration includes eleven LLC paths. The default setting is 9 lanes assigned to high-priority IPv4 workloads and 2 lanes assigned to best-effort workloads. Policies are derived by dynamically allocating LLC paths between high-priority workloads and best-effort workloads. 
Figure 16 illustrates an incoming business model 1601 for simulating a 24 hour network business model.17A-C illustrate runtime performance of BE workloads measured in instructions per second. In the baseline static resource allocation, as shown in Figure 17A, the high-priority workload has the smallest packet drop rate (<0.01%), while the BE workload has the lowest performance. In this case, resource allocation consists of allocating 9 of the LLC's 11 lanes to high-priority workloads, and using the remaining 2 lanes for best-effort workloads.17B illustrates the performance of a best-effort workload under a dynamic scheme that uses model-based control as described herein to manipulate the LLC's number of paths differently in successive time segments. In this embodiment, model-based control predicts (anticipates) demand peaks for the high-priority workloads of Figure 16, thereby maintaining more cache paths for the high-priority workloads in the third and fifth time segments , and maintain the target (<0.01%) packet drop. With an average of more LLC paths available to the BE workload in Figure 17B, overall higher performance was achieved in all time segments, with a small drop in segments 3 and 5; In these experiments, the average BE workload gain was 37%, while meeting the <0.01% packet drop criterion for the baseline workload for high-priority workloads. Figure 17B can be compared to the best possible behavior shown in Figure 17C (representing a theoretical "ideal" allocation).In one embodiment, a trained reinforcement learning (RL) model is used to manipulate the resource allocation in Figure 17B. Additional details of the RL model are provided below. In summary, when used in a concrete implementation, the model:(1)For high-priority workloads, associate higher-level penalties with higher-level packet drops, and associate rewards for maintaining packet drops below SLA thresholds;(2)Correlate varying levels of incentives for dynamically higher-level LLC allocations for best-effort workloads to conditionally meet SLA thresholds for high-priority workloads;(3)generating a total reward that includes a combination of reward and penalty functions for both the high-priority workload and the best-effort workload in (1), (2); and(4)Resource allocation change actions and states are initialized based on incoming traffic rates for the past N time windows, packet processing latency for the past N time windows, and current resource allocation for high priority workloads.One reason for the successful formulation of the RL model is to use the ingress rate and latency of the last N time windows as dominant metrics that predict what will happen in the next time step for the currently arriving packet. Therefore, the teach control model is trained to expect peaks in the test entry function of FIG. 16 . By integrating this information as feedback from future (predicted) states, closed-loop resource control schemes are effectively proactive rather than reactive.While one implementation described in this paper uses an RL training scheme, any other feedback control scheme can be used to track and adjust actions, including actions based on dominant metric analysis, models, simulations, and formulas. Resource allocation modifications are made using composite feedback that includes the current distance to the target based on current metrics and the target's expected future trajectory. 
For the example of Figures 17A-C, the RL-trained control model follows an action-state-reward interaction.One embodiment of an architecture for control interaction for RL training is illustrated in FIG. 18 . Resource allocator 1802 , implemented in hardware, software, or a combination thereof, specifies resource allocations based on input from dynamic resource controller 1801 . For example, resource allocator 1802 may signal reallocations of particular sets of processor/system resources, such as memory bandwidth allocations and cache capacity allocations (eg, from best-effort workloads to high-priority workloads, or vice versa). After delay 1803, reward determination logic 1804 evaluates the results of the resource allocation action (including penalty and reward values for different monitored variables) to determine a total reward value, as described above. Based on the reward value and the current state 1800, the dynamic resource controller 1801 performs reward value-based reinforcement learning to control the resource allocator 1802, which requests more efficient resource allocation for best effort and high priority workloads.Figure 19 illustrates one embodiment to implement any closed loop control logic, including but not limited to reinforcement learning logic. This embodiment of the framework includes a telemetry data collector 1910 for collecting and evaluating telemetry data related to different types of workloads including, but not limited to: best effort workloads, high priority work workloads, multi-threaded workloads, and single-threaded workloads. In one embodiment, telemetry data collector 1910 filters telemetry data, identifies/generates telemetry data/events 1911, and/or monitoring data/events 1912 for telemetry data related to performance metrics of interest. By way of example and not limitation, this may include memory latency data, number of instructions backed out per unit time, cache miss rate, and cache allocation level. Note, however, that any type of performance metric can be tracked in this way. For example, for networking workloads such as IPV4 workloads, the metrics tracked may include packet loss and packet processing latency.In one embodiment, resource allocation controller 1920 analyzes telemetry data/events 1911 and/or monitoring data/events 1912 to determine platform optimization 1905 for simultaneous execution of best-effort workload 1901 and high priority on platform hardware 1904 Workload 1902. One implementation of resource allocation controller 1920 performs control functions, such as the reinforcement learning implementation shown in FIG. 18 . Alternatively or additionally, resource allocation controller 1920 may implement other machine learning, deep learning, or any other strategies for intelligently manipulating resource allocation on platform hardware 1904 . Thus, although certain embodiments described herein focus on reinforcement learning, the underlying principles of the invention are not limited to any particular form of dynamic optimization engine. Certain probabilistic techniques that quantify uncertainty in the achievement satisfaction region can also effectively adapt resource allocation between less conservative and more conservative levels depending on the quantified uncertainty.In a given implementation, multiple workloads can run on the server platform at various priority levels. 
For example, one implementation may include three levels, high priority, medium priority, and best effort, while another implementation may include only high priority and best effort priority levels. In one embodiment, priority levels are specified as numbers in the range 0 to N, where 0 includes the lowest priority and N includes the highest priority (ie, where the priority increases based on increasing priority values).Regardless of how priorities are defined, the SLAs for each priority class can be defined in terms of different key performance indicators (KPIs) such as packet loss rate, latency, throughput, and jitter. With concurrent execution of workloads, it is possible to have complex satisfiability regions where different KPIs interfere with each other.One embodiment of the resource allocation framework allows each application to register multiple KPIs so that it can make intelligent allocation decisions. In this implementation, telemetry collection subsystem 1910 periodically collects telemetry data, filters and stores it in a database, and provides an interface for visualization via various monitoring tools. Filtered/processed telemetry data, which may include telemetry data/events 1911 and monitoring data/events 1912, is consumed by resource allocation controller 1920, which evaluates the telemetry data for the next time window (or the next two or three). time window to reduce oscillation) to optimize resource allocation 1905. In one embodiment, the granularity of decision making for these optimizations 1905 may be configurable by the user.In one implementation, through heuristics obtained from analyzing experimental workloads, from continuous learning using probabilistic models, and/or from periodic and automatic learning using field machine learning methods (eg, reinforcement learning), or using hierarchical structures method to derive optimizations performed by the resource allocation controller 1920. In one embodiment, a holistic model is used where coarse-grained corrections are performed dynamically and quickly, and finer-grained corrections are applied over time on coarse-grained corrections, with less effort and less risk.As mentioned, one embodiment of the resource allocation controller 1920 uses the dominant metrics for optimization 1905 in each time window. In this embodiment, resource allocation controller 1920 incorporates analytics and telemetry data 1911-1912 for dominance observation; where telemetry collector 1910 captures the dominance of traffic load and resource usage so that it can be based on past circumstances state to proactively take action. By way of example and not limitation, using these techniques, resource allocation controller 1920 may prevent packet loss from occurring, rather than taking reactive actions only after observing that packet loss has occurred, such as using a proportional-integral-derivative (PID) method. That's the case. Various forms of telemetry data/events 1911 and monitoring data/events 1912 may be collected, such as queue occupancy, cache miss rate, incoming traffic rate for past N time windows, packet latency for past M time windows, hardware resources Utilization and application performance.As mentioned, one embodiment of resource allocation controller 1920 reallocates resources from region 1510 in Figure 15B to best-effort workloads. 
One embodiment of resource allocation controller 1920 implements a risk mitigation strategy to allow higher priority workloads to preempt resources without waiting for the next time window, given that the allocation is performed on behalf of non-critical best-effort workloads boundary. For example, if the packet loss rate is increasing and is within a threshold for the current time period (eg, due to a rapidly rising ingress rate), the resource allocation controller 1920 dynamically relocates ahead of the best-effort workload upon detecting this condition Allocate one or more resources.This implementation can dampen oscillations by overcorrecting immediately and then relaxing so that more resources become available again to be allocated to the best-effort workload. Sometimes the only negative impact on a best-effort workload may be increased latency, which may be more acceptable than absorbing hits on high-priority workloads. Thus, in the example of Figures 17A-C, if the packet loss rate increases, hardware can preemptively increase cache allocation for high-priority workloads, rather than waiting for software to make decisions at the beginning of each decision window. This hybrid hardware and software coordination prevents packet loss for high-priority workloads by acting more preemptively, while still maximizing performance for best-effort workloads.Typically, as the rate of demand for resources with limited availability approaches saturation levels, response times grow without bounds. For example, in M/M/1-PS (exponentially distributed inter-arrival and service-delay distribution with processor sharing), the average response time for the arrival of a demanded service can be shown as, where λ and μ are respectively arrival rate and service rate, which means that as λ approaches μ, the response time will grow unbounded. The implication of this observation is that, regardless of how much capacity is reserved for high-priority workloads, too large a burst of arrivals can cause a skew in response time and thus violate strict SLAs. As a result, the SLA must be accompanied by a condition on the arrival rate (ie the demand curve) under which the SLA can be satisfied. As a simple example, if the peak in Figure 16 is too large (eg, such as in the form of an impulse function), even if all 11 lanes of the LLC are allocated to the IPV4 workload, an SLA violation will result.In one embodiment, the SLA satisfaction condition for high priority workloads is accompanied by an upper limit on how big a burst of arrivals can be. The SLA may be a gradient SLA in which some specified percentile of the inverse of the inter-arrival time (eg, 95th, 99th percentile, etc.) must be below a threshold, and the threshold is a function of the SLA. As a result, instead of trying to meet an absolute SLA that is invariant with respect to the inter-arrival time histogram, an embodiment of an SLA is defined that is defined as the Nth percentile of the peak inter-arrival rate within each decision interval while bending. This feature is referred to herein as sliding SLA or gradient SLA, where the allowed KPI values change. 
Thus, some packet drops can be forgiven (ie, not considered in reward/penalty determination) when too many arrive in a given decision interval (eg, exceed a threshold).With this arrangement in mind, one embodiment of the present invention includes a model for manipulating resources that is trained to satisfy a dynamic satisfaction criterion that varies with the instantaneous arrival rate within a time window. Additionally, one embodiment adjusts from a strict SLA to a more flexible SLA under certain conditions. For example, when a strict SLA (referred to herein as "S1") cannot be met for a high priority workload, the less stringent SLA "S2" defined is met such that the total number of transitions from S1 to S2 is Bounds are permissible. For example, S1 can be defined as 0 packets dropped and max latency < 15 microseconds, and S2 can be defined as 0 packets dropped and max latency < 20 microseconds, or 1 packet dropped and max latency < 15 microseconds. The SLA may specify no more than two S1→S2 transitions within a 10ms interval. In this case, the target is initially set to S2, and then if S1 is exceeded twice while S2 is satisfied, the target is reset to S1 for the remainder of the interval, and preemption is implemented as described above ( For example, to preempt lower priority workloads).The benefit of the above features is that they allow a small amount of slack to be built into the SLA as a function of demand, and exploit this slack to improve beneficial utilization by dedicating more resources to lower priority workloads as a function of available slack.A specific implementation for determining resource allocation policies using IPv4 forwarding as a high priority workload and omnetpp as a best-effort workload will be described with respect to FIG. 20 . Details of the platform are provided above (eg Intel Dual Socket Xeon Platinum 8176 CPU Server (SKX), etc.). In this embodiment, a reinforcement learning (RL) method is used to manage packet loss. The RL controller 2001 continuously interacts with the system to collect relevant data and learn a policy that maximizes the cost function. In this embodiment, the components used in resource learning include: Action (A), State (S), Policy (P), and Reward (R). The RL controller 2001 implementing the current policy outputs a "Q value" for each possible action based on the current state 2000. In this implementation, the RL controller 2001 may perform Q-learning or other reinforcement learning that generates Q based on learned policies (eg, SARSA, deep Q-networks, deep deterministic policy gradients, etc.) value.In one embodiment, the action with the largest Q value is applied to the resource allocator 2002 that performs the action in the form of a resource allocation. After each action is implemented, after delay element 2003, reward determination logic 2004 determines a reward value based on the measured packet processing data (eg, packet loss metrics). The RL controller 2001 then uses the new state 2000 and the reward value to refine the policy, potentially specifying a new assignment action based on the state 2000.When used in the specific context of packet loss, action A may be the number of last-level cache (LLC) paths assigned to the next time window's high-priority and best-effort workload. 
State S is the incoming traffic rate for the past N time windows, the packet delay for the past M time windows, and the current last level cache allocation.The reward R reflects the goal of assigning the fewest possible LLC paths to high priority workloads with the lowest possible packet loss, and assigning the remaining LLC paths to best-effort workloads to improve server utilization.In one embodiment, the designed reward function is:Here, pkt_loss is the number of packets dropped during the current time window. Rpkt_loss is the reward for packet loss. If the packet is smaller than a predefined acceptable threshold (eg, zero packet loss or low packet loss, depending on usage), a positive reward +m4 is assigned to Rpkt_loss. If the packet drop is above this threshold th3, a negative reward is assigned as a penalty for Rpkt_loss. The larger the pkt_loss, the larger the penalty (m1>m2>m3).Rrdt is the reward assigned to the LLC pathway. When there are no packet drops, a higher reward is provided for the case where fewer LLC paths are used for high-priority workloads. Provides higher rewards for using more LLC paths for high-priority workloads when there are packet drops.The total reward Rtotal is the sum of Rpkt_loss and Rrdt, which takes into account both packet loss and LLC path allocation. When training the model, Rtotal will take the current software and platform parameters as input, and output the resource allocation strategy to the resource allocator 2002, which implements resource allocation in the next time window.Figure 21 illustrates a method according to one embodiment of the present invention. The method may be implemented within the context of the processor and system architectures described herein, but is not limited to any particular architecture.At 2101, usage of multiple execution resources by multiple workloads is monitored. As mentioned above, workloads may include high-priority workloads associated with guaranteed performance levels and best-effort workloads not associated with guaranteed performance levels. In other embodiments, three or more priority levels may be specified.At 2102, data related to usage of the plurality of allocated resources by the plurality of workloads over one or more time periods (eg, specified seconds, minutes, hours, days, etc.) is collected. For example, the collected data may be similar to the data shown in Figures 16 and 17A-B.At 2103, the collected data is analyzed to identify resources that may be reallocated from one or more high priority workloads to one or more best effort workloads in subsequent time periods. For example, the surplus resource shown at 1510 in Figure 15B can be identified. Additionally, the periodic nature of high priority workloads can be determined, as shown in Figure 16, so that the system can anticipate when additional resources will be available for reallocation, as shown in Figure 17B. As mentioned above, the analysis can be performed by a machine learning engine. In one specific implementation, reinforcement learning is performed to evaluate data collected over various time periods, generate reward values to modify resource allocation, and continue to collect data and update reward values based on learned information about workload characteristics.At 2104, the identified resources are reallocated to the best-effort workload during one or more subsequent time periods. 
In one embodiment, based on any periodicity detected in the high priority workload, a first amount of resources is allocated for a first time period, a second amount of resources is allocated for a second time period, and so on.At 2105, best-effort and high-priority workloads are performed with the new resource allocation. High priority workloads are continuously monitored for guaranteed performance levels. For example, in one embodiment, a resource allocation manager (eg, a cluster manager or node-level manager as described below) monitors certain guaranteed performance metrics to ensure that high-performance workloads meet specified key performance indicators (KPIs) ). This may include, for example, guaranteed latency and throughput values.At 2106, if it is determined that the guaranteed performance level of the high priority workload is close to the violation condition (or if the performance level has already been violated), then at 2107, resources are preemptively allocated from the best effort workload to the high priority work load to ensure that the guaranteed level of performance is maintained (or that the violation is corrected as quickly as possible).Using the techniques described above, workloads are monitored and surplus resources are dynamically allocated to perform best-effort workloads more efficiently. At the same time, monitor high-priority workloads to ensure compliance with existing key performance indicators (KPIs). In one embodiment, the mile markers described below are also used to continuously monitor high-priority workloads, even when executing on a node-based distributed pipeline architecture.Expressive workload performance metrics for performance monitoring and dynamic resource allocationOne of the key elements of edge, cloud, and other emerging architectures (eg, functions-as-a-service) is how to maximize service density per platform while maintaining a certain quality of service or service level agreement (SLA). In this sense, there are tradeoffs for designing a platform to meet latency or bandwidth instances.Increasing the number of services translates directly into increases in both throughput and latency. For example, in the case of gender facial recognition, it has been observed that the throughput increases from 20 fps (per service) at latency below 5 ms for a single core up to at 20 cores each Request 140 fps with 40 ms latency (per service).Therefore, depending on the metrics to be optimized, the platform should be populated in different ways and with the appropriate quality of service knobs. In terms of quality of service or service level agreements, several types of potential models can be applied:1)No Quality of Service or Service Level Agreement (SLA) for that particular service. This may mean that no fixed amount of private or shared resources is attached.2)Soft Service Level Agreement. A service provides the allocation of a set of private resources (eg, cores, logical processors, etc.) and shared resources (eg, LLC, memory, etc.). In this model, services will provide throughput and latency to users depending on the amount of computation that private and shared resources can provide. However, the level of computation may be limited to the amount of service that uses shared resources. Therefore, in this scenario, the guaranteed latency of 99% may not be possible when the number of services and the pressure on the platform increases.3)Hard service level agreements. The service is using all resources in private mode. 
In this case, the expected jitter should be minimal. To achieve this SLA, two approaches can be employed: (a) the platform is adequately allocated to services; and (b) all shared resources can be hard-hardened using allocation schemes such as cache allocation techniques or memory bandwidth allocation techniques Partitioned and assigned to individual threads/tasks.It is impractical to implement true end-to-end hardware partitioning (including L1, L2 allocation, memory bandwidth allocation, I/O, etc.) on current systems. Additionally, end-to-end partitioning will become more challenging as multi-tenancy and multi-service consolidation increases with core counts. On such a system, it would be difficult to identify the resources a particular service is utilizing. This problem is particularly exacerbated when a workload is decomposed into linked sets of microservices or functions, where appropriate consideration must be given to other cluster-level resources such as network, memory, storage, and accelerators manage.Today, there is no general technology for expressing and monitoring service level agreements through contracts between software and the hardware on which the software runs. Service Level Agreements are currently expressed in terms of layers, which are used to provide differentiated treatment of resource allocation. However, there is no guarantee of performance because it is difficult to link the efficacy of these resources to the assigned application and its impact on performance.Figure 22 illustrates one specific example of an end-to-end application for facial recognition, and the application is broken down into stages that run on different nodes within a data center. Briefly, a compressed video stream 2201 is decoded by a video decoding component 2205 (eg, using H.264/H.265), and the resulting uncompressed video stream (and uncompressed video stream 2202 received directly into the system) by Video preprocessing component 2210 to preprocess. For example, the video processing component 2210 performs normalization operations such as resizing, slicing, and color space conversion. The face detection/extraction component 2215 performs face detection operations (eg, using a multi-task cascaded convolutional neural network (MTCNN), YOLO face, single shot (SSD), etc.), and the feature extraction component 2220 extracts relevant image features, thereby reducing storage Require. Finally, the classification component 2225 performs facial classification using binary classification techniques, such as support vector machines (SVMs). All components in this execution pipeline (compute, network, storage) need to be properly monitored and managed to have real-time or predictable end-to-end performance guarantees.Embodiments of the present invention allow applications/workloads to communicate progress by generating progress markers (also known as "mile markers") that hardware uses to monitor and dynamically adapt to work load changes. In particular, progress markers provide an efficient mechanism to allow hardware (and associated software, if applicable) to monitor workload progress and allocate resources accordingly to meet service level agreements between workloads and hardware. Require.One embodiment of the present invention associates application performance objectives, such as service level objectives (SLOs), with resource allocations in order to achieve the SLOs. Traditionally, SLOs and SLAs have been managed at the software level. 
However, with increasing core counts and high-rate dynamic application requirements, software-managed solutions have failed to provide meaningful SLA consistency. With embodiments of the present invention, hardware mechanisms use mile markers to express SLAs in terms of mile markers/time (eg, in terms of throughput and/or latency), providing significantly improved resource allocation for SLA consistency.In one embodiment, local hardware establishes various SLAs based on SLO policies provided by the service via performance mile marking. Whenever the policy cannot be enforced, the global loop is notified in order to implement system-level policies (for example, migrating services to another node or edge location). One embodiment of the present invention includes the following components:1)Workload instrumentation to express the concept of progress in its execution, sometimes referred to as "mile markers";2)Service provider definition and mapping of SLAs to workload performance (e.g. speed can be expressed in miles per second and time delay between mile markers);3)Hardware at various levels (eg, cluster level, node level, component level, etc.) to measure workload progress (velocity monitoring).An example of a video analysis pipeline for facial recognition will be used to illustrate the operation of one embodiment of the present invention. It should be noted, however, that the underlying principles of the present invention are not limited to any particular type of application pipeline or set of workloads.Figure 23 illustrates the end-to-end flow of a compressed video stream entering a data center and being processed to detect and recognize faces. In this implementation, the data center supports a distributed streaming platform, such as Apache Kafka, that includes processing node groups 2305, 2310, 2320, and 2330, and each group is configured as A specific set of operations is performed at a stage in the overall processing pipeline.In the video analytics example shown in Figure 23, the stream captured from the camera 2301 is transmitted to an ingestion and decoding node 2305 (eg, at 30fps, 60fps, etc.), which includes a device to receive the video stream 2305A, decode the video stream 2305B (eg, via an H.264/H.265 decoder), and transmit the decoded video 2305C to the circuitry/logic of the set of messaging nodes 2310.Messaging nodes 2310 are specifically designed to store data received from data producers (eg, ingest and decode nodes 2305) to persistent and/or volatile storage 2315, and to respond to requests from data consumers Instead, the data is transferred from persistent storage 2316. In this case, the consumers are a set of detection nodes 2320 that receive video frame 2321, detect faces in video frame 2322, and send detection results 2323 back to messaging node 2310.The messaging node 2310 receives and stores the results to persistent/volatile storage 2317 and transfers the data from persistent storage 2318 in response to requests from a set of inference nodes 2330 . 
The inference node receives the detection results 2331, performs inference operations 2332 on the results (such as facial recognition) to generate facial recognition results, and transmits the facial recognition results back to the messaging node 2310 from which the facial recognition results can be distributed To computing devices that have requested facial recognition.The overall performance of the facial recognition process depends on the progress of the workload through all stages of ingestion, detection, and inference, including the messaging stage supported by the messaging node 2310, which in one embodiment is powered by Apache Kafka accomplish. The workload performance indicator (KPI) for this example is the number of faces recognized per second, which depends on all components/nodes 2305, 2310, 2320, 2330 in the process. Allocating compute, network and storage resources to support the required KPIs (faces per second) as service level agreements is very difficult. For example, face/second KPIs are not generic enough for hardware or resource orchestrators to understand and act upon. Additionally, it does not translate directly into resource allocation at the different stages 2305, 2310, 2320, 2330 of the cluster execution pipeline.One embodiment of the present invention includes software and/or circuitry to support monitoring of each workload as it progresses between different nodes in the cluster. As mentioned above, this can be accomplished by adding progress markers (also referred to herein as "mile markers") in the code and specifying performance expectations against the progress markers.24A illustrates the facial recognition example of FIG. 23 using instrumented mile markers MM1-MM9 to track execution flow between various processing nodes 2305, 2310, 2320, and 2330. Use mile markers to announce the progress of the workload through the pipeline as different stages are initiated or completed. For example, MM1 may be generated when a frame is first received, MM2 may be generated when decoding is initiated or completed, and MM3 may be generated when a decoded frame is transmitted to messaging node 2310. In a similar manner, MM4 can be generated when a frame is received from messaging node 2310, MM4 can be generated when a face detection operation is initiated or completed, and MM6 can be generated when facial recognition data is transmitted back to messaging node 2310. Mile markers MM7, MM8, and MM9 are similarly generated based on initiation or completion of processing by the receive 2331, inference 2332, and transmit 2333 components, respectively.In one embodiment, each mile marker includes a timestamp indicating when its pipeline stage was initiated and/or completed. Additionally, a mile marker may indicate a processing stage and/or a specific node on which the stage is implemented.In one implementation, node-level resource managers 2405, 2420, and 2430 associated with each set of nodes 2304, 2320, and 2330, respectively, evaluate the performance data contained in the various mile markers MM1-MM9, and take as necessary Corrective actions to ensure compliance with relevant KPIs. 
For example, if the mile marker indicates performance below a threshold, the node resource manager 2405 associated with the ingest and decode node 2305 may reallocate processing resources to improve performance during the receive, decode and/or transmit phases 2305A-C.In one embodiment, the resource manager operates under the control of a cluster manager 2450, which evaluates mile markers (and potentially other monitoring data) from the entire processing cluster to make cluster-wide resource allocation decisions. Cluster manager 2450 may identify one or more groups of nodes 2305, 2310, 2320, and 2330 that represent performance bottlenecks for the overall processing pipeline. For example, based on timing data contained in the mile markers, the cluster manager 2450 may determine that the detection node 2320 is increasing the latency of the pipeline to reach or approach a threshold. In response, cluster manager 2450 may initialize additional nodes to detect node group 2320 to improve latency.Thus, in this embodiment, node-level resource managers 2405, 2420, 2430 can quickly and efficiently perform local resource allocation operations based on mile markers, and cluster manager 2450 can perform cluster-wide resource allocation to ensure that Load, KPIs such as throughput and latency are met. Additionally, the service provider manager 2460 may perform service-wide analysis and resource allocation based on mile markers (and other data) received from the plurality of cluster managers 2450 .Using the above techniques, service provider SLAs can be provided as a measure of the mile mark, as throughput and latency related protocols to be implemented by the cluster manager 2450 and/or node resource managers 2405, 2420, 2430. Additionally, each cluster manager 2450 can use these techniques to optimize scheduling for different tenants as well as compute, network and storage components to ensure the required SLA consistency while reducing costs.At the platform and/or processor level, one or more node-level resource managers 2405, 2420, 2430 can monitor different stages of execution and manage internal resource allocation. The allocation and enforcement of resource allocation can be achieved by adjusting the number of cores or logical processors allocated to the workload, the frequency of cores (or groups of cores), the amount of cache resources allocated to each workload ( For example, the number of cache ways, the amount of cache storage), and the memory bandwidth allocated to each workload. Once resource allocations have been made, node-level resource managers 2405, 2420, 2430 may implement resource allocation via cache allocation enforcement, memory bandwidth enforcement, etc., or any other processor-level resource management circuitry to provide cluster manager 2450 The requested performance guarantee.As illustrated in Figure 24B, the various techniques described above for adjusting resource allocation based on mile markers may be implemented in the local resource manager 2405 within the processor 1350. In particular, resource monitoring circuitry 1305 (described previously with respect to FIG. 13 ) may evaluate mile markers that are triggered in response to workload execution across various service provider nodes. If the resource monitoring circuit 1305 determines that the performance guarantees of the SLA are not being met, it may determine a reallocation of resources on one or more of the nodes 2305, 2320, 2330. 
Implementing circuitry 1302 may then implement the new resource allocation on each individual node (eg, within each processor 1350 of the node).Using the above techniques, service provider SLAs can specify workload metrics in an application-specific manner, reflecting the final service provided by that application. For example, for facial recognition, workload metrics can be expressed in faces/sec. Additionally, key performance indicators can be expressed using mile markers. For example, the throughput KPI may be specified as mile marks 1-9/sec, and the latency KPI may be specified as the time between mile marks 1 and 9.Cluster manager 2450 extensions can be implemented to specify the number of nodes per mile marking phase, placement of mile marking nodes for SLA guaranteed routing, and dynamic allocation of accelerators to meet SLA requirements.25 illustrates a representation of mile marker monitoring and resource management to support application level SLA. Expected state 2510 is shown in table 2501, and current state 2530 is specified in table 2503, both based on the set of mile markers MM1-MM7. The SLA throughput value is specified in mile marks per second, and the SLA latency value is specified as the time for all mile marks to be generated.The scheduler/resource manager 2520 dynamically adjusts resource allocation based on detected differences between the expected state 2510 and the current state 2530. For example, in one embodiment, the scheduler/resource manager 2520 will adjust resource allocations in an attempt to bring the current state 2530 into line with the expected state 2510.A method according to one embodiment of the present invention is illustrated in FIG. 26 . The method may be implemented on the architectures described herein, but is not limited to any particular processor or system architecture.At 2601, telemetry data related to execution of multiple workloads at different priority levels is collected. For example, in one embodiment, workloads are classified as high-priority workloads and best-effort workloads. At 2602, machine learning is performed using the telemetry data to determine whether it is possible to perform a more efficient allocation of resources. As described above, for example, in one embodiment, a reinforcement learning engine is used to determine whether to perform resource allocation in accordance with a reward system (ie, where rewards and penalties are generated based on measured performance metrics).At 2603, if more efficient resource allocation is possible, at 2604, one or more resource allocations associated with the plurality of workloads are modified. If a more efficient allocation is not possible or guaranteed, the process returns to 2601.A method according to one embodiment of the present invention is illustrated in FIG. 27 . The method may be implemented on the architectures described herein, but is not limited to any particular processor or system architecture.At 2701, multiple workloads are executed on a distributed computing pipeline including multiple nodes to execute multiple pipeline stages. One such embodiment was described above with respect to FIG. 24 .At 2702, mile markers are generated in response to initiation or completion of one or more pipeline stages. 
For example, in response to initiation of the first and second pipeline stages associated with the first set of nodes, a mile marker may be generated and provided to a local resource manager as described above.At 2703, the execution of multiple distributed workloads is monitored using mile markers to determine whether throughput and/or latency requirements (eg, as specified in the SLA/KPI) are being met for one or more workloads of). If so, at 2704, the current resource allocation is not modified and the process returns to 2701. If not, at 2704, a decision is made to change one or more resources. At 2705, one or more resource allocations are modified according to the current resource allocation policy.ExampleThe following are example implementations of different embodiments of the invention.Example 1. A method comprising: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priorities associated with one or more guaranteed performance levels Workloads, and best-effort workloads not associated with guaranteed performance levels; analyze the data to identify changes from one or more priority workloads to one or more best-effort workloads over one or more subsequent time periods Resource reallocation of workloads while still maintaining guaranteed performance levels; reallocating resources from priority workloads to best-effort workloads during subsequent time periods; monitoring with respect to guaranteed performance levels during subsequent time periods execution of the priority workload; and preemptively reallocating resources from the best-effort workload to the priority workload during a subsequent time period in response to detecting that the guaranteed performance level is in danger of being violated.Example 2. The method of example 1, wherein the guaranteed performance level includes guaranteed latency and/or guaranteed throughput.Example 3. The method of example 2, wherein the guaranteed performance level is specified as a key performance indicator (KPI) of a service level agreement (SLA).Example 4. The method of example 1, wherein analyzing comprises using the data to perform reinforcement learning to identify resource reallocations while still maintaining a guaranteed level of performance.Example 5. The method of example 4, wherein performing the reinforcement learning further comprises: generating the first one or more reward values associated with resource allocation to the most-effort workload; generating a second one or more reward values and/or one or more penalty values specifying the performance metric; adding the reward value and the penalty value to generate a final reward value; and reallocating resources to attempt to maximize the final reward value.Example 6. The method of example 5, wherein resource allocation includes cache allocation to best-effort workloads, wherein the increase in cache allocation is used to generate an increased reward value.Example 7. The method of example 6, wherein the second one or more reward values comprise performance reward values for maintaining consistency with one or more guaranteed performance levels.Example 8. The method of example 1, wherein the first set of resource allocations are to be performed by resource management circuitry of the processor.Example 9. The method of example 8, wherein the first set of resource allocations includes cache occupancy levels and cache or memory bandwidth.Example 10. 
An apparatus comprising: a telemetry data collector to collect data related to the usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including data related to one or more guaranteed Priority workloads associated with performance levels, and best-effort workloads not associated with guaranteed performance levels; a resource allocation controller to analyze the data to identify changes to be made in one or more subsequent time periods Reassignment of resources from one or more priority workloads to one or more best-effort workloads, while still maintaining guaranteed performance levels, used by a resource allocation controller to reallocate resources from priority workloads during subsequent time periods to a best-effort workload; a telemetry data collector to monitor execution of the prioritized workload with respect to guaranteed performance levels during subsequent time periods; and a resource allocation controller to respond to detecting that the guaranteed performance level is in violation At risk, resources are preemptively reallocated from best-effort workloads to priority workloads during subsequent time periods.Example 11. The apparatus of example 10, wherein the guaranteed performance level includes guaranteed latency and/or guaranteed throughput.Example 12. The apparatus of example 11, wherein the guaranteed performance level is specified as a key performance indicator (KPI) of a service level agreement (SLA).Example 13. The apparatus of example 10, wherein the resource allocation controller includes a machine learning engine to perform reinforcement learning using the data to identify resource reallocations while still maintaining a guaranteed level of performance.Example 14. The apparatus of example 13, wherein performing reinforcement learning further comprises: generating the first one or more reward values associated with resource allocation to the most-effort workload; generating a second one or more reward values and/or one or more penalty values specifying the performance metric; adding the reward value and the penalty value to generate a final reward value; and reallocating resources to attempt to maximize the final reward value.Example 15. The apparatus of example 14, wherein the resource allocation includes a cache allocation to a best-effort workload, wherein the increase in the cache allocation is used to generate an increased reward value.Example 16. The apparatus of example 15, wherein the second one or more reward values comprise performance reward values for maintaining consistency with one or more guaranteed performance levels.Example 17. The apparatus of example 10, wherein at least a portion of the resource allocation controller includes resource management circuitry within the processor to perform the first set of resource allocations.Example 18. The apparatus of example 17, wherein the first set of resource allocations includes cache occupancy levels and cache or memory bandwidth.Example 19. 
A machine-readable medium having program code stored thereon, the program code, when executed by a machine, causes the machine to: using data about the workloads, including priority workloads associated with one or more guaranteed performance levels, and best-effort workloads not associated with guaranteed performance levels; analyzing the data to identify Reallocate resources from one or more priority workloads to one or more best-effort workloads during one or more subsequent time periods, while still maintaining guaranteed performance levels; redistributing the priority workload to the best-effort workload; monitoring the execution of the priority workload with respect to the guaranteed performance level during a subsequent time period; and in response to detecting that the guaranteed performance level is in danger of being violated, Preemptively reallocate resources from best-effort workloads to priority workloads during subsequent time periods.Example 20. The machine-readable medium of example 19, wherein the guaranteed performance level includes guaranteed latency and/or guaranteed throughput.Example 21. The machine-readable medium of example 20, wherein the guaranteed performance level is specified as a key performance indicator (KPI) of a service level agreement (SLA).Example 22. 19. The machine-readable medium of claim 19, wherein analyzing comprises using the data to perform reinforcement learning to identify resource reallocations while still maintaining a guaranteed level of performance.Example 23. The machine-readable medium of example 22, wherein performing the reinforcement learning further comprises: generating the first one or more reward values associated with resource allocation to the most-effort workload; generating a second one or more reward values and/or one or more penalty values for the associated specified performance metric; adding the reward value and the penalty value to generate a final reward value; and reallocating resources to attempt to achieve the final reward value maximize.Example 24. The machine-readable medium of example 23, wherein the resource allocation includes a cache allocation to a best-effort workload, wherein the increase in the cache allocation is used to generate an increased reward value.Example 25. The machine-readable medium of example 24, wherein the second one or more reward values comprise performance reward values for maintaining consistency with performance levels guaranteed by one or more examples.Example 26. The machine-readable medium of example 19, wherein the first set of resource allocations are to be performed by resource management circuitry of the processor.Example 27. The machine-readable medium of example 26, wherein the first set of resource allocations includes cache occupancy levels and cache or memory bandwidth.In the foregoing specification, embodiments of the present invention have been described with reference to specific exemplary embodiments of the present invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.Embodiments of the present invention may include the steps that have been described above. 
The steps may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, the steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.Instructions, as described herein, may refer to a specific configuration of hardware, such as a dedicated function configured to perform certain operations, or to have predetermined functions or software instructions stored in a memory embodied in a non-transitory computer-readable medium. Integrated Circuit (ASIC). Accordingly, the techniques shown in the various figures may be implemented using code and data stored and executed on one or more electronic devices (eg, end stations, network elements, etc.). Such electronic devices store and transmit (with other electronic devices internally and/or over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media ( For example, magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase change memory) and transitory computer machine readable communication media (eg, electrical, optical, acoustic, or other forms of propagated signals - such as carrier waves, infrared signal, digital signal, etc.). Furthermore, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (such as keyboards, touchscreens, and/or displays), and network connections. The coupling of the set of processors and other components is typically through one or more buses and bridges (also known as bus controllers). The storage devices and signals carrying network traffic represent one or more machine-readable storage media and machine-readable communication media, respectively. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of the embodiments of the invention may be implemented using various combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without some of these specific details. In some instances, well-known structures and functions have not been described in detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the present invention should be judged in light of the following claims. |
A memory can have a stacked memory array that can have a plurality of levels of memory cells. Each respective level of memory cells can be commonly coupled to a respective access line. A plurality of drivers can be above the stacked memory array. Each respective driver can have a monocrystalline semiconductor with a conductive region coupled to a respective access line. |
1.A memory, which includes:A stacked memory array including multiple levels of memory cells, each corresponding level of the memory cells being commonly coupled to a corresponding access line; andA plurality of drives located above the stacked memory array;Each of the corresponding drivers includes a single crystal semiconductor including a conductive area coupled to the corresponding access line.2.The memory according to claim 1, wherein each corresponding access line forms a corresponding step of a ladder structure and the plurality of drivers are directly above the ladder structure.3.The memory according to claim 1, further comprising a dielectric between the stacked memory array and the single crystal semiconductor.4.The memory according to any one of claims 1 to 3, whereinThe conductive region is the first source/drain in the first part of the single crystal semiconductor;The single crystal semiconductor further includes:A second source/drain located in the second part of the single crystal semiconductor; andA channel region located in the third part of the single crystal semiconductor between the first part and the second part; andEach corresponding driver includes a gate on the channel region.5.The memory according to claim 4, wherein the gate is shared by the plurality of drivers.6.The memory according to claim 4, wherein the single crystal semiconductor is a fin and the gate surrounds the channel region.7.4. The memory of claim 4, wherein the second source/drain is coupled to receive an access signal to access the memory cells commonly coupled to the corresponding access line.8.4. The memory of claim 4, wherein the second source/drain is coupled to a logic circuit system below the stacked memory array or above the stacked memory array.9.The memory of claim 4, wherein each corresponding driver includes a gate dielectric between the gate and the channel region.10.The memory according to claim 4, wherein:The gate includes at least one of polysilicon and metal; andThe single crystal semiconductor includes a plurality of fins.11.A memory, which includes:A stacked memory array, which includes a ladder structure including corresponding access line ladders that are respectively commonly coupled to corresponding levels of memory cells of multiple levels of memory cells;A plurality of single crystal semiconductor fins are located at a level above the stepped structure, each corresponding single crystal semiconductor fin is directly above the corresponding step, and each corresponding single crystal semiconductor fin includes:A first source/drain, which is coupled to the corresponding step;A second source/drain coupled to receive a signal for accessing the corresponding level of the memory cell; andA channel region, which is between the first source/drain and the second source/drain; andGates, which are commonly coupled to the channel region.12.11. 
The memory of claim 11, further comprising a gate dielectric between each channel region and the gate, the gate dielectric coupling the gate to each channel region;Wherein the gate and the dielectric surround a portion of the fin.13.The memory according to any one of claims 11 to 12, whereinThe first source/drain and the second source/drain have a first conductivity level; andEach corresponding single crystal semiconductor fin further includes a conductive region between the first source/drain and the channel region and between the second source/drain and the channel region, so The conductive region has a second conductivity level lower than the first conductivity level.14.A memory, which includes:A stacked memory array includes a plurality of blocks, each corresponding block includes a plurality of levels of memory cells, each corresponding level of the memory cells is commonly coupled to a corresponding access line of the plurality of access lines, and each access line forms a Corresponding steps of the stepped structure of the corresponding blocks, so that the stepped structures of the corresponding blocks respectively have steps at a common level;A plurality of single crystal semiconductors located at a level above the stepped structure, such that the corresponding single crystal semiconductor structure is shared by the steps at each of the common levels;Wherein each corresponding single crystal semiconductor includes: a first source/drain coupled to each of the steps at the common level; and a second source/drain between the common level Between said steps at the location; andA channel region located above each of the steps at the common level between the second source/drain and the first source/drain, such that the channel region is located at each Above each corresponding step of a corresponding block; andThe corresponding gates are commonly coupled to the channel region above each corresponding step of each corresponding block.15.The memory according to claim 14, further comprising a dielectric between the step structure and the plurality of single crystal semiconductors;Wherein the dielectric includes oxide.16.A method for forming a memory, which includes:A stacked memory array is formed, the stacked memory array includes a plurality of levels of memory cells, each corresponding level of the memory cells is commonly coupled to a corresponding access line of the plurality of access lines, and each access line forms a corresponding step of a ladder structure ;Forming a first dielectric on the stacked memory array;Attaching a single crystal semiconductor to the first dielectric such that the single crystal semiconductor is on the first dielectric;Dividing the single crystal semiconductor into a plurality of segments;Forming a second dielectric on the plurality of segments;Forming a first conductor on the second dielectric;Forming a source/drain in each corresponding segment of the single crystal semiconductor; andA plurality of second conductors passing through the first dielectric are formed such that each corresponding second conductor couples the source/drain in each corresponding segment to the corresponding step of the stepped structure.17.The method of claim 16, wherein each corresponding segment is directly above the corresponding step.18.The method according to any one of claims 16 to 17, whereinThe source/drain in each corresponding segment is the first source/drain; andThe method further includes:A second source/drain is formed in each corresponding segment such that each 
corresponding segment includes the second dielectric between the first source/drain and the second source/drain And the first conductor;Wherein forming the first source/drain and the second source/drain in each corresponding segment includes:Remove parts of the second dielectric and the first conductor from the corresponding section to expose the corresponding section of the corresponding section and form the first in the corresponding section of the corresponding section, respectively Source/drain and the second source/drain.19.The method of claim 18, whereinThe method further includes forming a first conductive region and a second conductive region in the corresponding portions of the corresponding segment before forming the first source/drain and the second source/drain, respectively ;Forming the first source/drain and the second source/drain in the respective portions of the corresponding segment respectively includes forming the first source/drain in the first conductive region and the second conductive region, respectively A source/drain and the second source/drain; andWherein the first conductive region and the second conductive region have a first conductivity level, and the first source/drain and the second source/drain have a greater than the first conductivity level The second conduction level.20.19. The method of claim 19, further comprising before forming the first conductive region and the second conductive region:Forming a third dielectric on the first conductor; andA dielectric spacer is formed on the side of the third dielectric and the side of the first conductor.21.The method of claim 16, further comprising forming the single crystal semiconductor before attaching the single crystal semiconductor to the first dielectric. |
Drive placement in memory with stacked memory arrayTechnical fieldThe present disclosure generally relates to electronic systems (e.g., memory systems), and more particularly, the present disclosure relates to drive placement in memories having stacked memory arrays.Background techniqueThe memory system may be implemented in electronic systems such as computers, cellular phones, handheld electronic devices, and so on. Some memory systems (such as solid state drives (SSD), embedded multimedia controller (eMMC) devices, universal flash storage (UFS) devices, and the like) may include non-volatile memory for storing host (e.g., user) data from the host. Volatile storage memory. Non-volatile storage memory provides persistent data by saving stored data when it is not powered, and can include NAND flash memory, NOR flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), Erase programmable ROM (EPROM) and resistance variable memory (such as phase change random access memory (PCRAM), three-dimensional cross-point memory (such as 3D XPoint), resistance random access memory (RRAM), ferroelectric random access memory (FeRAM), Magnetoresistive Random Access Memory (MRAM)) and programmable conductive memory and other types of memory.The memory may include a memory array, which may include groups of memory cells, such as blocks, sub-blocks, strings, and so on. In some examples, the memory array may be a stacked memory array, which may be referred to as a three-dimensional memory array, such as a three-dimensional NAND memory array. For example, memory cells at a common location (for example, at a common vertical level) in a stacked memory array may form a hierarchy of memory cells, sometimes referred to as a hierarchy of memory cells. The memory cells at each corresponding level may be commonly coupled to a corresponding common access line, such as a word line, at the corresponding level. In some examples, corresponding access lines at corresponding levels may form steps of a ladder structure. Memory cells from different levels may be coupled in series to form a series-coupled memory cell string (e.g., NAND string) between a select transistor coupled to the source and a select transistor coupled to a data line (e.g., bit line).Description of the drawingsFigure 1 illustrates a device according to several embodiments of the present disclosure.Figure 2 illustrates a portion of a memory according to several embodiments of the present disclosure.Figure 3 illustrates a portion of a memory according to several embodiments of the present disclosure.Figure 4A is a top view of a portion of a memory according to several embodiments of the present disclosure.4B to 4D are various cross-sectional views associated with FIG. 4A according to several embodiments of the present disclosure.5A to 5C are various views corresponding to specific processing stages associated with forming a memory according to several embodiments of the present disclosure.6A to 6I are various views corresponding to specific processing stages associated with forming a memory according to several embodiments of the present disclosure.Figure 7A is a top view of a portion of a memory according to several embodiments of the present disclosure.Figure 7B is a top view of a portion of a memory according to several embodiments of the present disclosure.Fig. 7C is a cross section viewed along the line 7C-7C in Figs. 
7A and 7B.Figure 8A is a top view of a portion of a memory according to several embodiments of the present disclosure.8B and 8C are various cross-sectional views associated with FIG. 8A according to several embodiments of the present disclosure.Figure 9 is a cross-sectional view of a portion of a memory according to several embodiments of the present disclosure.Figure 10 is a block diagram of a device according to several embodiments of the present disclosure.Detailed waysDrivers (e.g. string drivers) can be used to selectively supply access signals (e.g., programming signals (e.g., programming voltage)) to access lines at specific levels of the stacked array to access (e.g., program) memory cells coupled to the access lines . The corresponding string driver can be coupled to each corresponding access line in the memory array. For example, the corresponding string driver may be coupled to each corresponding rung corresponding to the corresponding access line. It should be noted that this type of driver may sometimes be referred to as an access line (such as a word line) driver. Various current methods place corresponding string drivers under the array so that there are corresponding string drivers for each corresponding level of the array under the stacked array.In order to meet the demand for higher-capacity memory, designers continue to strive to increase memory density (for example, the number of memory cells in a given bottom area of an integrated circuit die). The way to increase the density of memory devices in a stacked array is to increase the number of levels of memory cells and therefore the number of access lines and the number of string drivers. However, without increasing the bottom area (eg, occupied area) of the integrated circuit die, there may not be enough space under the stacked memory array to accommodate the increased number of string drivers. In addition, placing the string driver under the memory array causes the wiring in the stacked array to become more complicated as the number of levels increases.The present disclosure solves the problem of accommodating the increased number of string drivers under the stacked memory array by moving the string drivers above the memory array. Each driver may have a single crystal semiconductor with a conductive area coupled to the corresponding access line. The single crystal semiconductor can be used to reduce the resistance of the driver and the current leakage in the driver compared with the previous method that usually uses polycrystalline semiconductor (such as polysilicon). For example, the higher resistance and current leakage associated with the use of polycrystalline semiconductors can degrade the performance of the driver and therefore the performance of the memory employing the driver.In some instances, a single crystal semiconductor is formed and then a transfer technique that avoids the formation of a single crystal semiconductor on the surface of the dielectric above the memory array (for example, by using various deposition techniques) is used to transfer the single crystal semiconductor to the surface of the dielectric . For example, it may be difficult to form a single crystal semiconductor on a dielectric.Figure 1 illustrates a portion of a device according to several embodiments of the present disclosure, such as a portion of a memory 100 (e.g., a NAND memory). The memory 100 may include a stacked memory array 106, such as a stacked NAND memory array. 
The array 106 may include a memory cell area 101 and a ladder structure 103 adjacent to the memory cell area 101.The array 106 may include a stack of dielectrics 102 alternating with conductors 104 in the z-direction (eg, the vertical direction) in the reference of FIG. 1. The semiconductor structure 105 (eg, semiconductor pillar) may pass through the stack in the memory cell region 101 in the z direction and terminate at the upper surface of the semiconductor 107 or in the semiconductor 107. The selection transistor 108 may be adjacent to each semiconductor structure 105 at a level corresponding to the uppermost conductor 104, and the selection transistor 109 may be adjacent to each semiconductor structure 105 at a level corresponding to the lowermost conductor 104.The memory cell 110 may be adjacent to each semiconductor structure 105 at a level corresponding to the conductor 104 between the uppermost and lowermost conductors 104. The memory cells 110 at each corresponding level are commonly coupled to the conductor 104 at the corresponding level. For example, the memory cells 110 at a level in the array 106 may be referred to as a level of memory cells, such as a level of memory cells. The memory cells 110 adjacent to the semiconductor structure 105 at different levels may be coupled in series to form a series-coupled memory cell string (for example, a vertical string), such as a NAND string of memory cells.The uppermost and lowermost conductors 104 may be select lines 112 that form the gates of the select transistors 108 and 109 or are coupled to the gates of the select transistors 108 and 109, respectively. The conductor 104 between the uppermost and lowermost conductors 104 may be an access line 114, which may be referred to as a word line and forms the control gate of the memory cell 110 or is coupled to the control gate of the memory cell 110. It should be noted that the memory cells 110 at each corresponding level share the access line 114 coupled to the corresponding level.The stepped structure 103 includes uppermost and lowermost steps 116, which may each include a portion of a corresponding selection line 112 on an adjacent dielectric 102. The corresponding contact 118 is coupled to the corresponding selection line 112 of each corresponding step 116. The corresponding contact point 118 (for example, the vertical contact point) is coupled to the activation circuit system through the corresponding line 120. The data line 122 is coupled to the semiconductor structure 105 through a data line contact 124.In some examples, the stepped structure 103 includes steps 127-1 to 127-N between the uppermost and lowermost steps 116, and the uppermost and lowermost steps 116 may each include a portion of the corresponding access line 114 on the adjacent dielectric 102. The corresponding contact 129 (eg, vertical contact) is coupled to the corresponding access line 114 of each corresponding step 127. For example, a step (e.g., step 127) that includes an access line (e.g., access line 114) may be referred to as an access line step.In some examples, the corresponding contact 129 is coupled to the corresponding string driver 140, which may be a field effect transistor (FET) and located on the ladder structure 103 and therefore the array 106 (eg, above). The corresponding string driver 140 may be various string drivers disclosed herein. 
The string driver may be configured to selectively couple the access line 114 to an access signal to access the memory cells 110 commonly coupled to the access line. For example, the access signal may be a programming signal for programming the memory cell 110, such as a programming voltage.The corresponding string driver 140 may include the stepped structure 103 and therefore the corresponding single crystal semiconductor 130 (e.g., single crystal silicon (Si), single crystal silicon germanium (SiGe), single crystal germanium (Ge), or the like on (e.g., above) on the array 106 Of the single crystal semiconductor). For example, the upper side may be the stepped structure 103 and therefore the array 106, which may be between the string driver 140 and the semiconductor 107. The corresponding string driver 140 may include a gate formed on the corresponding single crystal semiconductor 130 and coupled to the corresponding single crystal semiconductor 130 (not shown in FIG. 1). The corresponding conductive contact 129 may be coupled to a conductive region that may be formed in the corresponding single crystal semiconductor 130, such as a source/drain (not shown in FIG. 1). In some examples, the corresponding single crystal semiconductor 130 may be directly above the corresponding step 127 (for example, vertically above the corresponding step 127 or horizontally aligned with the corresponding step 127) and may be formed on a dielectric (not shown in FIG. 1) (which may be formed On the memory cell area 101 and the ladder structure 103).It should be noted that the single crystal semiconductor 130 is distributed along the x direction and extends along the y direction in the reference of FIG. 1. In some examples, the gate may extend along the x direction and be commonly coupled to single crystal semiconductors 130 distributed along the x direction.As discussed further herein, each single crystal semiconductor 130 may form part of at least one string driver such that the string driver is located above the array. For example, the string driver may include a control gate formed on the corresponding single crystal semiconductor 130 (not shown in FIG. 1). The string driver may be configured to selectively couple the access line 114 to an access signal to access the memory cells 110 commonly coupled to the access line. For example, the access signal may be a programming signal for programming the memory cell 110, such as a programming voltage.In other examples, each corresponding single crystal semiconductor 130 may be replaced by a corresponding wire (eg, wire 120 (not shown in FIG. 1)), and the corresponding wire may be coupled to the corresponding contact 129 so that the corresponding wire may be coupled to the corresponding step 127. The corresponding line coupled to the corresponding step 127 may be coupled to the corresponding string driver (not shown in FIG. 1) that may be formed directly above the memory cell area 101. For example, a string driver may be formed on the data line 122.The array 106 may be divided into blocks 135 of memory cells 110, which may sometimes be referred to as sub-blocks. For example, a block of memory cells may refer to a group of memory cells that are collectively erased. A dielectric (not shown in FIG. 1) may be formed in the opening 137 to electrically isolate the blocks 135 from each other. It should be noted that the blocks 135 are distributed along the y direction in the reference of FIG. 
1.Figure 2 illustrates a portion of a memory 200 (which may be the memory 100) according to several embodiments of the present disclosure. The memory 200 may include a string driver 240 on a stacked memory array 206 (which may be the array 106). The array 206 may be located on the logic circuit system 242, and the logic circuit system 242 may be located on the semiconductor 207. For example, the string driver 240 may be various string drivers disclosed herein. In some examples, there may be additional logic circuitry below the array 206 (eg, below the semiconductor 207) that may facilitate the operation of the memory 200.The string driver 240 may be referred to as a high voltage string driver because the string driver 240 can operate at about 30 volts, and the logic circuit system 242 may be referred to as a low voltage logic circuit system because the logic circuit system 242 can operate at about 3 volts. In some examples, the string driver 242 may include a single crystal semiconductor, such as the single crystal semiconductor 130. The logic circuit system 242 may be coupled to the gate of the string driver 240 to activate the string driver 240. In some examples, the logic circuitry 242 may include complementary metal oxide semiconductor (CMOS) circuitry.Figure 3 illustrates a portion of a memory 300 (which may be the memory 100) according to several embodiments of the present disclosure. The memory 300 may include a string driver 340 on the stacked memory array 306 (which may be the array 106), such as a high voltage string driver. In some examples, the string driver 340 may include a single crystal semiconductor, such as a single crystal semiconductor 130. The logic circuit system 342 (for example, a low-voltage CMOS circuit system) may be located at the same level as the string driver 340 and may be located on the memory array 306. The logic circuit system 342 may be coupled to the control gate of the string driver 340 to activate the string driver 340.4A is a top view of a portion of a memory 400 (which may be various memories described herein) according to several embodiments of the present disclosure. 4B to 4D are various cross-sectional views associated with FIG. 4A according to several embodiments of the present disclosure. Figure 4B is a cross-sectional view in the yz plane viewed along the line 4B-4B in Figure 4A; Figure 4C is a cross-sectional view in the xz plane viewed along the line 4C-4C in Figure 4A; and Figure 4D is a cross-sectional view along the line 4D- in Figure 4A A cross-sectional view in the xz plane viewed in 4D.In FIG. 4A, the blocks 435-1 and 435-2 in the memory cell area 401 correspond to the corresponding ladder structures 403-1 and 403-2, respectively. For example, blocks 435-1 and 435-2 may be coupled to stepped structures 403-1 and 403-2, respectively. The ladder structures 403-1 and 403-2 each include steps 427-(N-2) to 427-N, which respectively include access lines 414-(N-2) to 414-N, as shown in FIGS. 4C and 4D . Each of the corresponding access lines 414-(N-2) to 414-N is located on the corresponding dielectric 402. Each of the corresponding access lines 414-(N-2) to 414-N are commonly coupled to a corresponding level of memory cells in the corresponding block 435.The string drivers 440-(N-2) to 440-N may be directly above the stepped structures 403-1 and 403-2 and may be directly located on the steps 427-( N-2) to above 427-N, as shown for the stepped structure 403-2 in Figure 4D. 
Each of the string drivers 440-(N-2) to 440-N may include a single crystal semiconductor. For example, the string drivers 440-(N-2) to 440-N may include portions of single crystal semiconductors 430-(N-2) to 430-N, respectively.Each string driver 440 may include a corresponding conductive area, such as a corresponding source/drain 444, coupled to the corresponding access line 414 of the corresponding step 427 in its corresponding single crystal semiconductor 430. For example, as shown in FIG. 4C, the corresponding single crystal semiconductors 430-(N-2) to 430-N of the string drivers 440-(N-2) to 440-N include respectively coupled to the access lines 414-(N-2) To source/drain 444-(N-2) to 444-N of 414-N.Each of the respective string drivers 440 may include a respective source/drain 445 in its respective single crystal semiconductor 430, which may be coupled to receive may be selectively coupled to the respective access line in response to activating the respective string driver 440 Access signal. For example, as shown in FIG. 4B, the source/drain 445 may be shared by adjacent string drivers (eg, adjacent string drivers 440-N). Thus, adjacent string drivers can share source/drain 445. It should be noted that the source/drain 445 may be between the stepped structures 403-1 and 403-2 and therefore between the blocks 435-1 and 435-2. In some examples, the string driver 440 may be a field effect transistor (FET).As shown in FIGS. 4A, 4B, and 4D, each of the respective string drivers 440 may include a portion of the common gate 447. For example, the string drivers 440-(N-2) to 440-N of each of the corresponding blocks 435-1 and 435-2 may be commonly coupled to the corresponding gate 447. As shown in FIGS. 4B and 4D, the portion of the corresponding gate 446 may be adjacent to the corresponding gate dielectric 448 (e.g., gate oxide) (e.g., on the corresponding gate dielectric 448), and the gate dielectric 448 may be located on a single crystal. Semiconductors 430-(N-2) to 430-N (for example, in direct physical contact with single crystal semiconductors 430-(N-2) to 430-N) and are composed of single crystal semiconductors 430-(N-2) to 430-N Total. For example, the gate 447 may be coupled to the gate dielectric 448 (e.g., through direct physical contact with the gate dielectric 448).Each of the corresponding string drivers 440 may include a channel region 449 between the source/drain electrodes 444 and 445 in its corresponding single crystal semiconductor 430, as shown in FIG. 4B for the string driver 440-N, the single crystal semiconductor 430 -N and source/drain 444-N and 445 are shown. The gate dielectric 448 may be located on the channel region 449 (eg, and in direct physical contact with the channel region 449). The conductive channel may be formed in the channel region 449 in response to activating the string driver 440.The source/drain electrodes 444 and 445 may be conductively doped to have an N+ conductivity level. In some examples, a portion 450 of each respective single crystal semiconductor 430 is between the channel region 449 and the source/drain 444 (eg, the source/drain 444-N in FIG. 4B). 
The conductive region 451 (e.g., N-conductive implant) can be doped by doping the portion of each corresponding single crystal semiconductor 430 between the channel region 449 and the source/drain 445 to have an N-conductive level (which has a ratio of N+ conduction level with a low conduction level) is formed in the portion.The single crystal semiconductors 430-(N-2) to 430-N are directly above the stepped structures 403-1 and 403-2 and directly on the steps 427-(N-2) to 427 of the stepped structures 403-1 and 403-2, respectively -N above, as shown for the stepped structure 403-2 in Figure 4D. The dielectric 456 (which may be an oxide, a nitride, or the like) may be formed adjacent to each of the stepped structures 403-1 and 403-2 (e.g., those located in the stepped structures 403-1 and 403-2) Each of them), as shown in Figures 4B to 4D. Next, a dielectric 458 (which may be an oxide, nitride, or the like) may be formed on the dielectric 456. Thus, the dielectric 458 may be directly above the stepped structures 403-1 and 403-2, as shown in FIGS. 4B to 4D. In some examples, the dielectric 458 may extend over the memory cell area 401 (not shown in Figures 4A-4D). For example, the dielectric 458 may be located on the data line 122 in FIG. 1 (not shown in FIG. 1).The single crystal semiconductors 430-(N-2) to 430-N are located on and attached to the dielectric 458. For example, the single crystal semiconductors 430-(N-2) to 430-N may be bonded in direct physical contact with the upper surface of the dielectric 458 such that the single crystal semiconductors 430-(N-2) to 430-N are located above the dielectric 458. The gate dielectric 448 is formed on the single crystal semiconductors 430-(N-2) to 430-N (as shown in FIGS. 4B and 4D) so that the gate dielectric 448 is commonly coupled to the single crystal semiconductor 430-(N-2) To 430-N. For example, the gate dielectric 448 may be in direct physical contact with each of the single crystal semiconductors 430-(N-2) to 430-N. It should be noted that the gate dielectric 448 may surround a portion of each of the single crystal semiconductors 430-(N-2) to 430-N to be adjacent to the single crystal semiconductors 430-(N-2) to 430-N. The upper surface and side of each of them.The gate 447 may be adjacent to the gate dielectric 448, as shown in Figures 4B and 4D. The gate 447 is commonly coupled to each of the single crystal semiconductors 430-(N-2) to 430-N through the gate dielectric 448. In some examples, the gate 447 may be coupled to a logic circuit system (eg, logic circuit system 242 or 342) to receive a control signal such as an activation signal to activate a string driver 440 commonly coupled thereto.The corresponding contact 460 may be coupled to each corresponding source/drain 445, for example, to the upper surface of each corresponding source/drain 445. Thus, the contact 460 may be between the steps of the stepped structures 403-1 and 403-2 and therefore between the blocks 435-1 and 435-2. In some examples, the contact 460 may be coupled to receive an access signal.The corresponding (e.g., vertical) contact 464 may pass through each corresponding source/drain 444 (e.g., each of the corresponding source/drain 444-(N-2) to 444-N in FIGS. 4B and 4C). form. 
For example, each corresponding contact 464 may pass through a portion of the dielectric 458 and may be coupled (for example, through direct physical contact) to a corresponding conductor (e.g., a corresponding conductive offset 466) formed on the upper surface of the dielectric 456 (e.g., in direct physical contact with the upper surface of the dielectric 456). A corresponding conductor (e.g., a corresponding conductive plug 468) can couple each corresponding conductive offset 466 to each of the corresponding access lines 414-(N-2) to 414-N. For example, the corresponding (e.g., vertical) conductive plug 468 can be coupled to the corresponding access line 414 and the corresponding conductive offset 466 (e.g., by direct physical contact with the corresponding access line 414 and the corresponding conductive offset 466) and can pass through the dielectric 456. It should be noted that the corresponding conductive offset 466 may be a lateral offset that extends laterally with respect to the z-direction (for example, along the x-direction) from the corresponding contact 464 to the corresponding conductive plug 468 on the upper surface of the dielectric 456, so that the corresponding contact 464 can be offset laterally from the corresponding conductive plug 468. In some examples, the corresponding contact 464, the corresponding conductive offset 466, and the corresponding conductive plug 468 may collectively be referred to as a corresponding conductor that can couple the corresponding source/drain 444 to the corresponding access line 414 and therefore to the corresponding step 427.

FIGS. 5A to 5C are various views corresponding to specific processing stages associated with forming a memory according to several embodiments of the present disclosure. In some examples, the process described in conjunction with FIGS. 5A to 5C may be referred to as a transfer technique, during which a single crystal semiconductor (such as single crystal silicon) may be formed and subsequently transferred to the surface of a dielectric. For example, it may be difficult to form a single crystal semiconductor in contact with a dielectric (for example, using various deposition techniques).

In FIG. 5A, hydrogen (H2) is implanted in the single crystal bulk semiconductor 530 to form a hydrogen implant 570 in the single crystal bulk semiconductor 530. In FIG. 5B, the single crystal bulk semiconductor 530 containing the hydrogen implant 570 is coupled (e.g., attached) to the dielectric 558 (which can be the dielectric 458) formed on the stepped structure 503 (which may be the stepped structure 103, 403-1, or 403-2). For example, the single crystal bulk semiconductor 530 may be inverted and then attached to the dielectric 558 by bonding the single crystal bulk semiconductor 530 in direct physical contact with the upper surface of the dielectric 558.

After bonding the single crystal bulk semiconductor 530 to the dielectric 558, the structure in FIG. 5B is annealed (e.g., at about 400°C) to remove the hydrogen and create a relatively fragile (e.g., brittle) region at the site from which the hydrogen is removed. In FIG. 5C, the single crystal bulk semiconductor 530 is split at the fragile region so that a part of the single crystal bulk semiconductor 530 remains bonded to the dielectric 558.
It should be noted that it may be difficult to form a single crystal semiconductor in contact with the dielectric, and for this reason, for example, the single crystal semiconductor 530 is formed according to the process described in FIGS. 5A to 5C and then bonded to the dielectric 558.

FIGS. 6A to 6I are various views corresponding to specific processing stages associated with forming a memory according to several embodiments of the present disclosure. FIG. 6A may be a cross-section in the x-z plane or the y-z plane corresponding to a specific processing stage. In some examples, a processing stage may include several steps that may have several sub-steps.

In FIG. 6A, a stacked memory array 606 is formed, which can be the various memory arrays disclosed herein. A dielectric 658, which may be the dielectric 458 or 558, may be formed over the memory array 606. A single crystal semiconductor 629 (for example, single crystal silicon) (which may be the single crystal semiconductor 530) may be attached to the upper surface of the dielectric 658 (for example, as previously described in conjunction with FIGS. 5A to 5C), such that the single crystal semiconductor 629 is located above the upper surface of the dielectric 658 (e.g., and in direct physical contact with the upper surface of the dielectric 658). For example, the single crystal semiconductor 629 may be formed and then transferred to the upper surface of the dielectric 658 using the transfer technique described in conjunction with FIGS. 5A to 5C to avoid the difficulties associated with forming the single crystal semiconductor 629 directly on the upper surface of the dielectric 658.

FIG. 6B is a cross-section in the x-z plane corresponding to a specific processing stage after the processing stage of FIG. 6A. For example, a mask (e.g., photoresist) may be formed on the semiconductor 629 in FIG. 6A and the mask may be patterned to expose portions of the semiconductor 629 for removal. Subsequently, the exposed portions can be removed (for example, by an etch that stops at the upper surface of the dielectric 658) to form single crystal semiconductor segments 630-(N-2) to 630-N.

FIG. 6C is a cross-section in the x-z plane corresponding to a specific processing stage after the processing stage of FIG. 6B. FIG. 6D is a cross-section in the y-z plane, viewed along any of the lines D-D in FIG. 6C, corresponding to the specific processing stage of FIG. 6C. Thus, the element symbol 630 may be used in the y-z plane of FIG. 6D and subsequent views to refer generally to each or any of the single crystal semiconductor segments 630-(N-2) to 630-N. For example, the structures in FIGS. 6C and 6D can be formed at the same time.

In FIGS. 6C and 6D, a dielectric, such as a gate dielectric 648 (which may be the gate dielectric 448), is simultaneously formed on the structures of FIGS. 6C and 6D. For example, the gate dielectric 648 may be formed on each of the single crystal semiconductor segments 630-(N-2) to 630-N and may surround a portion of each of the single crystal semiconductor segments 630-(N-2) to 630-N so as to be adjacent to the upper surface and sides of each of the single crystal semiconductor segments 630-(N-2) to 630-N.

Next, a conductor 672 (for example, polysilicon) is simultaneously formed on the gate dielectric 648 of FIGS. 6C and 6D (for example, in direct physical contact with the gate dielectric 648), so that the conductor 672 surrounds a portion of each of the single crystal semiconductor segments 630-(N-2) to 630-N.
For example, the conductor 672 may be adjacent to the upper surface and sides of the gate dielectric 648, which is adjacent to the upper surfaces and sides of the semiconductor segments 630-(N-2) to 630-N. Next, a conductor 673 (for example, metal) is simultaneously formed on the conductor 672 of FIGS. 6C and 6D (for example, in direct physical contact with the conductor 672), so that the conductor 673 surrounds a portion of each of the single crystal semiconductor segments 630-(N-2) to 630-N. For example, the conductor 673 may be adjacent to the upper surface and sides of the conductor 672, and the conductor 672 is adjacent to the upper surface and sides of the gate dielectric 648. In some examples, the conductor 672 and the conductor 673 may jointly form the gate 647, which may be the gate 447.

Next, a dielectric 674, which may be different from the dielectric 658, is simultaneously formed on the conductor 673 of FIGS. 6C and 6D (for example, in direct physical contact with the conductor 673), so that the dielectric 674 surrounds a portion of each of the semiconductor segments 630-(N-2) to 630-N. For example, the dielectric 674 may be adjacent to the upper surface and sides of the conductor 673, and the conductor 673 is adjacent to the upper surface and sides of the conductor 672. In some examples, the dielectric 674 may be a nitride when the dielectric 658 is an oxide, and an oxide when the dielectric 658 is a nitride.

FIG. 6E is a cross-section in the y-z plane, viewed along any of the lines D-D in FIG. 6C, which corresponds to a specific processing stage after the processing stage corresponding to FIGS. 6C and 6D. For example, a mask (e.g., photoresist) may be formed on the dielectric 674 in FIG. 6D and the mask may be patterned to expose portions of the dielectric 674, the conductor 673, and the conductor 672 for removal. Subsequently, the exposed portions of the dielectric 674, the conductor 673, and the conductor 672 can be removed (e.g., by an etch that stops in the gate dielectric 648) to leave some of the gate dielectric 648 on the single crystal semiconductor segment 630.

The removal process forms a stack 675 on the single crystal semiconductor segment 630 that includes the gate dielectric 648, the conductor 672 on the gate dielectric 648, the conductor 673 on the conductor 672, and the dielectric 674 on the conductor 673. Subsequently, a dielectric spacer 677 is formed on the (for example, vertical) sides of the stack 675. For example, the dielectric spacer 677 may be formed on the (e.g., vertical) sides of the dielectric 674, the conductor 673, and the conductor 672 and on a portion of the gate dielectric 648. In some examples, the dielectric spacer 677 may be the same dielectric as the dielectric 674. The spacer 677 may facilitate the formation of a self-aligned conductive implant in the single crystal semiconductor segment 630 in a subsequent processing stage.

FIG. 6F is a cross-section in the y-z plane, viewed along any of the lines D-D in FIG. 6C, which corresponds to a specific processing stage after the processing stage corresponding to FIG. 6E. In FIG. 6F, the dielectric 674 and the dielectric spacer 677 act as a mask to protect the stack 675 while the unprotected portion of the gate dielectric 648 is removed from the single crystal semiconductor segment 630. Subsequently, a conductive region 651 (for example, an N- conductive implant) (which may be the conductive region 451) is implanted in the single crystal semiconductor segment 630. For example, the conductive region 651 may be self-aligned as a result of the spacer 677.
FIG. 6G is a cross-section in the y-z plane, viewed along any of the lines D-D in FIG. 6C, which corresponds to a specific processing stage after the processing stage corresponding to FIG. 6F. In FIG. 6G, a mask element 679 (e.g., photoresist) is formed on the stack 675 and on portions of the conductive region 651. Subsequently, a source/drain 644 and a source/drain 645 (for example, N+ source/drains) (which may be the source/drain 444 and the source/drain 445) are implanted into the portions of the conductive region 651 not covered by the mask element 679 and extend into the portions of the single crystal semiconductor segment 630 under the portions of the conductive region 651 not covered by the mask element 679. The channel region 649 (which may be the channel region 449) may be between the portions of the conductive region 651 covered by the mask element 679 and therefore between the source/drain 644 and the source/drain 645.

The adjacent string drivers 640 (which may be string drivers 440) in FIG. 6G may each include a corresponding portion of the single crystal semiconductor segment 630 (which includes a corresponding source/drain 644 and the shared source/drain 645) and the stack 675 located directly on the corresponding channel region 649. Each corresponding string driver 640 may include a corresponding conductive region 651 between the corresponding channel region 649 and the corresponding source/drain 644 and a corresponding conductive region 651 between the corresponding channel region 649 and the source/drain 645.

FIG. 6H is a cross-section in the x-z plane corresponding to a specific processing stage after the processing stage of FIG. 6G. FIG. 6I is a cross-section in the y-z plane, viewed along any of the lines I-I in FIG. 6H, corresponding to the specific processing stage of FIG. 6H. Thus, the element symbol 630 may be used in FIG. 6I to refer generally to each or any of the single crystal semiconductor segments 630-(N-2) to 630-N. For example, the structures in FIGS. 6H and 6I can be formed at the same time.

A dielectric 681, such as a spin-on dielectric, may be simultaneously formed on the dielectric 674 in FIG. 6H and on the string drivers 640 in FIG. 6I. Subsequently, a portion of the dielectric 681 may be removed, for example, by chemical mechanical planarization (CMP), so that the upper surface of the dielectric 681 and the uppermost surface of the dielectric 674 are coplanar.

Then, a dielectric 683, such as tetraethyl orthosilicate (TEOS), oxide, or the like, may be formed on the upper surface of the dielectric 681 and the uppermost surface of the dielectric 674. A mask (not shown) may be formed on the dielectric 683 and the mask may be patterned to expose portions of the dielectric 683 and the dielectric 681 for removal. Subsequently, the exposed portions can be removed (e.g., by etching) to form openings that can stop at or in the conductor 673 and at or in the source/drain 645.

A conductive contact 660 (which can be the contact 460) can be formed in an opening that stops at or in the source/drain 645, so that the contact 660 is in direct physical contact with the source/drain 645. A conductive contact 684 may be formed in an opening that stops at or in the conductor 673 so that the contact 684 is in direct physical contact with the conductor 673. Then, conductive lines 685 and 686 that are in direct physical contact with the contacts 660 and 684, respectively, can be formed on the dielectric 683.
The conductive line 685 may be coupled to circuitry that is configured to supply an access signal to the string driver 640 via the source/drain 645. The conductive line 686 may be coupled to logic circuitry (e.g., logic circuitry 242 or 342) that is configured to supply control signals to the conductor 673, and therefore to the gate 647, to activate the string drivers 640 commonly coupled to it. In some examples, the source/drain 644 may be coupled to the access lines of the steps of the corresponding stepped structure, as previously described in connection with FIGS. 4B and 4C. It should be noted that FIG. 6H may correspond to FIG. 4D, and FIG. 6I may correspond to FIG. 4B.

FIG. 7A is a top view of a portion of a memory 700A (which may be various memories disclosed herein, such as the memory 100) according to several embodiments of the present disclosure. FIG. 7B is a top view of a portion of a memory 700B (which may be various memories disclosed herein) according to several embodiments of the present disclosure. FIG. 7C is a cross-section in the x-z plane viewed along any of the lines 7C-7C in FIGS. 7A and 7B.

The memories 700A and 700B respectively include string drivers 740A and 740B that can be directly above a stepped structure (for example, the corresponding stepped structures 403-1 and 403-2 of the blocks 435-1 and 435-2). One of the string drivers 740A or one of the string drivers 740B may be directly above and coupled to a step of a corresponding stepped structure (for example, the stepped structure 403-1), and another of the string drivers 740A or another of the string drivers 740B may be directly above and coupled to a step of another corresponding stepped structure (for example, the stepped structure 403-2).

The string drivers 740A may respectively include corresponding groups of single crystal semiconductor fins 788A (e.g., single crystal silicon fins). A corresponding gate 747 may be located on each corresponding group of the fins 788A. For example, the corresponding portion of the corresponding group of single crystal semiconductor fins 788A covered by the corresponding gate 747 may be the corresponding channel region 749.

Each corresponding string driver 740A may include a corresponding source/drain 744A (e.g., an N+ source/drain), which may be similar to the source/drain 444 and may be coupled to a step of the corresponding stepped structure. For example, a corresponding contact 790 may couple each corresponding source/drain 744A to the step of the corresponding stepped structure. It should be noted that the corresponding contact 790 may be located below its corresponding source/drain 744A.

A source/drain 745A (e.g., an N+ source/drain), which can be similar to the source/drain 445 and can be shared (e.g., common) by the corresponding string drivers 740A, can be between the corresponding groups of fins 788A. A contact 792 can couple the source/drain 745A to circuitry that is configured to supply an access signal to the source/drain 745A, and thus to the corresponding step coupled to the corresponding string driver 740A, after activating the corresponding string driver 740A. It should be noted that the contact 792 can be located above the source/drain 745A.

In some examples, a corresponding conductive region 793A (e.g., an N- region) may be between the corresponding gate 747 and the corresponding source/drain 744A.
For example, the corresponding conductive region 793A may be formed by conductively doping (e.g., to N- conductivity) the portions of the fins 788A in the corresponding region 793A. In some examples, a corresponding conductive region 794A (e.g., an N- region) may be between the corresponding gate 747 and the source/drain 745A. For example, the corresponding conductive region 794A may be formed by conductively doping (e.g., to N- conductivity) the portions of the fins 788A in the corresponding region 794A.

In FIG. 7B, a group of single crystal semiconductor fins 788B is formed in a single crystal semiconductor 730B (which may be the single crystal semiconductor 430, the single crystal semiconductor 530, or the single crystal semiconductor segment 630). The string drivers 740B may respectively include corresponding portions of the group of single crystal semiconductor fins 788B. For example, the group of single crystal semiconductor fins 788B can be shared by the string drivers 740B. The corresponding gate 747 of the corresponding string driver 740B may be located on the corresponding portion of the group of single crystal semiconductor fins 788B. For example, the corresponding portion of the single crystal semiconductor fins 788B covered by the corresponding gate 747 may be the corresponding channel region 749.

Each corresponding string driver 740B may include a corresponding source/drain 744B (e.g., an N+ source/drain), which may be similar to the source/drain 444 and may be coupled to a step of the corresponding stepped structure. For example, each respective source/drain 744B may include a corresponding portion of the group of fins 788B, such that the corresponding portion of the group of fins 788B is conductively doped (e.g., to N+ conductivity). A corresponding contact 790 can couple the corresponding source/drain 744B to the step of the corresponding stepped structure. It should be noted that the corresponding contact 790 may be located below its corresponding source/drain 744B.

A source/drain 745B (e.g., an N+ source/drain), which may be similar to the source/drain 445 and shared (e.g., common) by the corresponding string drivers 740B, may be between the corresponding control gates 746. A contact 792 may couple the source/drain 745B to circuitry that is configured to supply an access signal to the source/drain 745B, and thus to the corresponding step coupled to the corresponding string driver 740B, after activating the corresponding string driver 740B. For example, the source/drain 745B may include a corresponding portion of the group of fins 788B such that the corresponding portion of the group of fins 788B is conductively doped (e.g., to N+ conductivity). It should be noted that the contact 792 can be located above the source/drain 745B.

In some examples, a corresponding conductive region 793B (e.g., an N- region) may be between the corresponding gate 747 and the corresponding source/drain 744B. For example, the corresponding conductive region 793B may be formed by conductively doping (e.g., to N- conductivity) the portions of the fins 788B in the corresponding region 793B. In some examples, a corresponding conductive region 794B (e.g., an N- region) may be between the corresponding gate 747 and the source/drain 745B. For example, the corresponding conductive region 794B may be formed by conductively doping (e.g., to N- conductivity) the portions of the fins 788B in the corresponding region 794B.

In FIG. 7C, the single crystal semiconductors 730A and 730B and the fins 788A and 788B of FIGS.
7A and 7B are referred to generally as the single crystal semiconductor 730 and the fin 788, respectively.

In FIG. 7C, a dielectric 758 (which may be the dielectric 458 or the dielectric 658) may be located above the memory array 706 (which may be the various memory arrays described herein). For example, the dielectric 758 may be directly above a stepped structure (such as the stepped structure 103, 403-1, or 403-2) and may extend above the memory cell region of the array 706 (which may be the various memory cell regions disclosed herein).

A dielectric 796 (which may be an oxide) may be formed on the dielectric 758 (e.g., in direct physical contact with the dielectric 758). The single crystal semiconductor 730 may be located above the dielectric 796, and thus may be directly above the stepped structure or the memory cell region. In some examples, the single crystal semiconductor 730 may be attached to the upper surface of the dielectric 796, as previously described in connection with FIGS. 5A to 5C. The fin 788 may be formed from the single crystal semiconductor 730 such that the fin 788 extends from the upper surface of the dielectric 796.

A corresponding gate dielectric 748 (which may be the gate dielectric 448 or 648) may be formed around a portion of the corresponding fin 788. For example, the corresponding gate dielectric 748 may be in direct physical contact with the corresponding fin 788 and may be adjacent to the top and sides of the corresponding fin 788. The gate 747 may be formed on the gate dielectric 748 (e.g., and in direct physical contact with the gate dielectric 748).

The gate 747 may be adjacent to the top and sides of the corresponding gate dielectric 748. This can increase the capacitive coupling area between the gate 747 and the fin 788 compared to the capacitive coupling area between a planar gate and a planar single crystal semiconductor. Thus, for the same capacitive coupling area, the fin structure can occupy less space along the x-direction than a planar structure, thereby allowing a higher string driver density (more string drivers) above the array 706.

FIG. 8A is a top view of a portion of a memory 800 (which may be various memories disclosed herein) according to several embodiments of the present disclosure. FIGS. 8B and 8C are various cross-sectional views associated with FIG. 8A according to several embodiments of the present disclosure. FIG. 8B is a cross-section in the x-z plane viewed along any of the lines 8B-8B in FIG. 8A. FIG. 8C is a cross-section in the x-z plane viewed along any of the lines 8C-8C in FIG. 8A.

The memory 800 includes corresponding groups of string drivers 840-(N-2) to 840-N that can be located directly above stepped structures (for example, the corresponding stepped structures 403-1 and 403-2 of the blocks 435-1 and 435-2), respectively. For example, the string drivers 840-(N-2) to 840-N may replace the string drivers 440-(N-2) to 440-N, respectively. The string drivers 840-(N-2) to 840-N of the corresponding group can be located directly above the steps 827-(N-2) to 827-N of the corresponding stepped structure (as shown in FIGS. 8B and 8C) and respectively coupled to the steps 827-(N-2) to 827-N.
It should be noted that the steps 827-(N-2) to 827-N may respectively include access lines 814-(N-2) to 814-N, which may be the access lines 414-(N-2) to 414-N and may be respectively located above a dielectric 802 (which may be the dielectric 102 or 402).

The string driver 840 from each group may include a corresponding portion of a single crystal semiconductor fin 830. For example, the string driver 840-(N-2) from each group may include a corresponding portion of the fin 830-(N-2); the string driver 840-(N-1) from each group may include a corresponding portion of the fin 830-(N-1); and the string driver 840-N from each group may include a corresponding portion of the fin 830-N. In some examples, the fins 830-(N-2) to 830-N may replace the single crystal semiconductors 430-(N-2) to 430-N, respectively.

Each of the string drivers 840 from each group may include a corresponding source/drain 844 (e.g., an N+ source/drain), which may be similar to the source/drain 444 and may be coupled to the corresponding step of the corresponding stepped structure. For example, the corresponding source/drain 844 of the corresponding string driver 840 may be formed in the corresponding portion of the corresponding fin 830. A corresponding contact 890 can couple each corresponding source/drain 844 to the corresponding step. For example, the source/drains 844 located in the fins 830-(N-2) to 830-N can be respectively coupled to the access lines 814-(N-2) to 814-N through the contacts 890, as shown in FIG. 8C. It should be noted that the corresponding contact 890 can pass through its corresponding source/drain 844.

A source/drain 845 (e.g., an N+ source/drain) (which may be similar to the source/drain 445) may be formed in each fin 830 between the corresponding string drivers that correspond to that fin 830. For example, the source/drain 845 in the fin 830-(N-2) may be between the string drivers 840-(N-2) and shared by the string drivers 840-(N-2); the source/drain 845 in the fin 830-(N-1) may be between the string drivers 840-(N-1) and shared by the string drivers 840-(N-1); and the source/drain 845 in the fin 830-N may be between the string drivers 840-N and shared by the string drivers 840-N. Corresponding contacts 892 can couple each corresponding source/drain 845 to circuitry that is configured to supply access signals to the corresponding source/drain 845, after activating the corresponding string drivers 840, and thus to the corresponding steps 827 of the corresponding string drivers 840 that share the corresponding source/drain 845. It should be noted that the contact 892 may be located above its corresponding source/drain 845.

A respective gate 847 (which may be the gate 447) may be commonly coupled to each string driver 840. The corresponding portions of the fins 830-(N-2) to 830-N covered by the corresponding gates 847 may be the corresponding channel regions 849 of the corresponding string drivers of the corresponding group. In some examples, the corresponding gate 847 may be coupled to receive a control signal for activating the string drivers 840 coupled to the corresponding gate 847. It should be noted that the string drivers 840 may be finFETs.

A corresponding conductive region 850 (e.g., an N- region) (which may be similar to the conductive region 450) may be formed in each corresponding fin 830 between the gate 847 and the source/drain 844.
A corresponding conductive region 851 (e.g., an N- region) (which may be similar to the conductive region 451) may be formed in each corresponding fin 830 between the gate 847 and the source/drain 845.

In FIGS. 8B and 8C, a dielectric 858 (which may be the dielectric 458 or the dielectric 658) may be directly above the stepped structure 803 (which may be a part of the stepped structure 103 or the stepped structure 403). For example, the dielectric 858 may be located above a dielectric 856 (which may be the dielectric 456 and may be located on the stepped structure 803). A dielectric 896 (which may be an oxide) may be formed on the dielectric 858 (e.g., in direct physical contact with the dielectric 858). The fin 830 may be formed from a single crystal semiconductor attached to the upper surface of the dielectric 896, as previously described in connection with FIGS. 5A to 5C. The fin 830 may extend from the upper surface of the dielectric 896.

A corresponding gate dielectric 848 (which may be the gate dielectric 448, 648, or 748) may be formed around a portion of the corresponding fin 830. For example, the corresponding gate dielectric 848 may be in direct physical contact with the corresponding fin 830 and may be adjacent to the top and sides of the corresponding fin 830. The gate 847 may be formed on the gate dielectric 848 (e.g., and in direct physical contact with the gate dielectric 848). The gate 847 may be adjacent to the top and sides of the corresponding gate dielectric 848. This can increase the capacitive coupling area between the gate 847 and the fin 830 compared to the capacitive coupling area between a planar control gate and a planar single crystal semiconductor. This allows a higher string driver density, so that a corresponding string driver can be directly above each corresponding step 827 and coupled to each corresponding step through a corresponding contact 890, as shown in FIG. 8C. For example, the corresponding contact 890 may pass through its corresponding source/drain 844.

FIG. 9 is a cross-sectional view in the x-z plane of a portion of a memory 900 (which may be a part of various memories disclosed herein) according to several embodiments of the present disclosure. The memory 900 may include a stacked memory array 906, which may be part of the stacked memory array 106, for example. The array 906 may include a memory cell region 901 (which may be a part of the memory cell region 101) and a stepped structure 903 (which may be a part of the stepped structure 103) adjacent to the memory cell region 901. A group of string drivers 940-1 to 940-N may be directly above the stepped structure 903. For example, the string drivers 940 may be various string drivers disclosed herein.

The stepped structure 903 may include steps 927-1 to 927-N that may be between the uppermost step 916 and the lowermost step 916. The array 906 may include a (e.g., vertical) stack of access lines 914-1 to 914-N along the z-direction, such that the steps 927-1 to 927-N include the access lines 914-1 to 914-N, respectively. Each step 927 may include a corresponding access line 914 on a corresponding dielectric 902.
The uppermost step 916 may include the upper select line 912 on the dielectric 902, and the lowermost step 916 may include the lower select line 912 on the dielectric 902, which may be located on the semiconductor 907 (which may be the semiconductor 107).

The string drivers 940-1 to 940-N may be located directly above the access lines 914-1 to 914-N and coupled to the access lines 914-1 to 914-N, respectively. In some examples, the string drivers 940-1 to 940-N may include single crystal semiconductors 930-1 to 930-N, which may be the single crystal semiconductors 430, the single crystal semiconductor 530, the single crystal semiconductor segments 630, the fin-type single crystal semiconductor 730A, the fin-type single crystal semiconductor 730B, or the single crystal semiconductor fins 830.

The string drivers 940-1 to 940-N, and therefore the single crystal semiconductors 930-1 to 930-N, may be located on a dielectric 958, which may be the dielectric 458, 658, 758, or 858 and may be located on the memory cell region 901 and the stepped structure 903 and therefore on the array 906. For example, the dielectric 958 may be located on a dielectric 956, and the dielectric 956 may be the dielectric 456 or 856 and may be located on the memory cell region 901 and the stepped structure 903. The single crystal semiconductors 930-1 to 930-N are coupled to the steps 927-1 to 927-N through contacts 929-1 to 929-N, respectively.

The access lines 914-1 to 914-N may be coupled to memory cells 910-1 to 910-N, respectively. The memory cells 910-1 to 910-N may be coupled in series to form a string of series-coupled memory cells that may be adjacent to the semiconductor structure 905 (which may be the semiconductor structure 105) (e.g., which may pass vertically through the memory cell region 901). The string may be between the select transistor 908 and the select transistor 909. For example, the select transistor 908 may be located at the intersection of the upper select line 912 and the semiconductor structure 905, and the select transistor 909 may be located at the intersection of the lower select line 912 and the semiconductor structure 905.

Each of the memory cells 910-1 to 910-N may, for example, include a charge storage structure 9101, such as a charge trap or a floating gate, at the intersection of the semiconductor structure 905 and the corresponding access line 914. Each of the memory cells 910-1 to 910-N may include a dielectric 9103, such as a blocking dielectric, which may be between the corresponding access line 914 and the corresponding charge storage structure 9101. For example, the dielectric 9103 of the memory cell 910-i may be between the access line 914-i and the charge storage structure 9101 of the memory cell 910-i.

Each of the memory cells 910-1 to 910-N may include a dielectric 9105, such as a tunneling dielectric, which may be interposed between the corresponding charge storage structure 9101 and the semiconductor structure 905. For example, the dielectric 9105 of the memory cell 910-i may be between the charge storage structure 9101 of the memory cell 910-i and the semiconductor structure 905. For example, the dielectric 9103, the charge storage structure 9101, and the dielectric 9105 may completely surround the semiconductor structure 905 and may be located at the intersection of the access line 914 and the semiconductor structure 905.

The select transistor 909 may include a control gate that may be included in the lower select line 912.
The dielectric 9108 (e.g., a gate dielectric) of the select transistor 909 may be between the lower select line 912 and the semiconductor structure 905. For example, the lower select line 912 and the dielectric 9108, and therefore the select transistor 909, can completely surround the semiconductor structure 905. The select transistor 908 may include a control gate that may be included in the upper select line 912. The dielectric 9110 (e.g., a gate dielectric) of the select transistor 908 may be between the upper select line 912 and the semiconductor structure 905. For example, the upper select line 912 and the dielectric 9110, and therefore the select transistor 908, may completely surround the semiconductor structure 905. The data line 922 may be coupled to an end of the semiconductor structure 905, and therefore to the select transistor 908, through the contact 924, for example.

FIG. 10 is a block diagram of a device in the form of a computing system 10120 according to several embodiments of the present disclosure. The computing system 10120 includes a memory system 10122, which may be, for example, a storage system such as an SSD, a UFS device, an eMMC device, and so on. However, the embodiments are not limited to a specific type of memory system. For example, the memory system 10122 may serve as the main memory of the system 10120.

As shown in FIG. 10, the memory system 10122 may include a controller 10125, which may be referred to as a memory system controller in that the controller 10125 may control a memory 10128, which may be various memories disclosed herein. The controller 10125 is coupled to a host 10130 and to the memory 10128. For example, the memory 10128 may include several memory devices (such as dies, chips, etc.) and may provide the memory (such as main memory) and/or the storage capacity of the computing system 10120.

The memory 10128 may be coupled to the controller 10125 via an interface 10133 (such as a memory interface). The interface 10133 may include a data bus and may support various standards and/or conform to various interface types (such as double data rate (DDR), etc.). The controller 10125 can receive commands from the host 10130, such as read and write commands. For example, the controller 10125 may receive, from the host 10130 via a host interface 10137, host data to be written to the memory system 10122. As used herein, the memory system 10122, the controller 10125, the memory 10128, or the controller 10140 can also be individually regarded as "devices."

The host 10130 may be, for example, a host system, such as a personal laptop computer, a desktop computer, a digital camera, a mobile device (such as a cellular phone), a web server, an Internet of Things (IoT) enabled device, or a memory card reader, among various other types of hosts. For example, the host 10130 may include one or more processors capable of accessing the memory 10128 (e.g., via the controller 10125) over the interface 10137, which may include a bus. The interface 10137 may be a standardized interface, such as Serial Advanced Technology Attachment (SATA), Peripheral Component Interconnect Express (PCIe), or Universal Serial Bus (USB), among various other interfaces.

The memory 10128 may include a plurality of memory arrays 1006 (for example, collectively referred to as the array 1006) and a controller 10140 (which may be referred to as an embedded controller). In some examples, the array 1006 may be a stacked memory array (e.g., a 3D NAND array), which may be the array 106 or 906.
String drivers (such as the various string drivers disclosed herein) may be located above the memory arrays 1006. For example, a memory array 1006 may include a stepped structure, and the steps of the stepped structure may be respectively commonly coupled to corresponding levels of non-volatile memory cells in the memory array 1006. The corresponding string drivers above the memory array 1006 may include corresponding single crystal semiconductor structures respectively coupled to the steps.

The controller 10140 can be located inside the memory 10128 and can receive commands (such as write commands, read commands, etc.) from the controller 10125 via the memory interface 10133. The controller 10140 may include a state machine and/or a sequencer. The controller 10140 may be configured to control the operation of the memory 10128.

In the above detailed description, reference is made to the accompanying drawings, which form a part of the present disclosure and in which specific examples are shown by way of illustration. In the drawings, like reference symbols describe substantially similar components throughout the several views. Other examples may be utilized, and structural, logical, and/or electrical changes may be made, without departing from the scope of the present disclosure.

The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. For example, 130 may reference element "30" in FIG. 1, and a similar element in FIG. 4A may be referenced as 430. It should be understood that elements shown in the various embodiments herein can be added, exchanged, and/or eliminated to provide a number of additional embodiments of the present disclosure. In addition, it should be understood that the proportions and relative scales of the elements provided in the figures are intended to illustrate the embodiments of the present disclosure and should not be taken as limiting.

As used herein, "a number of" or "an amount of" something can refer to one or more of such things. For example, a number of or an amount of memory cells can refer to one or more memory cells. "Multiple of" something means two or more of such things. As used herein, multiple acts being performed at the same time refers to acts that at least partially overlap over a certain period of time. As used herein, the term "coupled" may include electrically coupled, directly coupled, and/or directly connected with no intervening elements (e.g., by direct physical contact), coupled and/or connected with intervening elements, or wirelessly coupled. The term "coupled" may further include two or more elements that cooperate or interact with each other (e.g., as in a cause-and-effect relationship).

Although specific examples have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It should be understood that the above description has been made in an illustrative fashion, and not a restrictive one. The scope of one or more examples of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. |
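As a side note on the capacitive-coupling argument made above for the fin-based string drivers (FIGS. 7C and 8B), the area advantage can be made concrete with a simple tri-gate approximation. The rectangular-fin idealization and the symbols W_planar, W_fin, H_fin, and L_g below are illustrative assumptions, not language from the disclosure:

$$A_{\text{planar}} \propto W_{\text{planar}}\,L_g, \qquad A_{\text{fin}} \propto \left(W_{\text{fin}} + 2H_{\text{fin}}\right)L_g$$

since the gate wraps the top and both sides of the fin. Matching the planar coupling area therefore requires only a lateral footprint of $W_{\text{fin}} = W_{\text{planar}} - 2H_{\text{fin}}$ per device, which is why taller fins permit a tighter string driver pitch along the x-direction above the array.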
An improved MOS transistor and method for making it are described. The MOS transistor's source and drain have a first conductivity type and are separated from each other by a first region having a second conductivity type opposite to the first conductivity type. A second region, also having the second conductivity type, is formed adjacent to the drain and is separated from the first region by the drain. |
What is claimed is: 1. A method of forming an MOS transistor comprising: defining a gate electrode above a first region that lies between a source region and a drain region; forming a source and a drain, which have a first conductivity type, that are separated from each other by the first region, which has a second conductivity type opposite to the first conductivity type; and forming a second region, also having the second conductivity type, which is formed adjacent to the drain and is separated from the first region by the drain. 2. The method of claim 1 wherein the second region is formed by masking the source region, then depositing n-type impurities into the unmasked drain region. 3. The method of claim 2 wherein arsenic is implanted into the unmasked drain region followed by implanting boron into the unmasked drain region, then applying a sufficient amount of heat for a sufficient amount of time to cause the boron to form a P+ drain that separates an arsenic containing N+ second region from an N- first region. 4. A method of forming a high power PMOS transistor comprising: defining a gate electrode above an N- first region that lies between a source region and a drain region; forming a P+ source and a P+ drain separated from each other by the N- first region; and forming an N+ second region adjacent to the P+ drain and separated from the N- first region by the P+ drain. 5. The method of claim 4 wherein the N+ second region is formed by depositing then etching a polysilicon layer to define the gate electrode; forming spacers on the sides of the etched polysilicon layer to cover part of the source region and part of the drain region; masking the source region; then implanting arsenic into the unmasked drain region to form the N+ second region, wherein the spacer, which covers part of the drain region, separates the implanted arsenic from the edge of the polysilicon layer. 6. The method of claim 5 further comprising implanting boron into the unmasked drain region, after implanting the arsenic, followed by applying a sufficient amount of heat for a sufficient amount of time to cause the boron to form the P+ drain that separates the N+ second region from the N- first region. 7. The method of claim 6 wherein the boron containing P+ drain encloses the arsenic containing N+ second region and forms the base of an NPN bipolar device, and wherein the N+/P+ junction that separates the P+ drain from the N+ second region lies beneath the spacer, which covers part of the drain region. 8. The method of claim 7 wherein the N+ second region is separated from the N- first region by between about 500 and about 1,000 angstroms. 9. A method for making a PMOS transistor, having a gate electrode, a source and a drain, comprising forming a region having n-type conductivity adjacent to the drain, which is separated from an n-well by the drain. |
This is a Divisional Application of Ser. No. 09/374,057 filed Aug. 12, 1999, now U.S. Pat. No. 6,177,705.

FIELD OF THE INVENTION

The present invention relates to semiconductor devices and a method for making them.

BACKGROUND OF THE INVENTION

A personal computer's microprocessor may operate at a substantially lower voltage than the voltage at which the memory controller operates. In such a system, when signals are transmitted between the microprocessor and the memory controller, the input nodes of the microprocessor may be exposed to the higher voltage and the output nodes may be required to support that higher voltage. In addition, the output nodes may be required to source relatively high currents, when signals are to be transmitted from the microprocessor to the bus.

Certain features are currently added to a microprocessor, when it will be exposed to such voltages and when it must source such currents. To protect the microprocessor, when exposed to relatively high voltages, the microprocessor must include special circuitry. In essence, such special circuitry steps the voltage applied to the microprocessor's input nodes down to the microprocessor's operating voltage. To enable the output nodes to provide high currents, those nodes generally comprise wide devices. Adding such design features to the device requires setting aside relatively large chip areas, which are used for those purposes, and requires a relatively complex circuit design. Eliminating such features, which reduces die size and simplifies circuit design, is desirable.

Accordingly, there is a need for a semiconductor device that can tolerate high voltages and is capable of outputting relatively large currents. There is a need for such a device that does not require wide devices to generate high output currents or complex circuitry to protect it from high voltages. The MOS transistor of the present invention enables the production of such a device.

SUMMARY OF THE INVENTION

The present invention covers an MOS transistor that includes a gate electrode, a source and a drain. The source and the drain have a first conductivity type and are separated from each other by a first region having a second conductivity type that is opposite to the first conductivity type. A second region, which also has the second conductivity type, is formed adjacent to the drain and is separated from the first region by the drain. The present invention also covers a method for making such an MOS transistor.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic representing a cross-section of the transistor of the present invention.

FIGS. 2a-2e are schematics representing cross-sections of structures that may result after certain steps are used, when making the transistor represented by FIG. 1.

DETAILED DESCRIPTION OF THE PRESENT INVENTION

An improved MOS transistor and method for making it are described. FIG. 1 is a schematic representing a cross-section of the transistor of the present invention that includes gate electrode 100 formed on gate oxide 101. Gate electrode 100 may be formed from a heavily doped polysilicon layer or the combination of such a layer and a silicide. On the sides of gate electrode 100 are spacers 102 and 103. Spacers 102 and 103 may be formed from the combination of a relatively thin silicon dioxide layer and a relatively thick silicon nitride layer.

Beneath gate oxide 101 are source 104 and drain 105. Source 104 and drain 105 each have a first conductivity type and are separated from each other by first region 106.
First region 106 has a second conductivity type that is opposite to the first conductivity type. The transistor of the present invention further includes second region 107, which also has the second conductivity type. Second region 107 is formed adjacent to drain 105 and is separated from first region 106 by drain 105.

Although the MOS transistor of the present invention may be NMOS or PMOS, in a preferred embodiment it is a PMOS transistor. When a PMOS transistor, source 104 and drain 105 are each doped P+, with source 104 and drain 105 preferably including lightly doped (i.e., P-) extensions that extend underneath gate electrode 100. First region 106 is preferably doped N- and second region 107 is doped N+. N+ second region 107, P+ drain 105 and N- first region 106 form an NPN bipolar device, and P+ source 104, N- first region 106 and P+ drain 105 form a PNP bipolar device. Because, in operation, current fed into N- first region 106 feeds the base of the PNP device and current fed into P+ drain 105 feeds the base of the NPN device, the resulting structure forms a thyristor.

Set forth below is a description of a preferred process for making the MOS transistor described above. This description is made with reference to FIGS. 2a-2e, which provide schematics representing cross-sections of structures that result after using certain steps.

To make the transistor represented by FIG. 1, ions are implanted into an epitaxial layer formed on a bulk silicon substrate to create well 220. When forming a PMOS transistor, n-type impurities (e.g., arsenic, phosphorus or both) are deposited to form n-well 220. After gate oxide 201 is formed on n-well 220, a polysilicon layer is deposited on gate oxide 201, then etched to define a gate electrode. Etched polysilicon layer 210 lies above first region 206, which lies between source region 211 and drain region 212. Polysilicon layer 210 will already be heavily doped (P+), or will be doped P+ by the subsequent P+ source/drain implant. Part of first region 206 will ultimately form the transistor's channel.

Following a relatively low dose tip implant step (for forming extensions of the source and drain), which may be followed by a tilted halo implant step, spacers 202 and 203 are formed on the sides of polysilicon layer 210. Spacer 202 covers part of source region 211, and spacer 203 covers part of drain region 212.

Spacers 202 and 203 may be formed by first depositing a relatively thin layer of silicon dioxide then depositing on that layer a relatively thick layer of silicon nitride. A subsequent anisotropic etch step removes those two layers, except along the sides of polysilicon layer 210. A cross-section representing the resulting structure is shown in FIG. 2a, where "+" represents the deposition of p-type impurities (e.g., boron) into source region 211 and drain region 212. This structure, which is like those typically made when forming MOS transistors, may be made using conventional materials and process steps, as will be apparent to those skilled in the art.

After forming spacers 202 and 203, a layer of photoresist 213 may be deposited over the resulting structure using conventional materials and process steps.
Such a layer is deposited at this point in current processes to enable the masking of the structures that will form an integrated circuit's PMOS devices from the N+ implants that will be applied to form the source and drain for the integrated circuit's NMOS devices.

To form the transistor of the present invention, a portion of layer 213 that lies above drain region 212 is removed, preferably at the same time layer 213 is etched to expose the NMOS devices, forming the structure represented by FIG. 2b. By removing that portion of layer 213, drain region 212 is exposed to the subsequent N+ implant step, which is used to form the source and drain for NMOS devices. During that implant step, n-type impurities, preferably arsenic, are deposited into the portion of drain region 212 that is not covered by spacer 203. The designation "-", shown in FIG. 2c, represents the implantation of n-type impurities into the unmasked part of drain region 212.

After the N+ implant step, photoresist layer 213 is removed. After masking off the NMOS devices (not shown), p-type impurities, preferably boron, are implanted into the portions of source region 211 and drain region 212, which are not covered by spacers 202 and 203. FIG. 2d shows that p-type impurities (designated "+"), for forming the heavily doped portions of the source and drain, are implanted only into unmasked portions of source and drain regions 211 and 212.

Following the P+ source/drain implant step, a conventional heating step (or steps) is applied to activate the dopants. That heating step causes the dopants to diffuse both vertically and laterally. Because boron diffuses faster than arsenic, when boron is used to form P+ source and drain 204 and 205 and arsenic is used to form N+ second region 207, the boron will diffuse further into the n-well than the arsenic. As a result, at the end of the heating step, the boron that forms drain 205 will enclose the arsenic containing N+ second region 207, such that P+ drain 205 separates N+ region 207 from N- region 206. FIG. 2e represents the resulting structure.

In sum, to make the PMOS transistor of the present invention, which provides a gate controlled thyristor function, the conventional process for making such a transistor is varied only slightly. To make such a device, the conventional process is modified to enable removal of the photoresist layer from the drain region of the PMOS device prior to the N+ source/drain implant. This can be accomplished by simply altering the mask used to expose the NMOS devices so that the PMOS drain region will also be exposed to the N+ source/drain implant.

When making a PMOS transistor, following the teachings of the present invention, N+ second region 207 preferably comprises arsenic at a concentration between about 10^19 and about 10^21. In addition, P+ drain 205 preferably comprises boron at a concentration between about 10^18 and about 10^20, and N- first region 206 preferably comprises arsenic or phosphorus at a concentration between about 10^16 and about 10^18. In the resulting transistor, N+ second region 207 preferably is separated from N- first region 206 by between about 500 and about 1,000 angstroms.

Although one way to form the MOS transistor of the present invention requires nothing more than altering the mask used to define the regions that will be exposed to the N+ source/drain implant, the invention is clearly not so limited.
Additional masking and implantation steps may be used to vary the dopant concentrations and profiles of N+ second region 207, P+ drain 205 and N- first region 206, or to otherwise optimize device features for use in particular applications.

The following examples show how the PMOS transistor of the present invention operates under different bias conditions. In those examples, the biases applied to source 104, drain 105, second region 107 (the cathode, in keeping with conventional nomenclature for thyristors), first region 106 (formed within the n-well), and the gate electrode are Vs, Vd, Vk, Vnw, and Vg, respectively.

EXAMPLE 1

Vg=1.3V Vs=1.3V Vd=0V Vk=0V Vnw=1.3V

Under these bias conditions, the PMOS transistor is off. Because the PNP and NPN devices do not have any base/emitter bias (Vk-Vd=0V, and Vs-Vnw=0V), those devices are off, too.

EXAMPLE 2

Vg=0V Vs=1.3V Vk=0 to 0.6V Vnw=1.3V

Here, the PMOS is on, causing holes to be injected into the drain, which raises the P+ drain potential. This forward biases the cathode/drain junction (i.e., the junction between N+ second region 107 and P+ drain 105), causing injection of electrons from N+ second region 107 into P+ drain 105. By bipolar action, these electrons flow into N- first region 106, which forms a base current for the PNP transistor, which turns it on and causes higher current to flow into the cathode, i.e., N+ second region 107.

This NPN/PNP loop can latch depending on the loop gain, enabling conduction of relatively large currents with low voltage drop. As long as a sufficient voltage differential remains between the N+ cathode and the P+ drain to forward bias the N+ cathode to the P+ drain, the device will continue sourcing current. When Vg is returned to 1.3V, the PMOS transistor turns off, which reduces the output current. The NPNP thyristor will likewise turn off, when the required holding voltage is not maintained on the cathode.

EXAMPLE 3

Vs=1.3V Vk>0.6V Vnw=1.3V

In this case, the voltage drop across the NPNP structure is not sufficient to hold the NPN/PNP loop in the latched position. Under these conditions, the overall current flow is controlled by turning on the PMOS device. Although the NPNP structure provides additional gain to the PMOS current, that structure does not latch. As a result, device switching is fully under gate electrode control.

EXAMPLE 4

Vs=1.3V Vk>1.3V Vnw=1.3V

Under these conditions, N+ second region 107 and P+ drain 105 are reverse biased. As a result, the voltage drops significantly across the N+/P+ junction. This ensures that the high cathode voltage (Vk) will not be applied to the drain 105, gate electrode 100, and gate oxide 101 regions, which protects gate oxide 101 from voltage related damage.

The transistor represented by FIG. 1 thus may be used to form input/output nodes for an integrated circuit that can source relatively large amounts of current and power and that may be exposed to voltages that are higher than the device's operating voltage. Such capability results from that transistor's NPNP structure being able to source relatively high currents at low forward resistance, while also providing a reverse voltage blocking function.
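The loop-gain dependence referred to in Examples 2 and 3 can be quantified with the standard two-transistor thyristor model; the expression below is a textbook result stated with assumed symbols (common-base current gains α_NPN and α_PNP, base feed current I_G, combined junction leakage I_CO), not language from this patent:

$$I_A \approx \frac{\alpha_{NPN}\,I_G + I_{CO}}{1 - \left(\alpha_{NPN} + \alpha_{PNP}\right)}$$

The anode current I_A grows without bound, i.e., the NPN/PNP loop latches, as α_NPN + α_PNP approaches 1. Below that point the NPNP structure merely adds gain to the PMOS current without latching, which is consistent with the gate-controlled behavior of Example 3.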
As indicated above, such a device may tolerate relatively high output node voltages without stressing the reliability of the gate oxide, as the N+ cathode/P+ drain will be reverse biased, which will ensure that the bias applied to the gate oxide will be lower than the bias applied to the output node.In addition, devices made in accordance with the present invention may withstand exposure to electrostatic discharge. An ESD event may cause a significant amount of charge to be applied to an output node. If that charge causes the N+/P+ junction to be forward biased, the thyristor will fire, which will dissipate the charge and protect the device's circuitry. Conversely, if the charge causes a reverse biased condition, the high N+/P+ leakage can likewise serve to quickly dissipate charge.Features shown in the above referenced drawings are not intended to be drawn to scale, nor are they intended to be shown in precise positional relationship. Additional process steps that may be used to make the embodiments described above have been omitted when not useful to describe aspects of the present invention.Although the foregoing description has specified an MOS transistor that includes certain features, and has specified certain materials and process steps for making such a device, those skilled in the art will appreciate that many modifications and substitutions may be made. Accordingly, it is intended that all such modifications, alterations, substitutions and additions be considered to fall within the spirit and scope of the invention as defined by the appended claims. |
Techniques are described for controlling operation of both a host device and a wearable display device connected to the host device based on a use status of the wearable display device. The techniques include automatically determining a use status of a wearable display device based on feedback from one or more touch sensors within the wearable display device that indicates whether the wearable display device is worn by a user. Based on the determined use status, the wearable display device controls its own operation (e.g., controls operation of display screens of the wearable display device, a communication session with the host device, and display processing of data received from the host device). The wearable display device also sends an indication of the use status to the host device. The host device then controls its own data processing for the wearable display device based on the indicated use status. |
A method of controlling a wearable display device connected to a host device, the method comprising: determining, with the wearable display device, a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; sending, with the wearable display device, an indication of the use status of the wearable display device to the host device; controlling, with the wearable display device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device; and controlling, with the wearable display device, operation of the wearable display device based on the use status of the wearable display device.The method of claim 1, wherein determining the use status of the wearable display device comprises: generating an oscillation frequency based on the feedback from the touch sensors of the wearable display device, wherein the oscillation frequency changes when the touch sensors are in contact with the user; and based on a comparison of the oscillation frequency and a threshold frequency value, determining whether the wearable display device is in use or not in use.The method of claim 1, wherein determining the use status of the wearable display device comprises continuously determining the use status of the wearable display device, and generating a direct processor interrupt request based on a change in the use status.The method of claim 1, wherein controlling operation of the wearable display device comprises controlling operation of one or more display screens of the wearable display device, a 30 communication session with the host device, and display processing of data received from the host device.The method of claim 1, wherein sending the indication of the use status of the wearable display device to the host device comprises sending a virtual processor interrupt request to a processor of the host device, and wherein controlling the data processing performed at the host device comprises requesting, according to the virtual processor interrupt request, the processor of the host device to one of enable, disable or reduce data processing performed at the host device to generate multimedia data for presentation on the wearable display device.The method of claim 1, wherein determining the use status of the wearable display device comprises determining that the wearable display device is in use, and wherein controlling operation of the wearable display device comprises one or more of establishing a communication session with the host device, activating one or more display screens of the wearable display device, and enabling display processing of data received from the host device.The method of claim 1, wherein determining the use status of the wearable display device comprises determining that the wearable display device is in use, and wherein controlling the data processing performed at the host device comprises requesting a processor of the host device to enable data processing at the host device to generate multimedia data for presentation on the wearable display device.The method of claim 1, wherein determining the use status of the wearable display device comprises determining that the wearable display device is not in use, and wherein controlling operation of the wearable display device comprises entering a reduced power state.The method of 
claim 1, wherein determining the use status of the wearable display device comprises determining that the wearable display device is not in use, and wherein controlling operation of the wearable display device comprises one or more of disabling display processing of data received from the host device, deactivating one or more display screens of the wearable display device, and dismantling a communication session with the host device.The method of claim 9, further comprising initiating a disconnect timer, wherein controlling operation of the wearable display device comprises: 31 prior to expiration of the disconnect timer, reducing display processing of data received from the host device until the display screens are deactivated; and upon expiration of the disconnect timer, dismantling the communication session with the host device.The method of claim 1, wherein determining the use status of the wearable display device comprises determining that the wearable display device is not in use, and wherein controlling the data processing performed at the host device comprises requesting a processor of the host device to disable data processing at the host device to not generate multimedia data for presentation on the wearable display device.The method of claim 1, wherein the wearable display device comprises a wireless head-mounted display (WHMD) device formed as glasses that include at least one of the touch sensors located on a bridge of the glasses and at least two of the touch sensors located on temple arms of the glasses.A method of controlling a host device connected to a wearable display device, the method comprising: receiving, with the host device, an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; and controlling, with the host device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device.The method of claim 13, wherein receiving the indication of the use status of the wearable display device comprises receiving, with a processor of the host device, a virtual processor interrupt request from the wearable display device that requests the processor of the host device to one of enable, disable or reduce data processing performed at the host device to generate multimedia data for presentation on the wearable display device. 
32The method of claim 13, further comprising controlling, with the host device, operation of a communication session with the wearable display device and data transmission to the wearable display device based on the indicated use status of the wearable display device.The method of claim 13, wherein receiving the indication of the use status of the wearable display device comprises receiving an indication that the wearable display device is in use, and wherein controlling data processing comprises enabling data processing at the host device to generate multimedia data for presentation on the wearable display device.The method of claim 13, wherein receiving the indication of the use status of the wearable display device comprises receiving an indication that the wearable display device is not in use, and wherein controlling data processing comprises disabling data processing at the host device to not generate multimedia data for presentation on the wearable display device.The method of claim 17, further comprising, upon receiving the indication that the wearable display device is not in use, generating a message for the user that the wearable display device has entered a reduced power state.A wearable display device connected to a host device, the wearable display device comprising: one or more touch sensors; and one or more processors configured to determine a use status of the wearable display device based on feedback from the touch sensors that indicates whether the wearable display device is worn by a user, send an indication of the use status of the wearable display device to the host device, control data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device, and control operation of the wearable display device based on the use status of the wearable display device.The wearable display device of claim 19, wherein the one or more processors are configured to: 33 generate an oscillation frequency based on the feedback from the touch sensors of the wearable display device, wherein the oscillation frequency changes when the touch sensors are in contact with the user; and based on a comparison of the oscillation frequency and a threshold frequency value, determine whether the wearable display device is in use or not in use.The wearable display device of claim 19, wherein the one or more processors are configured to continuously determine the use status of the wearable display device, and generate a direct processor interrupt request based on a change in the use status.The wearable display device of claim 19, wherein the one or more processors are configured to control operation of one or more display screens of the wearable display device, a communication session with the host device, and display processing of data received from the host device.The wearable display device of claim 19, wherein the one or more processors are configured to send a virtual processor interrupt request to a processor of the host device requesting the processor of the host device to one of enable, disable or reduce data processing performed at the host device to generate multimedia data for presentation on the wearable display device.The wearable display device of claim 19, wherein, based on the wearable display device being in use, the one or more processors are configured to one or more of establish a communication session with the host device, activate one or more display screens 
of the wearable display device, and enable display processing of data received from the host device.The wearable display device of claim 19, wherein, based on the wearable display device being in use, the one or more processors are configured to request a processor of the host device to enable data processing at the host device to generate multimedia data for presentation on the wearable display device.The wearable display device of claim 19, wherein, based on the wearable display device not being in use, the one or more processors are configured to enter a reduced power state. 34The wearable display device of claim 19, wherein, based on the wearable display device not being in use, the one or more processors are configured to one or more of disable display processing of data received from the host device, deactivate one or more display screens of the wearable display device, and dismantle a communication session with the host device.The wearable display device of claim 27, wherein the one or more processors are configured to: initiate a disconnect timer; prior to expiration of the disconnect timer, reduce display processing of data received from the host device until the display screens are deactivated; and upon expiration of the disconnect timer, dismantle the communication session with the host device.The wearable display device of claim 19, wherein, based on the wearable display device not being in use, the one or more processors are configured to request a processor of the host device to disable data processing at the host device to not generate multimedia data for presentation on the wearable display device.The wearable display device of claim 19, wherein the wearable display device comprises a wireless head-mounted display (WHMD) device formed as glasses that include at least one of the touch sensors located on a bridge of the glasses and at least two of the touch sensors located on temple arms of the glasses.A host device connected to a wearable display device, the host device comprising: a memory configured to store data; and one or more processors connected to the memory and configured to receive an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, and control data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device. 
35The host device of claim 31, wherein the one or more processors are configured to receive a virtual processor interrupt request from the wearable display device that requests the one or more processors of the host device to one of enable, disable or reduce data processing performed at the host device to generate multimedia data for presentation on the wearable display device.The host device of claim 31, wherein the one or more processors are configured to control operation of a communication session with the wearable display device and data transmission to the wearable display device based on the indicated use status of the wearable display device.The host device of claim 31, wherein, based on an indication that the wearable display device is in use, the one or more processors are configured to enable data processing at the host device to generate multimedia data for presentation on the wearable display device.The host device of claim 31, wherein, based on an indication that the wearable display device is not in use, the one or more processors are configured to disable data processing at the host device to not generate multimedia data for presentation on the wearable display device.The host device of claim 35, wherein, based on the indication that the wearable display device is not in use, the one or more processors are configured to generate a message for the user that the wearable display device has entered a reduced power state.A wearable display device connected to a host device, the wearable display device comprising: means for determining a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; means for sending an indication of the use status of the wearable display device to the host device; means for controlling data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device; and 36 means for controlling operation of the wearable display device based on the use status of the wearable display device.The wearable display device of claim 37, further comprising means for continuously determining the use status of the wearable display device, and means for generating a direct processor interrupt request based on a change in the use status.The wearable display device of claim 37, further comprising means for controlling operation of one or more display screens of the wearable display device, a communication session with the host device, and display processing of data received from the host device.The wearable display device of claim 37, further comprising means for sending a virtual processor interrupt request to a processor of the host device requesting the processor of the host device to one of enable, disable or reduce data processing performed at the host device to generate multimedia data for presentation on the wearable display device.A host device connected to a wearable display device, the host device comprising: means for receiving an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; and means for controlling data processing performed at the host device to 
generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device.The host device of claim 41, further comprising means for receiving a virtual processor interrupt request from the wearable display device requesting a processor of the host device to one of enable, disable or reduce data processing performed at the host device to generate multimedia data for presentation on the wearable display device.The host device of claim 41, further comprising means for controlling operation of a communication session with the wearable display device and data transmission to the wearable display device based on the indicated use status of the wearable display device. 37A non-transitory computer-readable medium comprising instructions for controlling a wearable display device connected to a host device, the instructions when executed cause one or more programmable processors to: determine, with the wearable display device, a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; send, with the wearable display device, an indication of the use status of the wearable display device to the host device; control, with the wearable display device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device; and control, with the wearable display device, operation of the wearable display device based on the use status of the wearable display device.A non-transitory computer-readable medium comprising instructions for controlling a host device connected to a wearable display device, the instructions when executed cause one or more programmable processors to: receive, with the host device, an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; and control, with the host device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device. |
CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 WEARABLE DISPLAY DEVICE USE-BASED DATA PROCESSING CONTROL TECHNICAL FIELD 100011 The disclosure relates to processing of multimedia data and, more particularly, control over processing of multimedia data. BACKGROUND 100021 Wireless display (WD) systems include at least one host device and at least one client device that communicate over a wireless network. For example, a Wi-Fi Direct (WFD) system includes multiple devices communicating over a Wi-Fi network. The host device acts as a wireless access point and sends multimedia data, which may include audio video (AV) data, audio data, and/or video data, to one or more client devices participating in a particular peer-to-peer (P2P) group communication session using one or more wireless communication standards, e.g., IEEE 802.11. The multimedia data may be played back at both a display of the host device and displays at each of the client devices. More specifically, each of the participating client devices processes the received multimedia data for presentation on its display screen and audio equipment. in addition, the host device may perform at least some processing of the multimedia data for presentation on the client devices. 100031 The host device and one or more of the client devices may be either wireless devices or wired devices with wireless communication capabilities. in one example, as wired devices, one or more of the host device and the client devices may comprise televisions, monitors, projectors, set-top boxes, DVD or Blu-Ray Disc players, digital video recorders, laptop or desktop personal computers, video game consoles, and the like, that include wireless communication capabilities. In another example, as wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs). 100041 In some examples, at least one of the client devices may comprise a wearable display device. A wearable display device may comprise any type of wired or wireless display device that is worn on a user's body. As an. example, the wearable display device may comprise a wireless head-worn display or wireless head-mounted display CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 2 (WI-IMD) that is worn on a user's head in order to position one or more display screens in front of the user's eyes. The host device is typically responsible for performing at least some processing of the multimedia data for display on the wearable display device. In the case of wireless devices, both of the host device and the wearable display device may be powered by limited battery resources. Improved battery life and battery life conservation are, therefore, of paramount concern when designing WCDs and wireless wearable display devices. SUMMARY 100051 In general, this disclosure relates to techniques for controlling operation of both a host device and a wearable display device connected to the host device based on a use status of the wearable display device. A wearable display device typically includes a manual on/off switch and, when switched on, the wearable display device may process data received from a host device for display on the wearable display device. 
Conventionally, the host device processes and sends data to the wearable display device, and the wearable display device processes and displays the received data regardless of whether the user is actually wearing the wearable display device for use viewing and interacting with the displayed data. In the case of wireless devices, the continuous processing is an unnecessary drain on the relatively short battery cycle-lives of both the wearable display device and the host device. 100061 The techniques of this disclosure include automatically determining a use status of a wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user. Based on the determined use status, the wearable display device controls its own operation. For example, the wearable display device may control operation of display screens of the wearable display device, a communication session with the host device, and display processing of data received from the host device. The wearable display device also sends an indication of the use status to the host device. The host device may then control its own data processing for the wearable display device based on the indicated use status of the wearable display device. 100071 In one example, this disclosure is directed to a method of controlling a wearable display device connected to a host device, the method comprising determining, with the wearable display device, a use status of the wearable display device based on feedback CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 3 from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, sending, with the wearable display device, an indication of the use status of the wearable display device to the host device to control data processing at the host device for the wearable display device, and controlling, with the wearable display device, operation of the wearable display device based on the use status of the wearable display device 100081 In another example, this disclosure is directed to a method of controlling a host device connected to a wearable display device, the method comprising receiving, with the host device, an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, and controlling, with the host device, data processing at the host device for the wearable display device based on the indicated use status of the wearable display device. 100091 In a further example, this disclosure is directed to a wearable display device connected to a host device, the wearable display device comprising one or more touch sensors, and one or more processors configured to determine a use status of the wearable display device based on feedback from the touch sensors that indicates whether the wearable display device is worn by a user, send an indication of the use status of the wearable display device to the host device to control data processing for the wearable display device at the host device, and control operation of the wearable display device based on the use status of the wearable display device. 
100101 In another example, this disclosure is directed to a host device connected to a wearable display device, the host device comprising one or more processors configured to receiving an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, and control data processing for the wearable display device based on the indicated use status of the wearable display device. 100111 In an additional example, this disclosure is directed to a wearable display device connected to a host device, the wearable display device comprising means for determining a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, means for sending an indication of the use status of the CA 02920708 2016-02-08 55158-141 4 wearable display device to the host device to control data processing for the wearable display device at the host device, and means for controlling operation of the wearable display device based on the use status of the wearable display device. [0012] In a further example, this disclosure is directed to a host device connected to a wearable display device, the host device comprising means for receiving an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, and means for controlling data processing at the host device for the wearable display device based on the indicated use status of the wearable display device. [0013] In another example, this disclosure is directed to a computer-readable medium comprising instructions for controlling a wearable display device connected to a host device, the instructions when executed cause one or more programmable processors to determine, with the wearable display device, a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, send, with the wearable display device, an indication of the use status of the wearable display device to the host device to control data processing for the wearable display device at the host device, and control, with the wearable display device, operation of the wearable display device based on the use status of the wearable display device. [0014] In a further example, this disclosure is directed to a computer- readable medium comprising instructions for controlling a host device connected to a wearable display device, the instructions when executed cause one or more programmable processors to receive, with the host device, an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, and control, with the host device, data processing for the wearable display device based on the indicated use status of the wearable display device. 
[0014a] According to one aspect of the present invention, there is provided a method of controlling a wearable display device connected to a host device, the method comprising: determining, with the wearable display device, a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device CA 02920708 2016-02-08 55158-141 4a is worn by a user; sending, with the wearable display device, an indication of the use status of the wearable display device to the host device; controlling, with the wearable display device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device; and controlling, with the wearable display device, operation of the wearable display device based on the use status of the wearable display device. [0014b] According to another aspect of the present invention, there is provided a method of controlling a host device connected to a wearable display device, the method comprising: receiving, with the host device, an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; and controlling, with the host device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device. [0014c] According to still another aspect of the present invention, there is provided a wearable display device connected to a host device, the wearable display device comprising: one or more touch sensors; and one or more processors configured to determine a use status of the wearable display device based on feedback from the touch sensors that indicates whether the wearable display device is worn by a user, send an indication of the use status of the wearable display device to the host device, control data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device, and control operation of the wearable display device based on the use status of the wearable display device. [0014d] According to yet another aspect of the present invention, there is provided a host device connected to a wearable display device, the host device comprising: a memory configured to store data; and one or more processors connected to the memory and configured to receive an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user, and control data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device. 
CA 02920708 2016-02-08 55158-141 4b 10014e] According to a further aspect of the present invention, there is provided a wearable display device connected to a host device, the wearable display device comprising: means for determining a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; means for sending an indication of the use status of the wearable display device to the host device; means for controlling data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device; and means for controlling operation of the wearable display device based on the use status of the wearable display device. [0014f1 According to yet a further aspect of the present invention, there is provided a host device connected to a wearable display device, the host device comprising: means for receiving an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; and means for controlling data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device. [0014g] According to still a further aspect of the present invention, there is provided a non-transitory computer-readable medium comprising instructions for controlling a wearable display device connected to a host device, the instructions when executed cause one or more programmable processors to: determine, with the wearable display device, a use status of the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; send, with the wearable display device, an indication of the use status of the wearable display device to the host device; control, with the wearable display device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indication of the use status of the wearable display device; and control, with the wearable display device, operation of the wearable display device based on the use status of the wearable display device. 10014h] According to another aspect of the present invention, there is provided a non-transitory computer-readable medium comprising instructions for controlling a host device connected to a wearable display device, the instructions when executed cause one or more programmable processors I I CA 2920708 2017-04-05 55158-141 4c to: receive, with the host device, an indication of a use status of the wearable display device, wherein the use status of the wearable display device is determined at the wearable display device based on feedback from one or more touch sensors of the wearable display device that indicates whether the wearable display device is worn by a user; and control, with the host device, data processing performed at the host device to generate multimedia data for presentation on the wearable display device based on the indicated use status of the wearable display device. 
[0015] The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings. CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 BRIEF DESCRIPTION OF DRAWINGS [0016] FIG. 1 is a block diagram illustrating a Wireless Display (WD) system including a host device and a wearable display device. [0017] FIG. 2 is a block diagram illustrating the host device and wearable display device from FIG 1 in greater detail. 100181 FIG. 3 is a block diagram illustrating an example of a wearable display device as a head-mounted display (IIMD) formed as glasses with touch sensors. [0019] FIG. 4 is conceptual diagram illustrating an example parallel-plate capacitor. [0020] FIG. 5 is a circuit diagram illustrating an example RC-oscillator circuit including a touch sensor within the wearable display device from FIG. 3. [0021] FIG. 6 is a block diagram illustrating a location sensing unit included in the wearable display device from FIG. 2 in greater detail. 100221 FIG. 7 is a block diagram illustrating the host device from FIG. 2 in greater detail. [0023] FIG. 8 is a flowchart illustrating an example operation of determining a usc status of a wearable display connected to a host device, and controlling processing at the host device and the wearable display device based on the use status. [0024] FIG 9 is a flowchart illustrating an example operation of receiving an indication of a use status of a wearable display device at a host device, and controlling processing at the host device based on the indicated use status. [0025] FIG. 10 is a flowchart illustrating an example operation of a location sensing unit included in a wireless head-mounted display (WHMD) device and related control mechanisms of the WHIVID device. DETAILED DESCRIPTION 100261 FIG. 1 is a block diagram illustrating a Wireless Display (WD) system 10 including a host device 12 and a wearable display device 16. In the example of FIG. 1, WD system 10 includes host device 12 and only one client device, i.e., wearable display device 16. In other examples, WD system 10 may include additional client devices (not shown), which may comprise wearable display devices, wireless devices or wired devices with wireless communication capabilities. 100271 In some examples, WD system 10 may conform to the Wi-Fi Direct (WFD) standard defined by the Wi-Fi Alliance. The WFD standard enables device-to- device CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 6 communication over Wi-Fi networks, i.e., wireless local area networks, in which the devices negotiate their roles as either access points or client devices. WD system 10 may include one or more base stations (not shown) that support a plurality of wireless networks over which a peer-to-peer (P2P) group communication session may be established between host device 12, wearable display device 16, and other participating client devices. A communication service provider or other entity may centrally operate and administer one or more of these wireless networks using a base station as a network hub. 100281 According to the WFD standard, host device .12 may act as a wireless access point and receive a request from wearable display device 16 to establish a P2P group communication session. For example, host device 12 may establish the P2P group communication session between host device 12 and wearable display device 16 using the Real-Time Streaming Protocol (RTSP). 
The P2P group communication session may be established over a wireless network, such as a Wi-Fi network that uses a wireless communication standard, e.g., IEEE 802.11a, 802.11g, or 802.11 n improvements to previous 802.11 standards. Additional information regarding wireless networks may be found in Gast, M., "802.11 (R) Wireless Networks: The Definitive Guide," O'Reilly, April 2002. 100291 Once the P2P group communication session is established, host device 12 may send multimedia data, which may include audio video (AV) data, audio data, and/or video data, to wearable display device 16, and any other client devices, participating in the particular P2P group communication session. For example, host device 12 may send the multimedia data to wearable display device 16 using the Real-time Transport protocol (RTP). The multimedia data may be played back at both a display of host device 12 and display screens of wearable display device 16. For example, wearable display device 16 may process the multimedia data received from host device 12 for presentation on its display screens and audio equipment. In addition, host device 12 may perform at least some processing of the multimedia data for presentation on wearable display device 16. 100301 A user of wearable display device 16 may provide user input via an interface, such as a human interface device (HID), included within or connected to wearable display device 16. An HID may comprise one or more of a touch display, an input device sensitive to an input object (e.g., a finger, stylus, etc.), a keyboard, a tracking ball, a mouse, a joystick, a remote control, a microphone, or the like. Wearable display CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 7 device 16 sends the provided user input to host device 12. In some examples, wearable display device 16 sends the user input over a reverse channel architecture referred to as a user input back channel (131BC). In this way, host device 12 may respond to the user input provided at wearable display device 16. For example, host device 12 may process the received user input and apply any effect of the user input on subsequent data sent to wearable display device 16. 100311 Host device 12 may be either a wireless device or a wired device with wireless communication capabilities. In one example, as a wired device, host device 12 may comprise one of a television, monitor, projector, set-top box, DVD or Blu-Ray Disc player, digital video recorder, laptop or desktop personal computer, video game console, and the like, that includes wireless communication capabilities. In another example, as a wireless device, host device 12 may comprise one of a mobile telephone, portable computer with a wireless communication card, personal digital assistant (PDA), portable media player, or other flash memory device with wireless communication capabilities, including a so-called "smart" phone and "smart" pad or tablet, or another type of wireless communication device (WCD). 100321 Wearable display device 16 may comprise any type of wired or wireless display device that is worn on a user's body. As an example, wearable display device 16 may comprise a head-worn display or a head-mounted display (HMD) that is worn on a user's head in order to position one or more display screens in front of the user's eyes. 
In general, the display screens of wearable display device 16 may comprise one of a variety of display screens such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display screen. 100331 In one example, wearable display device 16 may comprise a HMD device formed as glasses that include display screens in one or more of the eye lenses, and also include a nose bridge and temple arms to be worn on a user's face. As another example, wearable display device 16 may comprise a HMD device formed as goggles that includes display screens in separate eye lenses or a single display screen, and that also includes at least one strap to hold the goggles on the user's head. Although wearable display device 16 is primarily described in this disclosure as being a HMD, in other examples wearable display device 16 may comprise display devices that are worn on other portions of the user's body, such as on the user's neck, shoulders, arm or wrist. Specific examples of HMDs an.d their operation are described in more detail in Rolland, CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 8 J. & Hun, H., "Head-Mounted Display Systems," Encyclopedia of Optical Engineering, 2005. 100341 In WD system 10, host device 12 and wearable display device 16 are typically wireless devices. For example, wearable display device 16 may comprise a wireless T-IMD (WT.-IMD) that connects wirelessly to host device 12, and host device 12 may comprise a WCD, such as a mobile smart phone or smart pad. In this example, in addition to typical WCD operations, host device 12 performs at least some multimedia data processing for presentation on wearable display device 16 and user input processing from user interface interactivity at wearable display device 16. Host device 12 may perform these operations with a power manager sourced by a rechargeable battery that is limited by size and weight in order to fit within the structure of a handheld device. 100351 The power manager and battery for wearable display device 16 may be even further limited because wearable display device 16 is intended to be worn on the user's body. Since wearable display device 16 may be a HMD worn on the user's head, the structure of wearable display device 16 needs to be small and lightweight enough to remain comfortable during use. These size and weight restrictions may result in relatively small batteries being included in wearable display device 16 compared to other mobile devices. Wearable display device 16, therefore, may need to perform multimedia data processing for presentation and user interface interactivity with a power manager sourced by a rechargeable battery that is limited by size, weight, balance, thermal, and health constraints. 100361 The WFD standard does provide some power management protocols for devices, such as host device 12, that operate as access points, namely the Opportunistic Power Save protocol and the Notice of Absence protocol. Both of these power management protocols enable a device operating as an access point to save power by going to sleep during either convenient or pre-planned periods, without dismantling a P2P group communication session with the one or more client devices. More information regarding these WFD power management protocols is available in Camps-Mur. D., et al., "Designing Energy Efficient Access Points with Wi-Fl Direct," The International Journal of Computer and Telecommunications Networking, Vol. 55, Issue 13, September 2011. 
100371 Wearable display device 16 may include a manual on/off switch (not shown) and, when switched on, wearable display device 16 processes data received from host CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 9 device 12 for display on wearable display device 16. Merely turning on wearable display device 16, however, does not indicate whether a user is actually wearing wearable display device 16 for use viewing and interacting with the displayed data. Conventionally, a host device will process and send data to a wearable display device, and the wearable display device will process and display the received data regardless of whether the user is actually wearing the wearable display device. In the case of wireless devices, the continuous processing is an unnecessary drain on the short battery cycle- life of both the wearable display device and the host device. 100381 Wearable display device 16 necessarily requires a user to wear the device for use, so the operation of wearable display device 16 and the related multimedia data processing at host device 12 is only needed when the user is actually wearing the device. Because the user has to wear wearable display device 16, the use of wearable display device 16 may be intrusive and interfere with the user's normal activities. The use of wearable display device 16, therefore, may be arbitrarily interrupted, and it is unlikely that the user will remember to manually turn off wearable display device 16. 100391 In general, this disclosure relates to techniques for controlling operation of both host device 12 and wearable display device 16 connected to host device 12 based on a use status, i.e., whether in use or not in use, of wearable display device 16. According to the techniques, the use status of wearable display device 16 is automatically detected to minimize unnecessary processing and conserve battery cycle-life at both host device 12 and wearable display device 16 without relying on user interaction. As illustrated in FIG. 1, wearable display device 16 includes a location sensing unit 20 configured to automatically determine whether wearable display device 16 is worn by a user for use viewing and/or interacting with the displayed data. 100401 The techniques of this disclosure include the use of wearable display device 16 including one or more touch sensors (not shown in FIG. 1) positioned at locations that are in contact or close proximity with the user when the user is wearing wearable display device 16. In an example where wearable display device 16 comprises a WHM.D device formed as glasses, wearable display device 16 may include at least one sensor on a nose bridge and at least two sensors on temple arms that will be in contact with the user's nose and ears, respectively, when the glasses are worn. In this way, the touch sensors will be unavoidably in contact with the user when the user is wearing and using the wearable display device. In other examples, wearable display device 16 may include more or fewer touch sensors positioned at different locations depending on the CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 form or shape of the device. In addition, in some cases, wearable display device 16 may include touch sensors capable of being triggered by close proximity to the user's body without requiring actual contact with the user's body. 
100411 According to the techniques, location sensing unit 20 automatically determines a use status of wearable display device 16 based on feedback from the touch sensors of wearable display device 16. The feedback indicates to location sensing unit 20 whether wearable display device 16 is being worn by the user. Based on the determined use status, wearable display device 16 controls its own operation. For example, wearable display device 16 may control operation of one or more of display screens of wearable display device 16, the communication session with host device 12, and display processing of data received from host device 12. Wearable display device 16 also sends an indication of the use status to host device 12. Host device 12 may then control its own data processing for wearable display device 16 based on the indicated use status of wearable display device 16. 100421 FIG. 2 is a block diagram illustrating host device 12 and wearable display device 16 from FIG. 1 in greater detail. For purposes of this disclosure, host device 12 and wearable display device 16 will primarily be described as being wireless devices with limitations on battery size and weight, resulting in short battery cycle-life. For example, host device 12 may comprise a smart phone or smart pad, or other handheld WCD, and wearable display device 16 may comprise a WHMD device. In other examples, however, host device 12 and wearable display device 16 may comprise either wireless devices or wired devices with wireless communication capabilities. 100431 In the example illustrated in FIG. 2, host device 12 includes an application processor 30, a system interrupt processor 34, a wireless controller 36, a connection processor 38, a multimedia processor 42 and a display 44. Application processor 30 includes a user input WO processor 32. In other examples, host device 12 may comprise additional functional units or modules used to control and perform WCD operations. As an example, a more detailed version of host device 12 is described below with respect to FIG. 7. 100441 As illustrated in FIG. 2, wearable display device 16 includes location sensing unit 20, wireless controller 46, connection processor 48, controller 50, multimedia processor 52, display screens 54 and touch sensors 56. Controller 50 comprises a main controller for wearable display device 16, and controls the overall operation of wearable display device 16. Location sensing unit 20 and touch sensors 56 of wearable display CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 ii device 16 and their operation in accordance with the techniques of this disclosure are described in more detail below and with respect to FIGS. 3-6. 100451 In general, host device 12 processes multimedia data for presentation on its own display 44, and may also process multimedia data for presentation on wearable display device 16. In addition, wearable display device 16 may receive user input via an interface, such as a HID, and may send the user input to host device 12 for processing. In FIG. 2, the transfer of both multimedia data and user input between host device 12 and wearable display device 16 is illustrated as a path 62. 100461 To transfer multimedia data from host device 12 to wearable display device 16, path 62 may begin at application processor 30. Application processor 30 provides an environment in which a variety of applications may run on host device 12. 
Example applications include texting applications, email applications, video or picture slideshow applications, presentation applications, video conferencing applications, and the like. Application processor 30 may receive data for use by these applications from internal or external storage location and/or internal or external sensors or cameras associated with host device 12. The applications running on application processor 30, in turn, generate multimedia data for presentation to a user of host device 12 and/or wearable display device 16. In other examples, path 62 may begin at multimedia processor 42 or some other functional device that either generates multimedia data or receives multimedia data directly from the storage locations and/or sensors or cameras. 100471 Multimedia processor 42 may display process the received multimedia data for presentation on display 44 of host device 12. In addition, multimedia processor 42 may process the received multimedia data for transmission and presentation on wearable display device 16. In the latter case, wireless controller 36 packages the processed data for transmission. Packaging the processed data may include grouping the data into packets, frames or cells that may depend on the wireless communication standard used over Wi-Fi network 40. Connection processor 38 then transmits the processed data to wearable display device 16 using Wi-Fi network 40. Connection processor 38 manages the connections of host device 12, including a P2P group communication session with wearable display device 16 over Wi-Fi network 40, and the transmission and receipt of data over the connections. 100481 The transfer of the multimedia data continues along path 62 at wearable display device 16 when connection processor 48 receives the transmitted data from host device 12. Similar to connection processor 38 of host device 12, connection processor 48 of CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 12 wearable display device 16 manages the connections of wearable display device16, including a P2P group communication session with host device 12 over Wi-Fl network 40, and the transmission and receipt of data over the connections. Wireless controller 46 unpackages the received data for processing by multimedia processor 52. Multimedia processor 52 then display processes the received data for presentation on display screens 54 of wearable display device 16. 100491 To transfer user input from wearable display device 16 to host device 12, path 62 may be followed, in reverse from that described above, beginning at multimedia processor 52. Multimedia processor 52 may receive user input via a HID or other user interface (not shown) included within or connected to wearable display device 16. Wireless controller 46 packages the user input, and connection processor 48 transmits the packaged user input over Wi-Fi network 4010 host device 12. At host device 12, connection processor 38 receives the transmitted user input, and wireless controller 36 unpackages the received user input for processing by multimedia processor 42 and Ul processor 32. In this way, host device 12 may respond to the user input by applying any effect of the user input on data processing at multimedia processor 42 and/or the applications running on application processor 30. 100501 Conventionally, host device 12 and wearable display device 16 would continue operating as described above until some user interaction occurred to disconnect, put to sleep, or power off wearable display device 16. 
Continuously processing data for display on wearable display device 16 regardless of whether the user is wearing wearable display device 16, however, consumes substantial power resources of both host device 12 and wearable display device 16. To conserve battery-cycle life, the techniques of this disclosure include location sensing unit 20 and touch sensors 56 in wearable display device 16 in order to enable an automatic determination of a use status of wearable display device 16, i.e., whether wearable display device 16 is worn by a user for use viewing andlor interacting with the displayed data. In addition, the techniques include notifying host device 12 of the use status of wearable display device 16. In this way, the techniques enable wearable display device 16 to automatically enter a reduced power state, in which all components except location sensing unit 20 are shut down, without relying on user interaction to disconnect, put to sleep, or power off wearable display device 16. The techniques also allow host device 12 to disable data processing at host device 12 for wearable display device 16 when wearable display device 16 is not in use. CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 13 100511 Location sensing unit 20 of wearable display device 16 is designed to always be operating even when the remaining components of wearable display device 16 are asleep or powered down. In some cases, a portion of controller 50 responsible for the operation of location sensing unit 20 may also remain powered on. In order to remain "always on," location sensing unit 20 is designed to consume ultra-low power, e.g., approximately 10 microwatts (RW). In addition, location sensing unit 20 may require negligible additional hardware at wearable display device 16. Location sensing unit 20 may also avoid engaging user input controls that would unnecessarily engage host device 12 and may be used for some application specific 151 controls at wearable display device 16 to minimize latency. 100521 Location sensing unit 20 receives feedback from touch sensors 56 that indicates whether wearable display device 16 is worn by a user. Based on the feedback, location sensing unit 20 continuously determines the use status of wearable display device 16. As described in more detail below, in some cases, location sensing unit 20 may generate an oscillation frequency that changes based on whether touch sensors 56 are in contact with the user's body, and determine the use status of wearable display device 16 based on a comparison of the generated oscillation frequency and a threshold frequency value. 100531 Touch sensors 56 may be positioned within wearable display device 16 at locations that will be in contact or close proximity with the user when the user is wearing wearable display device 16. An example in which wearable display device 16 comprises a WHMD device formed as glasses is described in more detail with respect to FIG. 3. In some cases, each of touch sensors 56 may comprise a capacitance touch sensor that increases an oscillation frequency generated by location sensing unit 20. In this example, when the oscillation frequency generated by location sensing unit 20 is greater than a threshold frequency value, location sensing unit 20 determines that wearable display device 16 is in use. 
100541 When a change in the use status occurs, e.g., a user puts on or takes off the wearable display device 16, location sensing unit 20 may inform controller 50 of the determined use status via a direct processor interrupt request 58. In other examples, location sensing unit 20 may continuously send use status indications to controller 50 regardless of whether a change in use status has occurred. Controller 50, in turn, may generate a virtual processor interrupt request 60 to indicate the use status of wearable display device 16 to host device 12. As illustrated in FIG. 2, virtual processor interrupt request 60 is packaged by wireless controller 46 and transmitted by connection CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 14 processor 48 over Wi-Fi network 40 to host device 12. At host device 12, connection processor 38 receives the transmitted virtual processor interrupt request 60, and wireless controller 36 unpackages the received user input for processing by system interrupt processor 34 and application processor 30. 100551 In the case where wearable display device 16 is in the reduced power state and a user puts on wearable display device for use, location sensing unit 20 receives feedback from touch sensors 56 indicating that wearable display device 16 is being worn by the user. Based on the feedback, location sensing unit 20 determines that wearable display device 16 is in use, and indicates the determined use to controller 50. For example, location sensing unit 20 may send direct processor interrupt request 58 to controller 50 to wake-up or activate the other components of wearable display device 16. Controller 50 controls operation of wearable display device 16 based on the indication of the use status of wearable display device 16. For example, controller SO may instruct connection processor 48 to establish a communication session with host device 12. In addition, controller 50 may enable display processing at multimedia processor 52 of data received from host device 12, and activate display screens 54 of wearable display device 16 in order to display the processed data. 100561 Upon receiving the indication from location sensing unit 20 that wearable display device 16 is in use, controller 50 also sends an indication that wearable display device is in use to host device 12. For example, controller 50 may send virtual processor interrupt request 60 to host device 12. Application processor 30 of host device 12 controls data processing at host device 12 for wearable display device 16 based on the indication of the use status of wearable display device 16. For example, application processor 30 may enable data processing at multimedia processor 42 for transmission and display on wearable display device 16. In some cases, application processor 30 may also instruct connection processor 38 to establish the communication session with wearable display device 16, and transmit the processed data to wearable display device 16 based on the indication that wearable display device 16 is in use. In addition, application processor 30 may enable Vi processor 32 to process any user input received from wearable display device 16, and adjust the application processing and data processing based on the received use input. 100571 In the case where wearable display device 16 is in use and a user removes the wearable display device, location sensing unit 20 receives feedback from touch sensors 56 indicating that wearable display device 16 is not worn by the user. 
Based on the CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 feedback, location sensing unit 20 determines that wearable display device 16 is no longer in use, and indicates the determined use to controller 50. For example, location sensing unit 20 may send direct processor interrupt request 58 to controller 50 to put to sleep, shut-down, or otherwise deactivate the other components of wearable display device 16. Controller 50 controls operation of wearable display device 16 based on the indication of the use status of wearable display device 16. For example, controller 50 may disable display processing at multimedia processor 52 of data received from host device 12, and deactivate display screens 54 of wearable display device 16. Controller 50 may also instruct connection processor 48 to dismantle the communication session with host device 12. 100581 Upon receiving the indication from location sensing unit 20 that wearable display device 16 is not in use, controller 50 also sends an indication that wearable display device is not in use to host device 12. For example, controller 50 may send virtual processor interrupt request 60 to host device 12. Application processor 30 of host device 12 controls data processing at host device 12 for wearable display device 16 based on the indication of the use status of wearable display device 16. For example, application processor 30 may disable data processing at multimedia processor 42 for transmission and display on wearable display device 16. In some cases, application processor 30 may also instruct connection processor 38 to dismantle the communication session with wearable display device 16, and cease transmission of data to wearable display device 16 based on the indication that wearable display device 16 is in use. In addition, application processor 30 may disable Ul processor 32 from processing any user input received from wearable display device 16. In this way, the techniques of this disclosure may improve battery cycle-life and may reduce unnecessary data processing at both wearable display device 16 and host device 12. 10059.1 FIG. 3 is a block diagram illustrating an example of wearable display device 16 as a HMD formed as glasses with touch sensors 56A-56C ("touch sensors 56"). As illustrated in FIG. 2 described above, wearable display device 16 includes wireless controller 46 that prepares data for transmission using the P2P group communication session with host device 12 established over Wi-Fi network 40, controller 50 that controls operation of wearable display device 16, and multimedia processor 52 that performs display processing of data received from host device 12. In the illustrated example, the lenses of the glasses comprise display screens 54 for which multimedia processor 52 processes video data for presentation to the user. In addition, wearable CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 16 display device 16 includes speakers MA and 64B ("speakers 64") for which multimedia processor 52 processes audio data for presentation to the user. 100601 As illustrated in FIG. 3, wearable display device 16, formed as glasses, includes display screens 54 in the eye lenses held together by a nose bridge 63, and temple arms 65A and 65B ("temple arms 65") that enable wearable display device 16 to be worn on a user's face. In this example, touch sensors 56 are positioned at locations that will be unavoidably in contact with the user's body when the user is wearing wearable display device 16. 
In the illustrated example, wearable display device 16 includes a touch sensor 56C positioned on the nose bridge of the glasses, and touch sensors 56A and 56B positioned on the temple arms of the glasses that will be in contact with the user's nose and ears, respectively, when the glasses arc worn. In other cases, wearable display device 16 may include touch sensors capable of being triggered by close proximity to the user's body without requiring actual contact with the user's body. In this case, the touch sensors may be positioned at locations on wearable display device 16 that will at least be in close proximity to the user's body, but not physically touching the user's body. 100611 Location sensing unit 20 of wearable display device 16 includes a touch transducer 66 and a touch detector 68. Touch transducer 66 is directly connected to each of touch sensors 56 to receive the feedback from touch sensors 56. Touch transducer 66 converts the "touch" feedback from touch sensors 56 into electrical feedback. In cases where location sensing unit 20 generates an oscillation frequency to determine the use status of wearable display device 16, touch transducer 66 may convert the feedback from touch sensors 56 into additional capacitance that causes the generated oscillation frequency to increase when touch sensors 56 are in contact with the user's body. 100621 Touch detector 68 receives the converted feedback from touch transducer 66 that indicates whether one or more of touch sensors 56 are in contact with the user's body, and determines whether wearable display device 16 is in use based on the feedback. More specifically, touch detector 68 may compare the oscillation frequency generated based on the feedback from touch sensors 56 with a threshold frequency value. For example, when the generated oscillation frequency is greater than the threshold frequency value, touch detector 68 may determine that wearable display device 16 is being worn by the user for use. Touch detector 68 may then send a direct processor CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 17 interrupt request to controller 50 to indicate the determined use status of wearable display device 16. 100631 In the illustrated example, wearable display device 16 includes three touch sensors 56. In other examples, wearable display device 16 may include more or fewer touch sensors. In some cases, it may be advantages to use two or more of touch sensors 56 so that location sensing unit 20 is capable of detecting whether all touch sensors 56 are in contact or close proximity with the user and wearable display device 16 is being properly worn for use, or whether less than all of touch sensors 56 are in contact with the user and wearable display device 16 is incorrectly positioned or being held at one or more of touch sensors 56. For example, location sensing unit 20 will generate the highest oscillation frequency when all of touch sensors 56 arc simultaneously in contact with a surface of the user's body, indicating that the user is wearing wearable display device 16 for use. The threshold frequency value may be a preset value that requires all of touch sensors 56 to be in contact with the user. In other examples, the threshold frequency value may be a preset value that requires at least one of touch sensors 56 to be in contact with the user. 100641 As described above, location sensing unit 20 may be designed to be "always on." 
Touch transducer 66 may, therefore, continually receive feedback from touch sensors 56 and convert the feedback for location sensing unit 20 to generate a constantly updating oscillation frequency. In addition, touch detector 68 may continually compare the updated oscillation frequency with the threshold frequency value to determine a current use status of wearable display device 16. 100651 In some cases, touch detector 68 sends a direct processor interrupt request to controller 50 to indicate the use status only when a change occurs in the determined use status of wearable display device 16. In this way, controller 50 is only notified of the use status when a wake-up or shut-down operation needs to be performed. in other cases, touch detector 68 continually sends an indication of the use status to controller 50, and controller 50 then detects when a change in the use status has occurred to control operation of wearable display device 16, and sends an indication of the use status change to host device 12. In either case, the use status determination, and subsequent wake-up or shut-down operation may be performed as background processes of wearable display device 16. 100661 In the illustrated example, wearable display device 16 is a HMD formed as glasses. In other examples, wearable display device 16 may comprise any type of wired CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 18 or wireless display device that is worn on a user's body, including HMDs with a different form factor than shown in FIG. 3. As an example, wearable display device 16 may comprise a HMD device formed as goggles that includes display screens in separate eye lenses or a single display screen., and that also includes at least one strap to hold the goggles on the user's head. As some examples, wearable display device 16 may comprise a display device that is worn on other portions of the user's body, such as on the user's neck or shoulders. [00671 FIG. 4 is conceptual diagram illustrating an example parallel-plate capacitor 70. According to the techniques of this disclosure, parallel-plate capacitor 70 may be associated with one of touch sensors 56 included in wearable display device 16 from FIG. 3. Capacitor 70 includes a top plate 72A and a bottom plate 72B ("plates 72") positioned parallel to each other, and a dielectric material 74 sandwiched between plates 72A and 72B. In FIG. 4, dielectric material 74 is indicated as having an actual permittivity equal to the product of the relative permittivity, Cr, of dielectric material 74 and the permittivity of free space, cr. The permittivity of dielectric material 74 indicates the ability of dielectric material 74 to transmit an electric field. 100681 In general, the capacitance of parallel-plate capacitor 70 indicates the ability of capacitor 70 to store an electric charge. The capacitance of parallel-plate capacitor 70 is dependent on the area of plates 72, the distance between plates 72, and the relative permittivity or dielectric constant of dielectric material 74 between plates 72. Specifically, the capacitance of parallel-plate capacitor 70 is equal to C = 603'0'0 dj, where A represents the area of plates 72 and d represents the distance between plates 72. 100691 FIG. 5 is a circuit diagram illustrating an example RC-oscillator circuit 75 including touch sensor 56A within wearable display device 16 from FIG. 3. In some examples, RC-oscillator circuit 75 may be considered a relaxation oscillator. 
RC- oscillator circuit 75 includes an amplifier that generates an oscillation frequency based on frequency selective input provided by an RC network, which includes at least one resistor (R) and at least one capacitor (C). [00701 In the example illustrated in. FIG. 5, RC-oscillator circuit 75 also includes touch sensor 56A of wearable display device 16 from FIG. 3. Touch sensor 56A may comprise a capacitance touch sensor that includes a plate or electrode that is positioned within wearable display device 16 such that touch sensor 56A will be in contact with the user when wearable display device 16 is worn. When touch sensor 56A is in contact CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 19 with the user's body, a capacitor is created in which the user's skin acts as a dielectric material and the Earth acts as a ground for the electrode of touch sensor 56A. 100711 In one example illustrated in FIG. 5, when touch sensor 56A is not in contact with a surface of the user's body, current 78 does not flow to touch sensor 56A and the generated oscillation frequency depends only on R and C. This oscillation frequency may be considered the baseline or default oscillation frequency of RC- oscillator circuit 75. In another example illustrated in FIG. 5, when touch sensor 56A is in contact with the user's body, current 76 flows to touch sensor 56A and through the user's body to ground. In this case, the capacitance of the user's body, e.g., Crouch, is added to the RC network. The additional capacitance changes the overall RC time-constant of the RC network and alters the generated oscillation frequency. The techniques of this disclosure use the altered oscillation frequency value to determine whether wearable display device 16 is worn by the user. 100721 FIG. 6 is a block diagram illustrating location sensing unit 20 included in wearable display device 16 from FIG. 2 in greater detail. As illustrated in FIG. 2 described above, wearable display device 16 includes wireless controller 46 that prepares data for transmission using the P2P group communication session with host device 12 established over Wi-Fi network 40, controller 50 that controls operation of wearable display device 16, and multimedia processor 52 that performs display processing of data received from host device 12 for presentation on display screens 54. Furthermore, as illustrated in FIG. 3 described above, location sensing unit 20 of wearable display device 16 includes touch transducer 66 that receives feedback from touch sensors 56 and touch detector 68 that determines a use status of wearable display device based on the feedback converted by touch transducer 66. 100731 In the illustrated example of FIG. 6, location sensing unit 20 further includes an RC oscillation circuit that generates an oscillation frequency based on the feedback from touch sensors 56. The RC-oscillator circuit may operate substantially similar to RC-oscillator circuit 75 from FIG. 5 with the inclusion of additional capacitance touch sensors. Touch sensors 56 are illustrated in FIG. 6 as additional capacitors included in the RC-oscillator circuit of location sensing unit 20 that are connected to an earth ground through a user's body. Each of touch sensors 56 may operate substantially similar to touch sensor 56A described with respect to FIG. 5. 
100741 When wearable display device 16 is first powered on, location sensing unit 20 activates a grounding circuit 69 to ground all of touch sensors 56 for a preset period of CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 time. During that period, the RC-oscillator circuit generates a default oscillation frequency for wearable display device 16 when touch sensors 56 are not in contact with the user's body. Once the default oscillation frequency is determined, location sensing unit 20 may begin the use status determination operation. 100751 Touch transducer 66 receives feedback from touch sensors 56 during a scan timer period and the RC-oscillator circuit generates an oscillation frequency based on the feedback. The scan timer period may be a preset period of time during which the RC-oscillator circuit of location sensing unit 20 generates the oscillation frequency based on feedback from touch sensors 56. The scan timer period may allow the resulting oscillation frequency to stabilize before touch detector 68 compares the oscillation frequency to a threshold frequency value to determine a use status of wearable display device 16. 100761 When one or more of touch sensors 56 are in contact with the user's body, touch transducer 66 receives feedback as a faster capacitance discharge rate through the additional capacitors. This feedback from touch sensors 56 results in the RC- oscillator circuit generating a higher oscillation frequency than when touch sensors 56 are not touched. Touch detector 68 then compares the higher oscillation frequency to the threshold frequency value to determine whether the frequency is high enough to indicate that the user is wearing wearable display device 16 for use. 100771 For example, the threshold frequency value may be a preset value that is less than the highest oscillation frequency, but greater than an oscillation frequency generated when none of touch sensors 56 are in contact with the user's body. In some cases, the threshold frequency value may be preset such that touch detector 68 only determines that wearable display device 16 is in use when all of touch sensors 56 are in contact with the user. In other cases, the threshold frequency value may be preset such that touch detector 68 determines that wearable display device 16 is in use when at least one of touch sensors 56 is in contact with the user. 100781 As illustrated in FIG 6, wearable display device 16 also includes a power manager 79 that may store battery status information that reflects whether wearable display device 16 is wall plugged or using its battery reserve, and if using the battery reserve, the level of remaining battery power. In some cases, the battery status information may be displayed to the user of wearable display device 16, e.g., using a small battery icon, lights or sounds to indicate different battery conditions. Power manager 79 may update the battery status information almost continuously to reflect an CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 21 accurate battery status to the user of wearable display device 16. In some cases, when the battery reserve is below a minimum value, power manager 79 may initiate a shut- down or sleep operation for wearable display device 16 regardless of its use status. 100791 FIG. 7 is a block diagram illustrating host device 12 from FIG. 2 in greater detail. 
In the illustrated example, host device 12 includes application processor 30 with Ul processor 32, system interrupt processor 34, wireless controller 36, connection processor 38, multimedia processor 42, display 44, external memory 80, local memory 82, general purpose graphics processing unit (GPGP13) 84, application data manager 86, display processor 88, battery monitoring system 90 and security manager 92. 100801 In general, application processor 30, UI processor 32, system interrupt processor 34, wireless controller 36, connection processor 38, multimedia processor 42 operate as described above with respect to FIG. 2. Applications running on application processor 30 generate multimedia data, e.g., AV data, video data, or audio data, for presentation to a user of host device 12 and/or wearable display device 16 or some other client device connected to host device 12. In some cases, multimedia processor 42 may process the same video data for display on both display 44 and an external display of wearable display device 16 or another client device. In other cases, multimedia processor 42 may process video data for display on only one of display 44 and an external display. 100811 To present the data on host device 12, multimedia processor 42 may perform some pre-processing, and display processor 88 performs display processing of the video data for presentation on display 44. In the case of audio data, multimedia processor 42 may again perform some pre-preprocessing, and an audio processor (not shown) may perform further audio processing for presentation on one or more speakers (not shown) of host device 12. To present the data on wearable display device 16 or some other client device connected to host device 12, multimedia processor 42 may perform some pre-processing, and wireless controller 36 and connection processor 38 then respectively package and transmit the processed data to the client device via Wi-Fi network 40. Connection processor 38 manages connections of host device 12 over Wi- Fi network 40. In other examples, connection processor 38 may manage a 3G or 4G modem connection, a global positioning system (GPS) connection, and/or a Bluetooth connection. 100821 in some cases, the data stored in external memory 80 may be received from an external storage device, such a flash drive, via a peripheral interface, e.g., a universal CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 22 serial bus (USB) interface or a secure digital (SD) card interface. Data stored in external memory 80 may also be received from storage or in real-time from a private network or a public network, e.g., the Internet, via connection processor 38. Application data manager 86 may move data for the applications from external memory 80 and local memory 82 for easier access by application processor 30. In addition, GPGPU 84 may perform any graphics processing for video game applications or other applications that require 3D representations. [00831 Host device 12 also includes battery monitoring system 90 that monitors a battery status of host device 12. Battery monitoring system 90 may store battery status information that reflects whether host device 12 is wall plugged or using its battery reserve, and if using the battery reserve, the level of remaining battery power. In some cases, the battery status information may be displayed to the user of host device 12, e.g., using a small battery icon, lights or sounds to indicate different battery conditions. 
Battery monitoring system 90 may update the battery status information almost continuously to reflect an. accurate battery Status to the user of host device 12. 100841 The components of host device 12 illustrated in FIG. 7 are merely exemplary. In other examples, host device 12 may include more, fewer, and/or different components. The components of host device 12 may be implemented as any of a variety of suitable circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. Display 44 in host device 12 may comprise one of a variety of display devices such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display screen. 100851 External memory 80 and local memory 82 in host device 12 may comprise any of a wide variety of volatile or non-volatile memory, including but not limited to random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, and the like. External memory 80 and local memory 82 may comprise computer- readable storage media for storing media data, as well as other kinds of data. External memory 80 and local memory 82 additionally store instructions and program code that are executed by application processor 30 and/or multimedia processor 42 as part of performing the techniques described in this disclosure. CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 23 100861 FIG 8 is a flowchart illustrating an example operation of determining a use status of a wearable display device (WDD) connected to a host device, and controlling processing at the host device and the wearable display device based on the use status. The example operation is described with respect to wearable display device 16 connected to host device 12 from FIGS. 1 and 2. 100871 Location sensing unit 20 of WDD 16 determines a use status of WDD 16 based on feedback received from one or more touch sensors 56 included in WDD 16 (100). Touch sensors 56 may be positioned on WDD 16 at locations that will be in contact or close proximity with the user when the user is wearing WDD 16 for use. In some examples, WDD 16 comprises a wireless head-mounted display (WHMD) device formed as glasses, as illustrated in FIG. 3, including at least one of the touch sensors, e.g., touch sensor 56C, located on a bridge of the glasses and at least two of the touch sensors, e.g., touch sensors 56A and 56B, located on temple arms of the glasses. 100881 Location sensing unit 20 may include an oscillator circuit that uses a combination of resistors and capacitors to generate an oscillation frequency. In this example, each of touch sensors 56 connected to WDD 16 adds capacitance to the oscillator circuit. When one or more of touch sensors 56 are in contact with a surface of the user's body (e.g., the user's head or face), the feedback from touch sensors 56 comprises a faster capacitance discharge rate through the additional capacitors, which results in the oscillator circuit generating a higher oscillation frequency than when touch sensors 56 are not touched. 
In this example, location sensing unit 20 determines the use status of WDD 16 by generating an oscillation frequency based on the feedback from touch sensors 56 and comparing the resulting oscillation frequency to a threshold frequency value to determine whether the user is wearing WDD 16 for use. 100891 When the oscillation frequency is greater than the threshold frequency value, location sensing unit 20 determines that WDD 16 is in use. On the contrary, when the oscillation frequency is less than or equal to the threshold frequency value, location sensing unit 20 determines that WDD 16 is not in use. Location sensing unit 20 will generate the highest oscillation frequency when all of touch sensors 56 are simultaneously in contact with a surface of the user's body, indicating that the user is wearing WDD 16 for use. The threshold frequency value, therefore, may be a preset value that is less than the highest oscillation frequency, but greater than an oscillation frequency generated when none of touch sensors 56 are in contact with a surface of the user's body. In some cases, the threshold frequency value may be preset such that CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 24 location sensing unit 20 only determines that WDD 16 is in use when all of touch sensors 56 are in contact with the user. In other cases, the threshold frequency value may be preset such that location sensing unit 20 determines that WDD 16 is in use when at least one of touch sensors 56 is in contact with the user. 100901 In some cases, location sensing unit 20 may continually determine the use status of WDD 16, and, either at fixed intervals or upon determining a change in the use status, send a direct processor interrupt request to controller 50 of WDD 16 indicating the use status of WDD 16. Controller 50, in turn, may send a virtual processor interrupt request to host device 12 indicating the use status of WDD 16. As one example, when location sensing unit 20 determines that WDD 16 is in use (YES branch of 102), controller 50 of WDD 16 sends an indication that WDD 16 is in use to host device 12 to enable data processing at host device 12 for display on WDD 16 (104). 100911 Controller 50 of WDD 16 also controls its own operation based on the use status of WDD 16. For example, when location sensing unit 20 determines that WDD 16 is in use (YES branch of 102), controller 50 of WDD 16 may establish a communication session, e.g., a peer-to-peer (P2P) wireless connection, with host device 12 (106). In addition, controller 50 of WDD 16 may activate display screens 54 of WDD 16 (108). Controller 50 of WDD 16 may also enable display processing by multimedia processor 52 of data received from host device 12 for display on WDD 16 (110). WDD 16 and host device 12 may continue operating in this full power state until location sensing unit 20 determines that WDD 16 is no longer in use by the user. 100921 As another example, when location sensing unit 20 determines that WDD 16 is not in use (NO branch of 102), controller 50 of WDD 16 sends an indication that WDD 16 is not in use to host device 12 to disable data processing at host device 12 for display on WDD 16 (112). Controller 50 of WDD 16 also controls its own operation based on the use status of WDD 16. For example, when location sensing unit 20 determines that WDD 16 is not in use (NO branch of 102), WDD 16 may enter a reduced power state. 
In this case, controller 50 of WDD 16 may disable display processing by multimedia processor 52 of data received from host device 12 for display on WDD 16 (114). In addition, controller 50 of WDD 16 may deactivate display screens 54 of WDD 16 (116). Controller 50 of WDD 16 may also dismantle a communication session, e.g., a peer-to- peer (P2P) wireless connection, with host device 12 (118). WDD 16 and host device 12 may continue operating in this reduced power state until location sensing unit 20 determines that WDD 16 is in use by the user. CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 100931 FIG 9 is a flowchart illustrating an example operation of receiving an indication of a use status of a wearable display device (WDD) at a host device, and controlling processing at the host device based on the indicated use status. The example operation is described with respect to host device 12 connected to wearable display device 16 from FIGS. 1 and 2. 100941 Host device 12 receives an indication of a use status of WDD 16 from controller 50 of WDD 16 (120). As described above with respect to FIG. 8, location sensing unit 20 of WDD 16 determines the use status of WDD 16 based on feedback received from one or more touch sensors 56 included in WDD 16 that indicate whether a user is wearing WDD 16 for use, and indicates the use status to controller 50 of WDD 16 using a direct processor interrupt request. In some cases, application processor 30 of host device 12 receives a virtual processor interrupt request from controller 50 of WDD 16 indicating the use status of WDD 16. Application processor 30 of host device 12 may receive the virtual processor interrupt requests indicating the use status of WDD 16 either at fixed intervals or upon a change in. the use status of WDD 16. 100951 Application processor 30 of host device 12 controls data processing at host device 12 for WDD 16 based on the use status of WDD 16. In some cases, application processor 30 may also control operation of a communication session, e.g., a peer-to-peer (P2P) wireless connection, with WDD 16 and data transmission to WDD 16 over the communication session based on the indicated use status of WDD 16. As one example, when host device 12 receives an indication that WDD 16 is in use (YES branch of 122), application processor 30 enables processing of data by multimedia processor 42 of host device 12 for display on WDD 16 (124). Host device 12 may continue operating in this full power state until application processor 30 of host device 12 receives an indication that WDD 16 is no longer in use by the user. 100961 As another example, when host device 12 receives an indication that WDD 16 is not in use (NO branch of 122), application processor 30 disables processing of data by multimedia processor 42 of host device 12 for display on WDD 16 (126). In addition, application processor 30 may generate a message for the user of host device 12 and WDD 16 that WDD 16 has entered a reduced power state (128). In some examples, the generated message may be presented to the user on display 44 of host device 12. In this way, the user is notified that the WDD 16 has not been in use for some preset time period, and is automatically entering the reduced power state. Host device 12 may CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 26 continue operating in this reduced power state until application processor 30 of host device 12 receives an indication that WDD 16 is in use by the user. 100971 FIG. 
10 is a flowchart illustrating an example operation of a location sensing unit included in a wireless head-mounted display (WHMD) device and related control mechanisms of the WHMD device. The example operation is described with respect to wearable display device 16 as WHMD 16 including location sensing unit 20 and controller 50 from FIG. 2. 00981 Beginning with a "wake-up" of WHMD 16 (140), location sensing unit 20 of WHMD 16 receives feedback from touch sensors 56 within WHMD 16 during a scan timer period. The wake-up mechanism may be the manual turning on of WHMD 16 by a user. The scan timer period may be a preset period of time during which location sensing unit 20 generates an oscillation frequency based on feedback from touch sensors 56. The scan timer period may allow the resulting oscillation frequency to stabilize before location sensing unit 20 makes a determination of the use status of WHMD 16. [00991 When the scan timer expires (YES branch of 142), location sensing unit 20 determines whether WHMD 16 is in location, i.e., worn by a user for use (144). When WHMD 16 is in location (YES branch of 144), location sensing unit 20 sends a direct processor interrupt request to controller 50 of WHMD 16 to wake-up controller 50 and the other components of WHMD 16. If WHMD 16 is not P2P-connected to host device 12 (NO branch of 150), controller 50 may initiate establishment of a communication session, e.g., a P2P-group, with host device 12 (152). Once WHMD 16 is P2P- connected to host device 12 (YES branch of 150), controller 50 sends a virtual interrupt request to host device 12 indicating that WHMD 16 is in use to enable data processing at host device 12 for WHMD 16. Controller 50 may then control operation of WHMD 16 in a full power state, as described above with respect to FIG. 8. WHMD 16 may operate in the full power state until location sensing unit 20 determines that WH MD 16 is no longer worn by the user for use (144). 101001 When WHMD 16 is not in location (NO branch of 144), location sensing unit 20 directs controller 50 to initiate a disconnect timer for WHMD 16 (146). The disconnect timer period may be a preset period of time during which controller 50 shuts down WHMD 16 prior to dismantling the P2P wireless connection with host device 12. For example, before the disconnect timer has expired (NO branch of 146), controller 50 may reduce or minimize display processing at WHMD 16 of AV data received from host device 12 until display screens 54 are deactivated and sound is muted (154). For CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 27 example, controller 50 may reduce the quality of service (QoS) of the data rendering for display. Controller 50 also sends a virtual interrupt request to host device 12 indicating that WHMD 16 is not in use to disable data processing at host device 12 for WHMD 16. After the disconnect timer has expired (YES branch of 146), controller 50 may initiate the dismantling of the P2P wireless connection, i.e., a P2P-connection power save mode, between WHMD 16 and host device 12 (148). 101011 Once controller 50 disables display processing of data at WHMD 16 (YES branch of 156) and dismantles the P2P wireless connection (148), WHMD 16 enters a reduced power state, as described above with respect to FIG. 8 until location sensing unit 20 determines that WHMD 16 is worn by the user for use. 
In addition, based on the indication from controller 50, host device 12 disables processing of data at host device 12 for WHMD 16 and generates user messages related to the reduced power state of WHMD 16 (158). WHMD 16 may operate in the reduced power state until location sensing unit 20 determines that WHMD 16 is worn by the user for use (144). 101021 In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. In some examples, computer- readable media may comprise non-transitory computer-readable media. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. 101031 By way of example, and not limitation, such computer-readable media can comprise non-transitory media such as RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair. digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and CA 02920708 2016-02-08 WO 2015/034617 PCT/US2014/049814 28 microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DvD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. 101041 The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements. 101051 The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). 
Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. 101061 Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims. |